- Prerequisites
- Enable the Dependency Proxy
- View the Dependency Proxy
- Use the Dependency Proxy for Docker images
- Clear the Dependency Proxy cache
Dependency Proxy
- Introduced in GitLab Premium 11.11.
- Moved to GitLab Core in GitLab 13.6.
- Your group must be public. Authentication for private groups is not supported yet.
- Docker Hub must be available. Follow this issue for progress on accessing images when Docker Hub is down.
Calendar widget
The Calendar widget
The Calendar widget is used on many forms pages for entry of an accurate date that is required in processing the specific form.
To Access the Calendar widget
To access the Calendar widget on any form, click the [calender_symbol Icon].
The Calendar widget screen opens displaying the following contents:
Calendar widget Controls
The Calendar widget box (screen) has the following controls to enable entry of any date into a form screen that displays the calender_symbol Icon:
- Top Center Panel -- Displays the currently selected Month and Year. Click and drag the top center panel to move the Calendar widget box.
- [?] the Calendar widget help button -- Click: [?] to display a help page
- [<<] -- Click to select the previous year
- [<] -- Click to select the previous month
- [Today] -- Click to select the current date, month, and year
- [>] -- Click to select the next month
- [>>] -- Click to select the next year
- [day_of_the_week] a column heading for each day of the week -- Click any day_of_the_week heading to begin the calendar display on that day.
- [date] a date shown on the calendar -- Click any visible date to select that date for entry into your form screen
- Bottom Panel -- Displays a prompt message when you mouse over any of the above control buttons on the Calendar widget box.
To use or to close the Calendar widget box:
- Select any date using the buttons described, clicking a visible date button to enter that date into your form page.
- Or, click the [X] button in the upper right hand corner of the box to close the widget box.
The Pretty Chic theme displays four Genesis Featured Posts widgets above the Secondary Navigation:
To set up the Pretty Chic Below Content Widget, go to
- Appearance
- Widgets
- Drag four “Genesis Featured Posts” widgets to the Below Content Widget
Set up each of the four widgets as shown in the screenshot below. Note that you can choose to display different categories if you like. If you prefer to display all categories, you can choose to offset each post, or select the box to “Exclude Previously Displayed Posts”.
Change the Language of Your Search Interface
The Coveo JavaScript Search Framework allows you to change the language of the strings in your search interface. This is otherwise known as localization.
To change the language of your search page, you normally only need to reference its appropriate culture file, which is generated automatically with the Coveo JavaScript Search Framework. You can find those files under the
js\cultures folder.
You can translate a search interface in the Salesforce Lightning framework (see Changing the Language of Your Search Interface in Lightning).
By default, the framework automatically includes the English locale, so you never need to explicitly reference the
en.js localization file in your HTML page.
However, if you need to display your search interface in some other language, you need to include the corresponding localization file in your HTML page.
Including the
fr.js script to use the French locale for strings in a search page:
<script src='js/CoveoJsSearch.js'></script> <script src='js/cultures/fr.js'></script>
Where to include the localization file
You must always include the localization file you want to use after the
CoveoJsSearch.js reference. Otherwise, it will have no effect and the framework will assume that you’re using the default locale (English) in your search page.
Built-in Languages
All strings in the Coveo JavaScript Search Framework have been translated in the following languages:
Getting the Complete Dictionary of Localized Strings
The
String.locales.en object contains all localized string key-value pairs for the default locale (English):
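For example, you can inspect this dictionary directly in the browser developer console (the key shown here, Forward, is one of the default strings; the count and values depend on your version of the framework):

console.log(Object.keys(String.locales.en).length); // total number of localized strings
console.log(String.locales.en["Forward"]);          // value of the "Forward" key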
If you include additional localization files after the
CoveoJsSearch.js reference in your HTML page, the corresponding locales also become available in the
String.locales object:
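For instance, after including js/cultures/fr.js, a sketch like the following shows the French dictionary alongside the English one (the exact values depend on your version of the culture files):

console.log(String.locales.fr["Forward"]); // French value of the "Forward" key
console.log(String.locales.en["Forward"]); // English value of the same key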
Getting the Localized Value of a Single String
If you know the name of a localized string key, you can get its current locale value using
Coveo.l("[StringKey]"):
You can achieve the exact same result using
[StringKey]".toLocaleString():
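For example, both of the following calls return the same localized caption for an existing key such as Forward (a minimal sketch; the returned value depends on the active locale):

var caption1 = Coveo.l("Forward");
var caption2 = "Forward".toLocaleString();
console.log(caption1 === caption2); // true — both look up the same dictionary entry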
Quickly Finding Certain Key-Value Pairs
You can use a function such as the one below to quickly identify all of a localized string dictionary entries whose key or value matches a certain regular expression:
/**
 * Looks in the `dictionary` for all key-value pairs matching the `regularExpression` and logs those in the
 * browser console.
 *
 * @param dictionary The dictionary to look for matching key-value pairs in (e.g., `String.locales.en`).
 * @param regularExpression The regular expression to match (e.g., `/^Search/i`).
 */
var findStringToMatch = function (dictionary, regularExpression) {
  var filteredValues = _.filter(_.pairs(dictionary), function (keyValuePair) {
    return keyValuePair[1].match(regularExpression);
  });
  _.each(filteredValues, function (filtered) {
    console.log('Key is: "' + filtered[0] + '"; value is: "' + filtered[1] + '".');
  });
};
Adding or Overriding Strings
You may want to add or override strings in the default Coveo localization files. You can do so by calling a JavaScript function and passing a dictionary of strings for a certain language. If you provide values for existing string keys, those values will override the default ones. If you provide new keys, those keys will be added to the dictionary.
How stable are the keys?
All Coveo JavaScript Search Framework localized strings keys are fairly stable. While it’s conceivable that some key names may change in future updates, such changes are rare, and are only made for good reasons (e.g., when a key no longer fits its intended value at all).
In conclusion, some string overrides may break over time, but as a rule, they should hold fast.
Changing one of the default strings and adding a new string in the English dictionary:
String.toLocaleString({
  "en": {
    "Forward": "Full Speed Ahead!", // Overrides an existing string.
    "MyCustomStringId": "Foo"       // Defines a new string.
  }
});
When to execute string override code
If you write code to add or override default string values, you should always make sure this code executes after the
CoveoJsSearch.js and localization file references in your HTML page.
Example
The following example shows how to translate a search interface to French. It also demonstrates how you could define and use some custom locale strings.
Include the French culture file after the
CoveoJsSearch.jsinclude.
This will automatically translate all default locale strings into French when you load the search page.
<script src='js/CoveoJsSearch.js'></script> <script src='js/cultures/fr.js'></script>
After the init function call, override an existing locale string (e.g.,
File Type).
When you load the search page, this string will replace the corresponding one in the French culture file.
Additionally, define a brand new locale string (e.g.,
VideoContent). You will use this string in the next step.
// Initializing the interface...
document.addEventListener('DOMContentLoaded', function () {
  Coveo.SearchEndpoint.configureSampleEndpointV2();
  Coveo.init(document.body);
})

// Adding and overriding locale strings
String.toLocaleString({
  "fr": {
    "File Type": "Type",            // Override string
    "VideoContent": "Contenu vidéo" // Create new string
  }
});
Add a component that uses the new locale string (
VideoContent).
When you load the search page, the caption of this component will be automatically translated to the newly defined French locale string.
<div class="coveo-tab-section">
  <!-- ... -->
  <a class="CoveoTab" data-</a>
</div>
If you load the search page, you should see that the search interface has been translated to French, and that the new and overridden locale strings are taken into account when rendering the corresponding captions.
A Step-By-Step Manufacturing Example -- A Configurable Product Run
Reference: Here is a good manufacturing process example for opentaps manufacturing users.
The following article is an interesting and useful example of how the manufacturing processes in opentaps can be used to make a virtual-type product with features. The article is available in the Apache OFBiz project documentation wiki.
Since the opentaps manufacturing section derives much of its code from the Apache OFBiz project, this example should be very useful for opentaps manufacturing users.
To see this example, for additional insight into how manufacturing functions can be used, refer to the item written by C. J. Horton, entitled "Beginner's Guide to the Manufacturing Process", and available at this link: Manufacturing Example Link on the Apache ofbiz wiki.
Cover Letter: A Sample English Cover Letter for College Students
No.12, North Renmin Road
Jishou,Hunan Province
416000,P.R.China
Tel: 0743-152****1631
May 3, 2014
Mr.Wheaton
Director of Human Resources
Cape May Import Export Co.
Shenzhen, Guangdong, China
Dear Sir,
In reference to your advertisement in the , please accept this cover letter as an application for the position of English language instruction coordinator. I love this job since I am an English major, and I believe that I am the right person you are looking for.
I am currently a senior Business English major in the College of International Exchange and Public Education of ** University. During my time in school, I studied my specialized field diligently and put huge enthusiasm and energy into it. Besides learning the textbook knowledge, I participated actively in all kinds of school activities and have gained a lot of experience. I have passed TEM4 and TEM8 and I have great confidence in my oral English. I am also familiar with Microsoft Office programs: Word and Excel. I am interested in teaching. This year, as a substitute teacher at my university, I taught oral English to freshman students for nearly two months. I also worked as an English teacher in a local middle school in my town for two months, and I designed and supervised an oral English training activity for freshman students. The training turned out to be highly effective. My teachers and friends say I take great responsibility in my work.
My classroom experience in English education and my experience with English training activities will be of great help in the job of English language instruction coordinator if, fortunately, I am employed by you.
If you desire an interview, I shall be most happy to be
called, on any day and at any time. Enclosed is my resume. If there is further information that you wish in the meantime, please reach me at the address given at the beginning of this letter.
Sincerely,
*****
Enclosure
Search with an exact date as a boundary
With a boundary such as from November 5 at 8 PM to November 12 at 8 PM, use the timeformat
%m/%d/%Y:%H:%M:%S.
earliest="11/5/2017:20:00:00" latest="11/12/2017:20:00:00"
Building Prototypes from Axure Cloud Artboard Projects
You can create interactive prototypes from Sketch artboards, Adobe XD artboards, and other image assets right in your web browser or in the Axure Cloud desktop app.
To get started, click Build Prototype at the top of an artboard project's Overview screen.
Adding Hotspots
- Click-and-drag on a screen to add a hotspot.
In the Add Interaction dialog that appears, select the trigger that you want to activate the hotspot:
- Click / Tap
- Double Click / Double Tap
- Swipe Right, Left, Up, or Down
In the Link To dropdown, choose where the hotspot should link to:
- Previous or Next Screen in the order of screens in the project
- Previous Screen in History (works like a web browser's Back button)
- External URL
- A specific screen in the project
Click Add Hotspot.
Note
To delete a hotspot, select it and click Delete at the bottom-left of the dialog that appears.
Click Preview at the top of the page to test out your new hotspot.
Hotspot Masters
When creating or editing a hotspot, you can check the box for Add hotspot in master to add it to a new or existing master. Hotspot masters are global groups of hotspots that you can add to multiple screens.
When you edit a master hotspot on one screen, its behavior is updated on every other screen to which you've added the master.
Note
By default, positioning for master hotspots is set relative to the top of the artboard. To instead have it set relative to the bottom of the artboard — for example, when creating footer links — check the box for Position relative to bottom.
To add a master to a screen, click the ellipsis menu at the top-right of the screen and check the box next to the master's name. Uncheck the box to remove the master from the screen.
To delete or duplicate a master, click the ellipsis menu at the top-right of any screen and click Manage Masters. This dialog will also show the number of screens to which each master has been added.
Fixed Headers and Footers
You can define fixed header and footer regions for screens by dragging the handles at the top and bottom of the screen. While you drag, a magnified region will appear to help you set the region's boundary with pixel-precision.
Fixed header and footer regions will remain in place when you scroll the rest of the screen.
Tip
A screen is scrollable when its height exceeds the device height defined in the artboard project's Project Settings.
Citrix Endpoint Management
Custom XML device policy
You can create custom XML policies in Endpoint Management to customize the following features on supported Windows, Zebra Android, and Android Enterprise devices:
- Provisioning, which includes configuring the device, and enabling or disabling features
- Device configuration, which includes allowing users to change settings and device parameters
- Software upgrades, which include providing new software or bug fixes to be loaded onto the device, including apps and system software
- Fault management, which includes receiving error and status reports from the device
For Windows devices: You create your custom XML configuration by using the Open Mobile Alliance Device Management (OMA DM) API in Windows. Creating custom XML with the OMA DM API is beyond the scope of this topic. For more information about using the OMA DM API, see OMA Device Management on the Microsoft Developer Network site.
Note:
For Windows 10 RS2 Phone: After a Custom XML policy or Restrictions policy that disables Internet Explorer deploys to the phone, the browser remains enabled. To work around this issue, restart the phone. This is a third-party issue.
For Zebra Android and Android Enterprise devices: You create your custom XML configuration by using the MX Management System (MXMS). Creating custom XML with the MXMS API is beyond the scope of this article. For more information about using MXMS, see the Zebra product documentation.
To add or configure this policy, go to Configure > Device Policies. For more information, see Device policies.
Windows Phone, Windows Desktop/Tablet, Zebra Android, and Android Enterprise settings
XML content: Type, or cut and paste, the custom XML code you want to add to the policy.
After you click Next, Endpoint Management checks the XML content syntax. Any syntax errors appear below the content box. Fix any errors before you continue.
If there are no syntax errors, the Custom XML Policy assignment page appears.
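As an illustration, a custom XML payload for Windows devices follows the OMA DM SyncML structure shown in the AutoPilot example later in this article. The sketch below assumes the Policy CSP node for allowing the camera (./Vendor/MSFT/Policy/Config/Camera/AllowCamera); verify the exact node and its values against the Microsoft Policy CSP reference before deploying.

<Add>
  <CmdID>_cmdid_</CmdID>
  <Item>
    <Target>
      <!-- Assumed Policy CSP node; confirm in the Microsoft Policy CSP documentation -->
      <LocURI>./Vendor/MSFT/Policy/Config/Camera/AllowCamera</LocURI>
    </Target>
    <Meta>
      <Format xmlns="syncml:metinf">int</Format>
    </Meta>
    <!-- 0 blocks the camera, 1 allows it (assumption) -->
    <Data>0</Data>
  </Item>
</Add>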
Use Windows AutoPilot to set up and configure devices
Windows AutoPilot is a collection of technologies used to set up and pre-configure new devices, getting them ready for productive use. You can use Windows AutoPilot to reset, repurpose, and recover devices. AutoPilot helps to remove some of the complexity of your current operating system deployment. Using AutoPilot reduces the task to a set of simple settings and operations that can get your devices ready to use quickly and efficiently.
Prerequisites
- Devices registered to the organization in the Microsoft Store for Business portal.
- Company branding configured in the Azure Active Directory portal.
- Company has an Azure Active Directory Premium P1 or P2 subscription.
- Configure Azure Active Directory as the IdP type for Endpoint Management. In the Endpoint Management console, go to Settings > Identity Provider (IDP).
- Network connectivity to cloud services used by Windows AutoPilot.
- Devices pre-installed with Windows 10 Professional, Enterprise or Education, version 1703 or later.
- Devices have access to the internet.
For more information about configuring prerequisites, see the Microsoft Windows documentation on AutoPilot:.
To configure Windows Automatic Redeployment in Endpoint Management for AutoPilot devices
Follow the steps to add a custom XML policy at Custom XML Device Policy. Add the following in XML Content:
<Add>
  <CmdID>_cmdid_</CmdID>
  <Item>
    <Target>
      <LocURI>./Vendor/MSFT/Policy/Config/CredentialProviders/DisableAutomaticReDeploymentCredentials</LocURI>
    </Target>
    <Meta>
      <Format xmlns="syncml:metinf">int</Format>
    </Meta>
    <Data>0</Data>
  </Item>
</Add>
On the Windows lock screen, type the keystroke CTRL + Windows key + R.
Log in with an Azure Active Directory account.
The device verifies that the user has rights to redeploy the device. The device then redeploys.
After the device updates with the AutoPilot configuration, the user can then log into the freshly configured device.
DNS Group Objects
About DNS Group Objects
Domain Name System (DNS) groups define a list of DNS servers and some associated attributes. DNS servers are needed to resolve fully-qualified domain names (FQDN), such as, to IP addresses. Prior to creating a DNS group object, you must configure a DNS server.
You can configure different DNS group for management and data interfaces.
To configure a DNS server in CDO, see Firepower Threat Defense Device Settings; to configure a DNS server in FDM, see Configuring DNS for Data and Management Interfaces in the Cisco Firepower Device Manager Configuration Guide, Version 6.4 or later.
Create a DNS Group Object
You must create a DNS group object in FDM. See Configuring DNS Groups for more information.
Like any other object, during the onboarding process, CDO reads into its database any DNS object groups that exist in FDM. Once they have been stored on CDO, they can be seen on the Objects page. See the Configuring DNS Groups chapter of the Cisco Firepower Device Manager Configuration Guide, Version 6.4 or later.
Installation Overview
This topic provides an overview of how to install and configure Pivotal Platform.
Pivotal Platform Installation Sequence
Pivotal Platform is a suite of products that runs on multiple IaaSes. Planning and installing Pivotal Platform means building layers from the bottom up, starting with the details of your IaaS and ending with “Day 2” configurations that you perform on a installed and running Pivotal Platform deployment.
The typical Pivotal Platform planning and installation process is:
Plan
Deploy BOSH and Ops Manager
- BOSH is an open-source tool that lets you run software systems in the cloud.
- BOSH and its IaaS-specific Cloud Provider Interfaces (CPIs) are what enable Pivotal Platform to run on multiple IaaSes.
- See Deploying with BOSH for a description of how BOSH deploys cloud software.
- Ops Manager is a graphical dashboard that deploys with BOSH. Ops Manager works with the BOSH Director to manage, configure, and upgrade Pivotal Platform products such as Pivotal Application Service (PAS), Enterprise Pivotal Container Service (Enterprise PKS), and Pivotal Platform services and partner products.
- Ops Manager represents Pivotal Platform products as tiles with multiple configuration panes that let you input or select configuration values needed for the product.
- Ops Manager generates BOSH manifests containing the user-supplied configuration values, and sends them to the BOSH Director.
- After you install Ops Manager and BOSH, you use Ops Manager to deploy almost all Pivotal Platform products.
- Deploying Ops Manager deploys both BOSH and Ops Manager with a single procedure.
- On AWS, you can deploy Ops Manager manually, or automatically with a Terraform template.
- On Azure, you can deploy Ops Manager manually, or automatically with a Terraform template. On Azure Government Cloud and Azure Germany, you can only deploy Ops Manager manually.
Deploy BOSH Add-ons (Optional)
- BOSH add-ons include the IPsec, ClamAV, and File Integrity Monitoring, which enhance Pivotal Platform platform security and security logging.
- You deploy these add-ons via BOSH rather than installing them with Ops Manager tiles.
Install Runtimes
- Pivotal Application Service (PAS) lets developers develop and manage cloud-native apps and software services.
- PAS is based on the Cloud Foundry Foundation’s open-source Application Runtime (formerly Elastic Runtime) project.
- Enterprise Pivotal Container Service (Enterprise PKS) uses BOSH to run and manage Kubernetes container clusters.
- Enterprise PKS is based on the Cloud Foundry Foundation’s open-source Container Runtime (formerly Kubo) project.
- Pivotal Isolation Segment lets a single PAS deployment run apps from separate, isolated pools of computing, routing, and logging resources.
- Operators replicate and configure an Pivotal Isolation Segment tile for each new resource pool they want to create.
- You must install PAS before you can install Pivotal Isolation Segment.
- Pivotal Application Service for Windows (PASW) enables PAS to manage Windows Server 2016 (1709) stemcells hosting .NET apps, and can also be replicated to create multiple isolated resource pools.
- Operators replicate and configure a PASW tile for each new resource pool they want to create.
- You must install PAS before you can install PASW.
- Small Footprint PAS is an alternative to PAS that uses far fewer VMs than PAS but has limitations.
Day 2 Configurations
- Day 2 configurations set up internal operations and external integrations on a running Pivotal Platform platform.
- Examples include front end configuration, user accounts, logging and monitoring, internal security, and container and stemcell images.
Install Services
- Install software services for Pivotal Platform developers to use in their apps.
- Services include the databases, caches, and message brokers that stateless cloud apps rely on to save information.
- Installing and managing software services on Pivotal Platform is an ongoing process, and is covered in the Pivotal Platform Operator Guide.
Deploying with BOSH
The following describes how you can use BOSH to run software in the cloud:
To use BOSH, you create a manifest
.ymlfile that specifies your software system’s component processes, the VMs they run on, how they communicate, and anything else they need.
The BOSH command-line interface (CLI) or API sends the manifest to the BOSH Director, BOSH’s executive process.
The BOSH Director provisions what it needs from the IaaS, deploys your software to run in the cloud, and heals automatically when VMs go down.
BOSH CLI and API commands let you control BOSH-managed processes and allocate or release IaaS resources.
- Configuring BOSH Director on OpenStack
- Using Your Own Load Balancer
- Pivotal Platform User Types
- Creating and Managing Ops Manager User Accounts
- Creating New PAS User Accounts
- Logging In to Apps Manager
- Adding Existing SAML or LDAP Users to a Pivotal Platform Deployment
- Deleting an AWS Installation from the Console
- Modifying Your Ops Manager Installation and Product Template Files
- Managing Errands in Ops Manager
Calendly
Note: Calendly.com is not affiliated with MuseThemes, and is used as a third-party service integration for our Calendly widget. The instructions provided below are subject to change, or appear different, if any changes are made on Calendly.com website.
On this page, you will see all of your events.
If you wish to display multiple events, copy the main account link:
NOTES ON DISPLAY MODES:
No commonly asked questions
No known issues or conflicts
You can enable FIPS Mode Compliance using Easy Installer during vRealize Suite Lifecycle Manager installation or by selecting the option as a Day-2 operation in the Settings page. To know more about FIPS Mode Compliance using Easy Installer, see vRealize Automation documentation.
Procedure
- From My Service dashboard, select Lifecycle Operations, and then select the Settings page.
- Under System Administration, click System Details.
- Enable or disable the FIPS Mode Compliance check box, as required. Click Update. vRealize Suite Lifecycle Manager restarts when you enable or disable FIPS Mode Compliance.Note: When you enable FIPS Mode Compliance, vRealize Suite Lifecycle Manager 8.2 does not upgrade to the next version. You must disable the FIPS Mode Compliance, and upgrade vRealize Suite Lifecycle Manager 8.2, and then re-enable FIPS Mode Compliance. | https://docs.vmware.com/en/VMware-vRealize-Suite-Lifecycle-Manager/8.2/com.vmware.vrsuite.lcm.8.2.doc/GUID-4C29A6BF-2570-47A7-8A8B-591AC4C8A5CD.html | 2020-11-24T04:30:42 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.vmware.com |
Rapid7 Universal DHCP
If Rapid7 does not support the logging format of your DHCP server, you can construct a valid UEF DHCP object from your DHCP logs using the fields shown in the examples below. Objects that violate the UEF will not be ingested by InsightIDR and will be unavailable for log search.
See for more information about DHCP protocols.
Example Format
NOTE - Case sensitivity
Be aware that InsightIDR regards these log lines as case sensitive.
You must send events to the InsightIDR collector in UTF-8 format, with each log line representing a single event and a newline delimiting each event. For example,
{"event_type":"DHCP_LEASE","version": “v1”,"time": "2018-06-07T18:18:31.1Z","client_hostname":"pc.acme.com","client_ip":"10.6.102.53","operation": "OBTAIN"}
Each event sent to InsightIDR must not contain newline characters.
Here are some examples of a Universal DHCP Event with readable formatting:
{
  "version": "v1",
  "event_type": "DHCP_LEASE",
  "time": "2018-06-07T18:18:31.1234+0300",
  "client_hostname": "pc.acme.com",
  "client_ip": "10.6.102.53",
  "operation": "OBTAIN"
}
Or:
{
  "version": "v1",
  "event_type": "DHCP_LEASE",
  "time": "2018-06-07T18:18:31.123Z",
  "client_hostname": "pc.acme.com",
  "client_ip": "10.6.102.53",
  "operation": "OBTAIN",
  "client_mac_address": "02:42:c9:a9:cd:b6",
  "custom_data": {
    "location": "Vancouver Office",
    "dhcp_server_number": 1
  }
}
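For reference, a minimal Node.js sketch that emits one such event as a single UTF-8 line might look like the following; the field values are taken from the examples above, and how you forward the line to the collector depends on your environment.

var event = {
  version: "v1",
  event_type: "DHCP_LEASE",
  time: new Date().toISOString(),
  client_hostname: "pc.acme.com",
  client_ip: "10.6.102.53",
  operation: "OBTAIN",
  client_mac_address: "02:42:c9:a9:cd:b6"
};
// JSON.stringify never emits literal newlines, so the event stays on one line;
// the trailing "\n" delimits it from the next event.
process.stdout.write(JSON.stringify(event) + "\n");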
DNS
Overview
InsightOps monitors a set of fields from the event source log.
- Timestamp
- Asset
- User
- Source Address
- Query
- Public Suffix
- Top Private Domain
How to Collect DNS Server Logs
Perform the following steps on the server side for the InsightOps collector to incorporate logs from the DNS:
- Create a destination folder on the hard drive where the logs reside.
- Share that folder with a read-only credential that is also entered in InsightOps.
- Enable logging onto the service and direct those logs to the newly created folder.
DNS
Rapid7 recommends that the folder for DNS logging resides on the root (C) drive of the server that hosts the DNS, for example, C:\dnslogs.
Begin by creating the log file folder and sharing it:
- Create a folder for the DNS logs.
C:\dnslogs is the recommended directory for storing DNS logs; this is the path entered in InsightOps when the DNS event source is set up.
- To enable logging onto the DNS server, right click the server’s name in the DNS Manager and select Properties from the drop-down menu.
- Click the Debug Logging tab, select Log packets for debugging, and enter the destination file name (in the shared directory that you previously created) in the File path and name field. The remaining check boxes can keep the default values.
On the InsightOps side, you can configure the DNS event source to read the shared folder via UNC notation and by providing the credential that was used when setting up the shared folder. UNC notation is Microsoft's Universal Naming Convention which is a common syntax used to describe the location of a network resource.
NOTE: Make sure the file path includes the filename for the tail file as in the sample image. Unlike DHCP, just providing the directory path for the log is not sufficient for the DNS file configuration.
Configuring Microsoft DNS
InsightOps can collect Microsoft DNS audit logs.
To prepare to collect the logs, you need the DNS log to be written into a folder that the collector can connect to as a network share.
Microsoft DNS Servers Logs
Microsoft DHCP and DNS servers use similar technology to produce audit logs. In both cases, when logging is enabled, the services log their activity to a configured location on the file system. In order to read those logs in InsightOps, that location must be shared so the collector can reach it over the network. Microsoft Domain Name Server (DNS) names resources that are connected to the Internet or a private network. It translates a domain name, for example, to its numerical Internet Protocol (IP) address, for example 172.16.254.1. InsightOps can ingest these logs for further context around outbound traffic and network activity. DNS adds visibility, along with firewall, Web proxy, and other outbound traffic-based event sources, so that InsightOps can identify cloud services used by your organization. DNS logs are also available for detailed review in investigations.
Troubleshooting Configuration Issues
My Microsoft DNS Log File has 0 Bytes
It appears in some cases, whenever a log file needs to roll over, the old file cannot be deleted because the collector has it open. There is an article that discusses the issue here.
A temporary workaround for this issue is to enable DNS log file rotation and then use nxlog or a similar tool to collect the DNS logs, and forward them to the InsightOps collector as syslog.
Step 1: Enable DNS Log File Rotation
- Follow the instructions from the above sections to configure the Microsoft DNS server to create a single DNS text/debug log.
- After the log is created, on the DNS server, open a PowerShell command prompt as Administrator.
- Run the command:
Set-DNSServerDiagnostics -EnableLogFileRollover $true
- You can then verify that the DNS logging settings are correct by using the command:
Get-DnsServerDiagnostics
What you expect to see is that the original dns.log in the same place it was created, but there is a new DNS log file with a timestamp inserted into the name.
The final configuration should look similar to this:
Step 2: Install and Configure Nxlog
- Follow the instructions from nxlog to install and use it to forward the DNS logs that you created above to the InsightOps collector.
Step 3: Set up the Event Source
- Configure the Microsoft DNS event source so that it is listening for syslog from the nxlog service.
Step 4: Enable Log File Deletion (Optional)
- You may wish to also enable the deletion of the old DNS logs so that they do not fill up the hard drive of the DNS server.
- Use the following command:
Get-ChildItem C:\locallogs\dnslogs | where LastWriteTime -lt ((Get-Date).AddDays(-2)) | Remove-Item -WhatIf
Other Errors
It has been at least 120 minutes since the last event.
The DNS event source sometimes can stop working and produce the above error.
However, the error is false because the dns log has not stopped logging. The log file can be opened from the collector, so there is no apparent reason for the error. A review of the collector.log may show the following error:
Read already in progress, consider increasing scan interval
Solution To fix this error and to allow the collector to read the file again, check the collector.log. Start at the bottom of the log and search upwards for the DNS server name, look for the following line:
FILE READ: smb://DNSServerNameHere/ShareName/dnsdebug.log [176748106 -> 176837156, Bytes Read: 89050]
If the file contains errors, an indication that the log is not being read, or the "read already in progress" messages, complete the following in order:
- Verify that an antivirus software has not locked the file. The folder where the log is located should be excluded from being scanning by AV software.
- When configuring debug logging on the DNS server, there is an option to configure a large file size before it can "roll over." If the file must becomes very large before it rolls over, decrease the log file size.
- Reboot the collector/restart the Rapid7 Collector service.
- Restart the DNS Server service.
- Reboot the DNS server.
- Delete the event source and recreate it.
Once the log is readable to the collector, you do not need to complete any additional steps. If the error persists, please contact Rapid7 for Support.
How to configure pywws to post messages to Twitter¶
Install dependencies¶
Posting to Twitter requires some extra software. See Dependencies - Twitter updates.
Create a Twitter account¶
You could post weather updates to your ‘normal’ Twitter account, but I think it’s better to have a separate account just for weather reports. This could be useful to someone who lives in your area, but doesn’t want to know what you had for breakfast.
Add location data (optional)¶
Edit your
weather.ini file and add
latitude and
longitude entries to the
[twitter] section.
For example:
[twitter]
secret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
latitude = 51.501
longitude = -0.142
Create a template¶
Twitter messages are generated using a template, just like creating files to upload to a website. Copy the example template ‘tweet.txt’ to your template directory, then test it:
python -m pywws.Template ~/weather/data ~/weather/templates/tweet.txt tweet.txt
cat tweet.txt
(Replace
~/weather/data and
~/weather/templates with your data and template directories.)
If you need to change the template (e.g. to change the units or language used) you can edit it now or later.
Post your first weather Tweet¶
Now everything is prepared for
ToTwitter to be run:
python -m pywws.ToTwitter ~/weather/data tweet.txt
If this works, your new Twitter account will have posted its first weather report. (You should delete the tweet.txt file now.)
Add Twitter updates to your hourly tasks¶
Edit your
weather.ini file and edit the
[hourly] section.
For example:
[hourly]
services = []
plot = ['7days.png.xml', '24hrs.png.xml', 'rose_12hrs.png.xml']
text = [('tweet.txt', 'T'), '24hrs.txt', '6hrs.txt', '7days.txt']
Note the use of the
'T' flag – this tells pywws to tweet the template result instead of uploading it to your ftp site.
You could change the
[logged],
[12 hourly] or
[daily] sections instead, but I think
[hourly] is most appropriate for Twitter updates.
Changed in version 13.06_r1015: added the
'T' flag.
Previously Twitter templates were listed separately in
[hourly] and other sections.
The older syntax still works, but is deprecated.
Include an image in your tweet¶
New in version 14.05.dev1216.
You can add up to four images to your tweets by specifying the image file locations in the tweet template.
Make the first line of the tweet
media path where
path is the absolute location of the file.
Repeat for any additional image files.
The “tweet_media.txt” example template shows how to do this.
The image could be from a web cam, or for a weather forecast it could be an icon representing the forecast.
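For instance, a minimal template along these lines would attach a single image; the file path here is hypothetical, and the remaining lines are the ordinary tweet text you would otherwise put in tweet.txt:

media /home/user/weather/results/24hrs.png
Hourly weather report, with the last 24 hours plotted in the attached graph.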
To add a weather graph you need to make sure the graph is drawn before the tweet is sent.
I do this by using two
[cron xxx] sections in weather.ini:
[cron prehourly]
format = 59 * * * *
services = []
plot = [('tweet.png.xml', 'L')]
text = []

[cron hourly]
format = 0 * * * *
services = []
plot = ['7days.png.xml', '24hrs.png.xml', 'rose_12hrs.png.xml']
text = [('tweet_media.txt', 'T'), '24hrs.txt', '6hrs.txt', '7days.txt']
Comments or questions? Please subscribe to the pywws mailing list and let us know.
Microsoft Translator Text API
Language customization
An extension of the core Microsoft Translator service, the Microsoft Translator Hub, can be used in conjunction with the Translator Text API to help you customize the translation system and improve the translation for your specific terminology and style.
With the Microsoft Translator Hub, you can tune translations to your own terminology and style. The Hub customizes the statistical translation systems.
Learn more about language customization
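To give a sense of how a customized system is used with the Translator Text API, the sketch below passes a Hub category ID along with a translation request. The endpoint, header name, and category parameter reflect the v3 REST API and should be verified against the current reference; the key and category values are placeholders.

var subscriptionKey = "<your-subscription-key>"; // placeholder
var categoryId = "<your-hub-category-id>";       // placeholder
fetch("https://api.cognitive.microsofttranslator.com/translate" +
      "?api-version=3.0&from=en&to=fr&category=" + categoryId, {
  method: "POST",
  headers: {
    "Ocp-Apim-Subscription-Key": subscriptionKey,
    "Content-Type": "application/json"
  },
  body: JSON.stringify([{ Text: "Hello, world" }])
}).then(function (response) { return response.json(); })
  .then(function (result) { console.log(result); });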
Microsoft Translator Neural Machine Translation (NMT)
Microsoft Translator has used statistical machine translation (SMT) technology to provide translations and is now moving to neural machine translation (NMT). The change in underlying technology is not visible to end users. The only noticeable differences are:
- The improved translation quality, especially for languages such as Chinese, Japanese, and Arabic. View supported languages on Microsoft.com.
- The incompatibility with the existing Hub customization features
Learn more about how NMT works
Configuration for Census Data
Census data provides information about the population for a whole country or parts of the USA. Users can also plot their own shape files on the map which contain census data.
To plot the census data, users can choose the Shape Files option in the Overlay feature. Please refer to the 'Overlay' section.
Related Posts:
Understand your Market Demographics Geographically with Census Data within Dynamics 365 CRM / PowerApps
Expand your business reach by configuring and analyzing Census data within your Dynamics 365 CRM
Census Data visualization within Dynamics 365 CRM: Make your campaigns successful by executing at the right location
Restoring a Snapshot copy from a different host
Use the
snapdrive snap restore command to restore a Snapshot copy from a different host.
Usually, you can restore a Snapshot copy from the host where you took the Snapshot copy. Occasionally, you might need to restore a Snapshot copy using a different or non-originating host. To restore a Snapshot copy using a non-originating host, use the same
snapdrive snap restore command that you would normally use. If the Snapshot copy you restore contains NFS entities, the non-originating host must have permission to access the NFS directory.
Setup¶
First, verify that the QAT device is visible to the operating system, for example with lspci. The exact description varies by hardware, and the device may appear with a slightly different string, such as:
0000:01:00.0 Co-processor: Intel Corporation Atom Processor C3000 Series QuickAssist Technology (rev 11)
Using the dataplane shell command, verify that VPP can see the crypto device, and that it is being used to handle cryptographic operations:
tnsr# dataplane shell sudo vppctl show crypto engines
[...]
dpdk_cryptodev 100 DPDK Cryptodev Engine

tnsr# dataplane shell sudo vppctl show crypto async handlers
Algo Type Handler
aes-128-gcm-aad8 async-encrypt sw_scheduler dpdk_cryptodev*
                 async-decrypt sw_scheduler dpdk_cryptodev*
[...]
The output of those commands may vary slightly depending on hardware and TNSR
version. In both commands, look for the presence of
dpdk_cryptodev.
Troubleshooting¶
If the QAT device does not appear in the
show crypto async handlers output,:
$ sudo lspci | grep -i 'co-processor'
Note
Some platforms expand the QAT acronym to QuickAssist Technology. If lspci does not recognize the specific chipset, the list may include a device ID such as 19e3, 18ef, or 37c9 instead of the string QAT Virtual Function. Refer to the list of QAT device IDs from DPDK to see if one matches.

Other devices may also be present in that output in addition to the QAT VF ID, which matches the QAT VF ID configured for dpdk in /etc/vpp/startup.conf.
Note
As with lspci, not every QAT VF device is recognized by name, so match up the devices by PCI ID. Additionally, some PF devices will not show igb_uio but the device-appropriate QAT driver instead.
For example, the following QAT PF and VF devices are present on a properly working C3XXX system with QAT:
0000:01:00.0 0 8086:19e2 5.0 GT/s x16 c3xxx
0000:01:01.0 0 8086:19e3 unknown igb_uio
0000:01:01.1 0 8086:19e3 unknown c3xxxvf
If any of those tests do not provide the expected output, then reboot the system and check again. Ensure the TNSR services and VPP are running, and then check the VPP QAT status again.
$ sudo vppctl show crypto engines
$ sudo vppctl show crypto async handlers
If there is still no
dpdk_cryptodev shown in the output of either command,
verify the PCI ID for the crypto device specified in TNSR is accurate. It must
be the first PCI ID displayed by
sudo lspci | grep -i 'co-processor'. Then
verify the PCI ID of the next listing in that output (first VF device) is
specified in
/etc/vpp/startup.conf properly and also the same PCI ID seen by
VPP when running:
$ sudo vppctl show pci all
Changes for June 2016 release 2¶
List of the key changes made for the minor release updating the June 2016 ome-xml data model. These changes were introduced with the release of Bio-Formats 5.2.3 in October 2016.
The new minor release of the schema has the same namespace (2016-06).
The version number of the ome.xsd schema is now 2.
Migrating from the OpenNMS Data Source plugin

Overview

This guide helps migrate an existing installation of Grafana from using the OpenNMS Data Source plugin to using HELM to interface with OpenNMS. The tutorial assumes that you have: an instance of Grafana with the OpenNMS Data Source installed, and a data source configured with type OpenNMS, with one or more dashboards using the data source.

Installation

Both the OpenNMS Data Source plugin and HELM can be installed at the same time while you complete the migration. If you have not already done so, you may install HELM using your preferred method. See Installation for detailed instructions.

Application Setup

Once HELM is installed, you need to enable the application in Grafana to make the provided panels and data sources available:
1. Navigate to the home page of your Grafana instance.
2. In the top-left corner of the page, click on the Grafana icon, and then click Plugins.
3. Navigate to the Apps tab, and click on the OpenNMS Helm application. If the OpenNMS Helm application is not listed on the Apps tab, try restarting the Grafana server. If the issue persists, make sure the application is installed correctly.
4. Enable the application by clicking on the Enable button. If you see a Disable button, then the application is already enabled and you can skip to the next step.

Migrating Data Sources

Once the HELM application is enabled, you can convert your existing OpenNMS data sources to use the OpenNMS Performance type. When switching the type, you may need to re-enter the URL and authentication details. HELM provides two data source types; the OpenNMS Performance type is equivalent to the previous OpenNMS data source.

Verify Dashboards

Once the existing data sources have been converted to use the new OpenNMS Performance type, you should verify your existing dashboards to make sure they continue to render properly. If you encounter any errors when switching, you can revert to the previous data source type.

Plugin Removal

Once you have verified that your dashboards continue to work with the new data source, you can remove the previous plugin. Use the grafana-cli tool to remove the OpenNMS Data Source plugin from the command line:

sudo grafana-cli plugins remove opennms-datasource

Restart Grafana for the plugin to be completely unregistered:

sudo service grafana-server restart
Scalar DL v1 design document
Introduction
Scalar DL is a blockchain-inspired distributed ledger. This design document briefly explains the background, design and implementation of Scalar DL.
Background and Objectives
Distributed ledgers or blockchains have been attracting a lot of attention recently, especially in the areas of financial and legal applications. They have gained acceptance due to their tamper-evidence and decentralized control properties. However, the existing platforms do not necessarily handle properties such as finality and scalability well, which are particularly important for mission-critical applications. HyperLedger fabric [1] applies blockchain to a private network owned by the fixed number of organizations so that ledger states are always finalized unless malicious attacks happen. But its architecture inherently focuses on realtime tamper-evidence over scalability due to its endorsement mechanism so that its performance does not necessarily scale as the number of peers increases. Also, because it is designed toward a general distributed ledger, its complexity is becoming quite high and developers and administrators may have a lot of difficulties using it properly. Scalar DL is a simple and practical solution to solve such issues in an essentially different approach.
Design Goals
The primary design goals of Scalar DL are to achieve both high tamper-evidence of data and high scalability of performance. We have also taken great care to provide ACID-compliance, exact finality, linearizable consistency, and high availability. The performance of Scalar DL is highly dependent on the underlying database performance, but it can be modified without much effort by replacing the underlying database with one that is suitable for the user's needs because of its loosely-coupled architecture. Ease of use and simplicity are also part of our main design goals since they are the keys to making Scalar DL scalable.
Fault Model
The assumed fault model behind Scalar DL is byzantine fault [2]. However, with some configurations, it only assumes weak (limited) byzantine fault; that is, the database component assumes byzantine fault but the ledger component assumes only crash fault.
Data Model
Scalar DL abstracts data as a set of assets. An asset can be arbitrary data but is more compatible to being viewed as a historical series of data. For example, assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property).
An asset is composed of one or more asset records where each asset record is identified by an asset ID and an age. An asset record with age M has a cryptographic hash of the previous asset record with age M-1, forming a hash-chain, so that removing or updating an intermediate asset record may be detected by traversing the chain.
There is also a chain structure in between multiple assets. This chain is a relationship constructed by business/application logic. For example in a banking application, payment in between multiple accounts would update the both accounts, which will create such a relationship between assets. In Scalar DL, business logic is digitally signed and tamper evident, and the initial state of an asset is the empty state, which is also regarded as tamper-evident, so that we can deduce the intermediate asset state is also tamper evident as shown below.
Sn = F (Sn-1) Si: the state of a set of asset at age i F: the signed business logic
Thus, assets in Scalar DL can be seen as a DAG of dependencies.
Smart Contract
Scalar DL defines a digitally signed business logic as a
Smart Contract, which only a user with access to the signer's private key can execute.
This makes the system easier to detect tampering because the signature can be made only by the owners of private keys.
Implementation
High-level Architecture
WIP: software stack
Scalar DL is composed of 3 layers.
The bottom layer is called
Ledger. It mainly executes contracts and manages assets. It uses Scalar DB as a data and transaction manager, but also abstracts such management so that the implementation can be replaced with other database implementations.
The middle layer is called
Ordering. It orders contract execution requests in a deterministic way, so that multiple independent organizations will receive requests in the same order. It is similar to HyperLedger fabric's
Orderer, but it differs in the way it does the processing.
The top layer is called
Client SDK. It is a client-facing library composed of a set of Java programs to interact with either
Ordering, or
Ledger.
The basic steps of contract execution is as follows: 1. client programs interacting with the Client SDK request one or more execution of contracts to Ordering 2. Ordering orders the requests and pass them to Ledger 3. Ledger executes the requests in the order given from the Ordering
Smart Contract as a Distributed Transaction
Scalar DL executes a contract as a distributed transaction of the underlining database system (Scalar DB at the moment). More specifically, a contract (or a set of contracts invoked in one execution request) is composed of multiple reads and writes from/to assets, and those reads and writes are treated as single distributed transaction, so that they are atomically executed, consistently and durably written, and isolated from other contract executions.
A Smart Contract in Scalar DL is a java program which extends the base class
Contract.
Determinism management by Ordering
Scalar DL pre-orders contract execution requests before execution so that multiple independent organizations receive the requests in the same order and can make their states the same as others' without having to interact with each other. It uses Kafka as the ordering manager because of its reliability and performance. Ordering component only assumes crash fault, but it just orders and relays signed requests so that there is nothing it can do except for removing a request, which can be detectable by the requester.
Key Features
Client-side Proof
This is for tamper-evidence enhancement. (omitted because it is patent-pending)
Decoupled Consensus
This is for scalable contract execution. (omitted because it is patent-pending)
Partial-order-aware Execution
This is for scalable contract execution. (omitted because it is patent-pending)
Future Work
WIP
References
- [1], Hyperledger fabric: a distributed operating system for permissioned blockchains, Proceedings of the Thirteenth EuroSys Conference, April 23-26, 2018, Porto, Portugal.
- [2] Leslie Lamport, Robert Shostak, Marshall Pease, The Byzantine Generals Problem, ACM Transactions on Programming Languages and Systems (TOPLAS), v.4 n.3, p.382-401, July 1982. | https://scalardl.readthedocs.io/en/latest/design/ | 2021-10-16T02:36:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | scalardl.readthedocs.io |
Frequently Asked Questions (FAQ)
Origins
What is the purpose of. Regarding the points above:
-.
A much more expansive answer to this question is available in the article, Go at Google: Language Design in the Service of Software Engineering.
What is the status of the project?.
What's the origin of the mascot?.
What is the history of the project?, working with.
What are the guiding principles in the design?.
Usage
Is Google using Go internally?
Yes. There are now several Go programs deployed in
production inside Google. A public example is the server behind
golang.org..
Do Go programs link with C/C++ programs?
There are two Go compiler implementations,
gc.
Does Go support Google's protocol buffers?
A separate open source project provides the necessary compiler plugin and library. It is available at github.com/golang/protobuf/
Can I translate the Go home page into another language?
Absolutely. We encourage developers to make Go Language sites in their own languages. However, if you choose to add the Google logo or branding to your site (it does not appear on golang.org), you will need to abide by the guidelines at
Design.
What's up with Unicode identifiers?.
Why does Go not have feature X?.
Why does Go not have generic types?.
The topic remains open. For a look at several previous unsuccessful attempts to design a good generics solution for Go, see this proposal.
Why does Go not have exceptions?.
Why does Go not have assertions?.
Why build concurrency on the ideas of CSP?.
Why are map operations not defined to be atomic?.
Will you accept my language change?.
Types.
How do I get dynamic dispatch of methods?
The only way to have dynamically dispatched methods is through an interface. Methods on a struct or any other concrete type are always resolved statically.
Why is there no type inheritance?.
Why is
len a function and not a method?
We debated this issue but decided
implementing
len and friends as functions was fine in practice and
didn't complicate questions about the interface (in the Go type sense)
of basic types.
Why does Go not support overloading of methods and operators?.
Why doesn't Go have "implements" declarations?.
How can I guarantee my type satisfies an interface?).
Why doesn't type T satisfy the Equal interface? polymorphic typing, we expect there would be a way to express the idea of these examples and also have them be statically checked.
Can I convert a []T to an []interface{}? }
Can I convert []T1 to []T2 if T1 and T2 have the same underlying type?This last line of this code sample does not compile.
type T1 int type T2 int var t1 T1 var x = T2(t1) // OK var st1 []T1 var sx = ([]T2)(st1) // NOT OK
In Go, types are closely tied to methods, in that every named type has a (possibly empty) method set. The general rule is that you can change the name of the type being converted (and thus possibly change its method set) but you can't change the name (and method set) of elements of a composite type. Go requires you to be explicit about type conversions.
Why is my nil error value not equal to nil?
nil pointer of type
*int inside
an interface value, the inner type will be
*int regardless of the value of the pointer:
(
*int,
nil).
Such an interface value will therefore be non-
nil
even when the pointer inside is
nil.
This situation can be confusing, and.
Why are there no untagged unions, as in C?
Untagged unions would violate Go's memory safety guarantees.
Why does Go not have variant types?.
Why does Go not have covariant result types?
Covariant result types would mean that an interface like
type Copyable interface { Copy() interface{} }
would be satisfied by the method
func (v Value) Copy() Value
because
Value implements the empty interface.
In Go method types must match exactly, so
Value does not
implement
Copyable.
Go separates the notion of what a
type does—its methods—from the type's implementation.
If two methods return different types, they are not doing the same thing.
Programmers who want covariant result types are often trying to
express a type hierarchy through interfaces.
In Go it's more natural to have a clean separation between interface
and implementation.
Values
Why does Go not provide implicit numeric conversions?.
A blog post titled Constants explores this topic in more detail.
Why are maps built in?.
Why don't maps allow slices as keys?.
Why are maps, slices, and channels references while arrays are values?.
Writing Code.
Is there a Go programming style guide?.
How do I submit patches to the Go libraries?
The library sources are in the
src directory of the repository.
If you want to make a significant change, please discuss on the mailing list before embarking.
See the document Contributing to the Go project for more information about how to proceed.
Why does "go get" use HTTPS when cloning a repository?:
- Manually clone the repository in the expected package directory:
$ cd src/github.com/username $ git clone [email protected]:username/package.git
- Force
git pushto use the
SSHprotocol by appending these two lines to
~/.gitconfig:
[url "[email protected]:"] pushInsteadOf =
How should I manage package versions using "go get"?
"..
Pointers and Allocation
When are function parameters passed a later.
Note that this discussion is about the semantics of the operations. Actual implementations may apply optimizations to avoid copying as long as the optimizations do not change the semantics.
When should I use a pointer to an interface?.
Should I define methods on values or pointers?.
What's the difference between new and make?
In short:
new allocates memory,
make initializes
the slice, map, and channel types.
See the relevant section of Effective Go for more details. basic).
How do I know whether a variable is allocated on the heap or the stack?.
Concurrency
What operations are atomic? What about mutexes?.
Why doesn't my multi-goroutine program use multiple CPUs?.
Why does
may,.
Functions and Methods
Why do T and *T have different method sets? an example, if the
Write method of
bytes.Buffer
used a value receiver rather than a pointer,
this code:
var buf bytes.Buffer io.Copy(buf, os.Stdin)
would copy standard input into a copy of
buf,
not into
buf itself.
This is almost never the desired behavior.
What happens with closures running as goroutines? }() }
Control flow
Does Go have the
?: operator?
There is no ternary testing operation in Go. You may use the following to achieve the same result:
if expr { n = trueVal } else { n = falseVal }
Packages and Testing
How do I create a multifile package?.
How do I write a unit test?.
Where is my favorite helper function for testing?.
Why isn't X in the standard library?.
Implementation
What compiler technology is used to build the compilers?
Gccgo has a front end written in C++, with a recursive descent parser coupled to the
standard GCC back end.
Gc is written in Go with a recursive descent.
How is the run-time support implemented?.
Why is my trivial program such a large binary?
The linker in the
gc tool chain
creates statically-linked binaries by default..5 MB, but
that includes more powerful run-time support and type information.
Can I stop these complaints about my unused variable/import?,. .... }
Nowadays, most Go programmers use a tool, goimports, which automatically rewrites a Go source file to have the correct imports, eliminating the unused imports issue in practice. This program is easily connected to most editors to run automatically when a Go source file is written.
Performance
Why does Go perform badly on benchmark X?
One of Go's design goals is to approach the performance of C for comparable programs, yet on some benchmarks it does quite poorly, including several in golang.org/x/exp.
Changes from C
Why is the syntax so different from C?.
Why are declarations backwards?.
Why is there no pointer arithmetic?.
Why are there braces but no semicolons? And why can't I put the opening brace on the next line?.
Why do garbage collection? Won't it be too expensive?. Recent improvements, documented in this design document, have introduced bounded pause times and improved the parallelism. Future versions might attempt new approaches.. | http://docs.activestate.com/activego/1.8/doc/faq.html | 2019-01-16T05:34:00 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.activestate.com |
/etc.
/etc/sysconfig/networkis for global settings. Information for VPNs, mobile broadband and PPPoE connections is stored in
/etc/NetworkManager/system-connections/.-, issue a command as follows:
ifname
~]#The command accepts multiple file names. These commands require
nmcli con load /etc/sysconfig/network-scripts/ifcfg-
ifname
rootprivileges. For more information on user privileges and gaining privileges, see the Fedora 25 System Administrator's Guide and the
su(1)and
sudo(8)man pages.
nmcli dev disconnectFollowed by:
interface-name
nmcli con up
interface-name
ifupcommands are used. See Section 1.6, “NetworkManager and the Network Scripts” for an explanation of the network scripts.cfgfile exists,
ifuplooks for the
TYPEkey in that file to determine which type-specific script to call;
ifupcalls
ifup-wirelessor
ifup-ethor
ifup-XXXbased on
TYPE;
IP-related tasks like
DHCPor static setup.
will continue with their traditional behavior and call
ifupfor that
ifcfgfile.
ifcfgfile that has
ONBOOT=yesis expected to be started on system bootup, either by NetworkManager or by the initscripts. This ensures that some legacy network types which NetworkManager does not handle (such as ISDN or analog dialup modems) as well as any new application not yet supported by NetworkManager are still correctly started by the initscripts even though NetworkManager is unable to handle them.
ifcfgfiles in the same location as the live ones. The script literally does
ifcfg-*with an exclude only for these extensions:
.old,
.orig,
.rpmnew,
.rpmorig, and
.rpmsave. The best way is not to store backup files anywhere within the
/etc/directory. | https://docs.fedoraproject.org/en-US/Fedora/25/html/Networking_Guide/sec-Network_Configuration_Using_sysconfig_Files.html | 2019-01-16T06:21:01 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.fedoraproject.org |
Magento for B2B Commerce, 2.2.x
Updating Custom Pricing.
To update the custom pricing:
- In the progress indicator at the top of the page, click Pricing.
- In the upper-right corner, tap Next.
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/catalog/catalog-shared-custom-pricing-update.html | 2019-01-16T06:42:17 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.magento.com |
Package dogstatsd
Overview ▹
Overview ¶
Counter is a DogStatsD counter. Observations are forwarded to a Dogstatsd object, and aggregated (summed) per timeseries.
type Counter struct { // contains filtered or unexported fields }
func (*Counter) Add ¶
func (c *Counter) Add(delta float64)
Add implements metrics.Counter.
func (*Counter) With ¶
func (c *Counter) With(labelValues ...string) metrics.Counter
With implements metrics.Counter.
type Dogstatsd ¶.
type Dogstatsd struct { // contains filtered or unexported fields }
func New ¶
func New(prefix string, logger log.Logger) *Dogstatsd
New returns a Dogstatsd object that may be used to create metrics. Prefix is applied to all created metrics. Callers must ensure that regular calls to WriteTo are performed, either manually or with one of the helper methods.
func (*Dogstatsd) NewCounter ¶
func (d *Dogstatsd) NewCounter(name string, sampleRate float64) *Counter
NewCounter returns a counter, sending observations to this Dogstatsd object.
func (*Dogstatsd) NewGauge ¶
func (d *Dogstatsd) NewGauge(name string) *Gauge
NewGauge returns a gauge, sending observations to this Dogstatsd object.
func (*Dogstatsd) NewHistogram ¶
func (d *Dogstatsd) NewHistogram(name string, sampleRate float64) *Histogram
NewHistogram returns a histogram whose observations are of an unspecified unit, and are forwarded to this Dogstatsd object.
func (*Dogstatsd) NewTiming ¶
func (d *Dogstatsd) NewTiming(name string, sampleRate float64) *Timing
NewTiming returns a histogram whose observations are interpreted as millisecond durations, and are forwarded to this Dogstatsd object.
func (*Dogstatsd) SendLoop ¶
func (d *Dogstatsd) SendLoop(c <-chan time.Time, network, address string).
func (*Dogstatsd) WriteLoop ¶
func (d *Dogstatsd) WriteLoop(c <-chan time.Time, w io.Writer) (*Dogstatsd) WriteTo ¶
func (d *Dogstatsd) WriteTo(w io.Writer) (count int64, err error).
type Gauge ¶
Gauge is a DogStatsD gauge. Observations are forwarded to a Dogstatsd object, and aggregated (the last observation selected) per timeseries.
type Gauge struct { // contains filtered or unexported fields }
func (*Gauge) Add ¶
func (g *Gauge) Add(delta float64)
Add implements metrics.Gauge.
func (*Gauge) Set ¶
func (g *Gauge) Set(value float64)
Set implements metrics.Gauge.
func (*Gauge) With ¶
func (g *Gauge) With(labelValues ...string) metrics.Gauge
With implements metrics.Gauge.
type Histogram ¶
Histogram is a DogStatsD histrogram. Observations are forwarded to a Dogstatsd object, and collected (but not aggregated) per timeseries.
type Histogram struct { // contains filtered or unexported fields }
func (*Histogram) Observe ¶
func (h *Histogram) Observe(value float64)
Observe implements metrics.Histogram.
func (*Histogram) With ¶
func (h *Histogram) With(labelValues ...string) metrics.Histogram
With implements metrics.Histogram.
type Timing ¶
Timing is a DogStatsD timing, or metrics.Histogram. Observations are forwarded to a Dogstatsd object, and collected (but not aggregated) per timeseries.
type Timing struct { // contains filtered or unexported fields }
func (*Timing) Observe ¶
func (t *Timing) Observe(value float64)
Observe implements metrics.Histogram. Value is interpreted as milliseconds.
func (*Timing) With ¶
func (t *Timing) With(labelValues ...string) metrics.Histogram
With implements metrics.Timing. | http://docs.activestate.com/activego/1.8/pkg/github.com/go-kit/kit/metrics/dogstatsd/ | 2019-01-16T06:35:44 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.activestate.com |
...
Command stress
The stress utility is intended for catching of episodic episodic failures). | http://docs.activestate.com/activego/1.8/pkg/golang.org/x/tools/cmd/stress/ | 2019-01-16T05:35:37 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.activestate.com |
Layered Navigation
Layered navigationThe primary group of web page links that a customer uses to navigate around the website; the navigation links to the most important categories or pages on an online store. makes it easy to find products based on categoryA set of products that share particular characteristics or attributes., price range, or any other available attributeA characteristic or property of a product; anything that describes a product. Examples of product attributes include color, size, weight, and price.. Layered navigation usually appears in the left column of search results and category pages and sometimes on the home pageThe first home page a visitor sees when they access your website URL. Considered the most important page on your website according to search engine indexing.. The standard navigation includes a “Shop By” list of categories and price range. You can configure the display of layered navigation, including product count and price range.
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/catalog/navigation-layered.html | 2019-01-16T06:45:20 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.magento.com |
The OfficeScan server downloads the following components and deploys them to agents:
Update reminders and tips:
To allow the server to deploy the updated components to agents, enable automatic agent update. For details, see OfficeScan Agent Automatic Updates. If automatic agent update is disabled, the server downloads the updates but does not deploy them to the agents.
A pure IPv6 OfficeScan server cannot distribute updates directly to pure IPv4 agents. Similarly, a pure IPv4 OfficeScan server cannot distribute updates directly to pure IPv6 agents. A dual-stack proxy server that can convert IP addresses, such as DeleGate, is required to allow the OfficeScan server to distribute update to the agents.
Trend Micro releases pattern files regularly to keep agent protection current. Since pattern file updates are available regularly, OfficeScan uses a mechanism called "component duplication" that allows faster downloads of pattern files. See OfficeScan Server Component Duplication for more information.
If you use a proxy server to connect to the Internet, use the correct proxy settings to download updates successfully.
On the web console's Dashboard, add the Agent Updates widget to view the current versions of components and determine the number of agents with updated and outdated components. | http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/keeping-protection-u/product_short_name-s1.aspx | 2019-01-16T05:30:29 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.trendmicro.com |
When dealing with tautomers there are four primary tasks to be addressed. These include:
Each of these four tasks is covered in a section below. In every case, the tautomerization can be handled alone, but it is generally more useful to handle tautomerization in conjunction with ionization. Each function includes the potential for pKa normalization alongside the tautomer normalization or enumeration.
While the unique or canonical tautomer generated above is useful for storage in a database, they do not necessarily represent a low-energy tautomer suitable for visualization by modelers and chemists. In order to generate a structure suitable for visualization, we recommend you use the function OEGetReasonableProtomer. This function sets the molecule into a low-energy, neutral-pH, aqueous-phase tautomeric state that should be pleasing for visualization in a scientific setting.
In the course of molecular modeling, it is often desirable to generate a small ensemble of low-energy, neutralpH, aqueous-phase tautomers. The function OEGetReasonableTautomers returns such an ensemble. In order to generate low-energy tautomers reliably, the function works with a form of the molecule that has formal charges removed. By default, each tautomer’s ionization state is set to a neutral pH form, but ionization states are not enumerated.
The following depictions show some examples of tautomers that.
The OEGetUniqueProtomer function is used for canonicalizing the tautomeric forms of a small molecule. Canonicalization converts any of the tautomeric forms of a given molecule into a single unique representation and removes all formal charges that can be appropriately neutralized. This is useful for database registration where alternate representations of tautomeric compounds often leads to duplicate entries in a database.
It is important to remember that a time limit cannot be used to control a canonical process as it might lead to hardware dependent behavior. Thus the identification of a unique protomer can take significant time when the size of the largest contiguous tautomeric atoms approaches or exceeds 30 atoms.
The tautomer returned by OEGetUniqueProtomer as the “canonical” representation often is not the physiologically preferred form. If a representative form is necessary, please use the functions referred to in the Visualization section above. If an ensemble of biologically relevant tautomers are necessary, please see the functions in the Reasonable Tautomer Ensemble section above.
OEGetUniqueProtomer is not a conformer generation function and will not create coordinates for molecules that are read in with no coordinates. When used on molecules with three-dimensional coordinates, OEGetUniqueProtomer attempts to place hydrogens in a reasonable manner. However, OEGetUniqueProtomer.
The OEEnumerateTautomers function is the core algorithm used to implement all functions listed above. It is useful for enumerating the tautomeric forms of a small molecule. Using the parameters in OETautomerOptions a user can control the behavior of the OEEnumerateTautomers to yield the behavior for their particular application.
It is recommended that before passing a molecule to OEEnumerateTautomers that first any dative bonds are normalized to the hypervalent form using OEHypervalentNormalization and second, formal charges are removed using OERemoveFormalCharge. These two steps improve the tautomers that are returned.
Tautomer generation is a combinatorial process and the time and memory requirements can grow quite rapidly. There are two mechanisms to help a use control this growth. First, the number of tautomers generated and the number of tautomers returned can be controlled with the OETautomerOptions.SetMaxTautomersGenerated and OETautomerOptions.SetMaxTautomersToReturn respectively. Please be aware that if you require that few tautomers be generated, it is possible that no low energy tautomers will be generated. Second, one can limit the time the algorithm spends generating tautomers for each molecule using OETautomerOptions.SetMaxSearchTime.
OEEnumerateTautomers is not a conformer generation function and will not create coordinates for molecules that are read in with no coordinates. When used on molecules with three-dimensional coordinates, OEEnumerateTautomers attempts to place hydrogens in a reasonable manner. However, OEEnumerateTautomers. | https://docs.eyesopen.com/toolkits/csharp/quacpactk/tautomerstheory.html | 2019-01-16T07:06:06 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.eyesopen.com |
The web console is the central point for monitoring OfficeScan throughout the corporate network. The console comes with a set of default settings and values that you can configure based on your security requirements and specifications. The web console uses standard Internet technologies, such as JavaScript, CGI, HTML, and HTTPS.
Configure the timeout settings from the web console. See Web Console Settings.
Use the web console to do the following:
Manage agents installed on networked computers
Group agents into logical domains for simultaneous configuration and management
Set scan configurations and initiate manual scan on a single or multiple networked computers
Configure notifications about security risks on the network and view logs sent by agents
Configure outbreak criteria and notifications
Delegate web console administration tasks to other OfficeScan administrators by configuring roles and user accounts
Ensure that agents comply with security guidelines
The web console does not support Windows 8, 8.1, 10, or Windows Server 2012 in Windows UI mode. | http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/getting-started-with/the-web-console.aspx | 2019-01-16T05:46:10 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.trendmicro.com |
Table widget
What is a table widget?
The table widget displays data as a set of facts or figures systematically displayed in columns and rows.
Configuration options
The following table lists the configuration options of this widget:
Example
The widget below is a sample of the ISP response time (average) for the last day.
Hints
- Click Filter to look for specific values in the table columns.
- Use the arrows next to each column name to reverse the data order. | https://docs.devo.com/confluence/ndt/dashboards/working-with-dashboard-widgets/table-widget | 2019-01-16T06:32:39 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.devo.com |
Magento for B2B Commerce, 2.2.x
Adding Product Video
To add product video, you must first obtain an APIApplication Program Interface: A software interface that lets third-party applications read and write to a system using programming language constructs or statements. Key from your Google account, and enter it in the configuration of your store. Then, you can link to the video from the product.
In the next step, you will paste the key into your store’s configuration.
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/catalog/product-video.html | 2019-01-16T06:42:51 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.magento.com |
Assembly: GemStone.GemFire.Cache (in GemStone.GemFire.Cache.dll) Version: 8.1.0.0
public uint LastModifiedTime { get; }
public uint LastModifiedTime { get; }
Public ReadOnly Property LastModifiedTime As UInteger Get
Public ReadOnly Property LastModifiedTime As UInteger Get
public: property unsigned int LastModifiedTime { unsigned int get (); }
public: property unsigned int LastModifiedTime { unsigned int get (); }
Return Valuethe last modification time of the region or the entry; returns 0 if the entry is invalid or the modification time is uninitialized.. | http://data-docs-samples.cfapps.io/docs-gemfire/81/net_api/DotNetDocs/html/BAE721D9.htm | 2019-01-16T06:54:50 | CC-MAIN-2019-04 | 1547583656897.10 | [] | data-docs-samples.cfapps.io |
memoA document issued by the merchant to a customer to write off an outstanding balance because of overcharge, rebate, or return of goods...
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/sales/store-credit.html | 2019-01-16T06:47:08 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.magento.com |
Vertex Cloud
Vertex is a cloud-based solution that automates your sales and use tax compliance, and generates a signature-ready PDF for your monthly returns.
Automate Tax Calculations & Returns
Vertex saves time, reduces risk, and helps you file your tax returns on time.
<![CDATA[ ]]>
Sales & Use Tax
Vertex Cloud calculates sales tax in the shopping cart based on the tax profile of each product that is purchased, and the jurisdiction.
<![CDATA[ ]]>
Manage Exception Certificates
Vertex makes it easy to manage customers by jurisdiction who have non-standard tax requirements.
<![CDATA[ ]]>
Generate Returns
Vertex Cloud automatically regenerates signature-ready PDF returns and sends a message when the returns are available.
<![CDATA[ ]]>
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/tax/vertex.html | 2019-01-16T06:41:36 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['../Resources/Images/tile-vertex-calculations-returns_212x148.png',
None], dtype=object)
array(['../Resources/Images/tile-vertex-consumer-use-tax_212x148.png',
None], dtype=object)
array(['../Resources/Images/tile-vertex-exemption-certificates_212x148.png',
None], dtype=object)
array(['../Resources/Images/tile-vertex-prepare-returns_212x148.png',
None], dtype=object) ] | docs.magento.com |
Files with the RB0~RB9 extensions are backup copies of infected files. OfficeScan creates a backup of the infected file in case the virus/malware damaged the file during the cleaning process.
Solution: If OfficeScan successfully cleans the infected file, you do not need to keep the backup copy. If the endpoint functions normally, you can delete the backup file. | http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/glossary/uncleanable-files/files-infected-with-/backup-files.aspx | 2019-01-16T06:12:55 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.trendmicro.com |
Monitors and Rules
Updated: May 13, 2016
Applies To: System Center 2012 R2 Operations Manager, System Center 2012 - Operations Manager, System Center 2012 SP1 - Operations Manager
Monitors and rules are the primary elements for measuring health and detecting errors in Operations Manager and provide similar yet distinct functionality. Monitors set the state of an object while rules create alerts and collect data for analysis and reporting. Each monitor and rule is primarily defined by the source of the data that is used to perform its required functionality and the logic used to evaluate this data.
Although they provide different functionality, monitors and rules both use a common set of sources that provide the data to evaluate. For example, a monitor may use a performance counter to set the state of a particular object. A rule may access the same performance counter in order to store its value for analysis and reporting.
Monitors
A monitor measures the health of some aspect of a managed object. There are three kinds of monitors as shown in the following table:
Health State
Monitors each have either two or three health states. A monitor will be in one and only one of its potential states at any given time. When a monitor loaded by the agent, it is initialized to a healthy state. The state will change only if the specified conditions for another state are detected.
The overall health of a particular object is determined from the health of each of its monitors. This will be a combination of monitors targeted directly at the object, monitors target at objects rolling up to the object through a dependency monitor, dependency monitors targeted at those objects, and so on. This hierarchy is illustrated in the Health Explorer of the Operations console. The policy for how health is rolled up is part of the configuration of the aggregate and dependency monitors.
When you create a monitor, you must specify a condition for each of its health states. When one of those conditions is met, the monitor changes to that state. Each of the conditions must be unique such that only one can be true at a particular time. When a monitor changes to a Warning or Critical state, then it can optionally generate an alert. When it changes to a Healthy state, then any previously generated alert can optionally be automatically resolved.
Types of Monitors
Note
When the term monitor is alone, it typically refers to a unit monitor. Aggregate and dependency monitors will typically be referred to with their full name.
The following diagram shows an example of the Health Explorer for the Windows Server class. This shows the use of the different kinds of monitors contributing to an overall health state.
Sample Health Explorer
Rules
Rules do not affect the health state of the target object. They are used for one of three functions as described in the following table:
Should you create a monitor or a rule?
Unit monitors and rules in Operations Manager are similar. They are both workflows that run on an agent, they both can generate an alert when a particular condition is met, and they both use a similar set of data sources to detect these conditions. As a result, it can be difficult to determine if you want to create a monitor or rule for a particular scenario.
Use the following criteria to determine which one to create for different conditions.
Create a monitor if…
You want to affect the health of an object. In addition to generating an alert, a monitor will affect the health state of its target object. This is displayed in state views and availability reports.
You want to automatically resolve an alert when the error condition has been cleared. An alert from a rule cannot be automatically cleared since a rule has no way of detecting that the problem has been resolved. A monitor can detect that the problem has been resolved when the condition for its healthy state is met, and the alert can automatically be resolved.
You are creating an alert based on a performance threshold. There are no rules available to generate an alert from a performance threshold. A monitor should be used for this scenario anyway since you can use the condition where the performance counter is under the defined threshold.
You have a condition that requires more complex logic than is possible with rules. The Operations console provides a variety of options for setting the health state of a monitor but only simple detection for a rule. If you need more complex logic for a rule but don’t have a method to detect the monitor’s healthy state, then you can create a monitor using Manual or Timer reset. See Event Monitor Reset for more information.
Note
Using the adb418d7-95ab-4e33-8ced-34a934016aa3#VMPD you can create custom rules using the same logic available in the Operations console for monitors.
Create a Rule if…
You want to collect performance counters or events for analysis and reporting. Monitors only collect this information when it initiates a change in health state. If you want to collect the information you need to create a collection rule.
If you want to both collect a performance counter and set a threshold for it to set a health state, then create both a rule and a monitor using the same performance counter.
You want to generate an alert that is not related to health state of an object.
Monitors and Rules Topics
Monitors and rules are described in the following topics.
Describes the concept of a data source and lists the different kinds of data sources available for monitors and rules.
Describes how to create an expression for different kinds of monitors and rules.
Describes how to configure alerts created by monitors and rules.
Describes monitors and rules that use different kinds of events and provides details and procedures for creating them using wizards in the Operations console.
Performance Monitors and Rules
Describes monitors and rules that collect and monitor performance and provides details and procedures for creating them using wizards in the Operations console.
Script Monitors and Rules
Provides the details of how to write a monitoring script and how to create monitors and rules using scripts.
Describes monitors that allow the health of one kind of object to be dependent on the health of another object.
Describes monitors that consolidate the health of other monitors for a particular kind of object. | https://docs.microsoft.com/en-us/previous-versions/system-center/system-center-2012-R2/hh457603(v=sc.12) | 2019-01-16T06:31:13 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['images/hh457603.991549aa-3729-4c18-b85d-3c1e344d0f65%28sc.12%29.jpeg',
'Heath Explorer sample Heath Explorer sample'], dtype=object) ] | docs.microsoft.com |
The Description Property dialog box provides an editable area where you can write a detailed description of database objects such as tables, columns, and foreign key constraints. You can access this dialog box from the Properties window for objects such as tables and views when they are selected in a designer, from dialog boxes for objects such as indexes and check constraints, and from the Column Properties tab in Table Designer for table columns. The description is stored as an extended property for the object.
See Also
How to: Show Table Properties (Visual Database Tools) | https://docs.microsoft.com/en-us/sql/ssms/visual-db-tools/description-property-dialog-box-visual-database-tools | 2017-05-23T00:27:35 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.microsoft.com |
Applies To: Windows Server 2016
You can use this topic to learn about the tools that you can use to manage your NPS servers.
After you install NPS, you can administer NPS servers:
- Locally, by using the NPS Microsoft Management Console (MMC) snap-in, the static NPS console in Administrative Tools, Windows PowerShell commands, or the Network Shell (Netsh) commands for NPS.
- From a remote NPS server, by using the NPS MMC snap-in, the Netsh commands for NPS, the Windows PowerShell commands for NPS, or Remote Desktop Connection.
- From a remote workstation, by using Remote Desktop Connection in combination with other tools, such as the NPS MMC or Windows PowerShell.
Note
In Windows Server 2016, you can manage the local NPS server by using the NPS console. To manage both remote and local NPS servers, you must use the NPS MMC snap-in.
The following sections provide instructions on how to manage your local and remote NPS servers.
Configure the Local NPS Server by Using the NPS Console
After you have installed NPS, you can use this procedure to manage the local NPS server by using the NPS MMC.
Administrative Credentials
To complete this procedure, you must be a member of the Administrators group.
To configure the local NPS server by using the NPS console
In Server Manager, click - RADIUS server, RADIUS proxy, or both.
Manage Multiple NPS Servers by Using the NPS MMC Snap-in
You can use this procedure to manage the local NPS server and multiple remote NPS servers by using the NPS MMC snap-in.
Before performing the procedure below, you must install NPS on the local computer and on remote computers.
Depending on network conditions and the number of NPS servers you manage by using the NPS MMC snap-in, response of the MMC snap-in might be slow. In addition, NPS server configuration traffic is sent over the network during a remote administration session by using the NPS snap-in. Ensure that your network is physically secure and that malicious users do not have access to this network traffic.
Administrative Credentials
To complete this procedure, you must be a member of the Administrators group.
To manage multiple NPS servers by using the NPS snap-in
- To open the MMC, run Windows PowerShell as an Administrator. In Windows PowerShell, type mmc, and then press ENTER. The Microsoft Management Console opens.
- In the MMC, on the File menu, click Add/Remove Snap-in. The Add or Remove Snap-ins dialog box opens.
- In Add or Remove Snap-ins, in Available snap-ins, scroll down the list, click Network Policy Server, and then click Add. The Select Computer dialog box opens.
- In Select Computer, verify that Local computer (the computer on which this console is running) is selected, and then click OK. The snap-in for the local NPS server is added to the list in Selected snap-ins.
- In Add or Remove Snap-ins, in Available snap-ins, ensure that Network Policy Server is still selected, and then click Add. The Select Computer dialog box opens again.
- In Select Computer, click Another computer, and then type the IP address or fully qualified domain name (FQDN) of the remote NPS server that you want to manage by using the NPS snap-in. Optionally, you can click Browse to peruse the directory for the computer that you want to add. Click OK.
- Repeat steps 5 and 6 to add more NPS servers to the NPS snap-in. When you have added all the NPS servers you want to manage, click OK.
- To save the NPS snap-in for later use, click File, and then click Save. In the Save As dialog box, browse to the hard disk location where you want to save the file, type a name for your Microsoft Management Console (.msc) file, and then click Save.
Manage an NPS Server by Using Remote Desktop Connection
You can use this procedure to manage a remote NPS server by using Remote Desktop Connection.
By using Remote Desktop Connection, you can remotely manage your NPS servers running Windows Server 2016. You can also remotely manage NPS servers from a computer running Windows 10 or earlier Windows client operating systems.
You can use Remote Desktop connection to manage multiple NPS servers by using one of two methods.
- Create a Remote Desktop connection to each of your NPS servers individually.
- Use Remote Desktop to connect to one NPS server, and then use the NPS MMC on that server to manage other remote servers. For more information, see the previous section Manage Multiple NPS Servers by Using the NPS MMC Snap-in.
Administrative Credentials
To complete this procedure, you must be a member of the Administrators group on the NPS server.
To manage an NPS server by using Remote Desktop Connection
- On each NPS server that you want to manage remotely, in Server Manager, select Local Server. In the Server Manager details pane, view the Remote Desktop setting, and do one of the following.
- If the value of the Remote Desktop setting is Enabled, you do not need to perform some of the steps in this procedure. Skip down to Step 4 to start configuring Remote Desktop User permissions.
- If the Remote Desktop setting is Disabled, click the word Disabled. The System Properties dialog box opens on the Remote tab.
- In Remote Desktop, click Allow remote connections to this computer. The Remote Desktop Connection dialog box opens. Do one of the following.
- To customize the network connections that are allowed, click Windows Firewall with Advanced Security, and then configure the settings that you want to allow.
- To enable Remote Desktop Connection for all network connections on the computer, click OK.
- In System Properties, in Remote Desktop, decide whether to enable Allow connections only from computers running Remote Desktop with Network Level Authentication, and make your selection.
- Click Select Users. The Remote Desktop Users dialog box opens.
- In Remote Desktop Users, to grant permission to a user to connect remotely to the NPS server, click Add, and then type the user name for the user's account. Click OK.
- Repeat step 5 for each user for whom you want to grant remote access permission to the NPS server. When you're done adding users, click OK to close the Remote Desktop Users dialog box and OK again to close the System Properties dialog box.
- To connect to a remote NPS server that you have configured by using the previous steps, click Start, scroll down the alphabetical list and then click Windows Accessories, and click Remote Desktop Connection. The Remote Desktop Connection dialog box opens.
- In the Remote Desktop Connection dialog box, in Computer, type the NPS server name or IP address. If you prefer, click Options, configure additional connection options, and then click Save to save the connection for repeated use.
- Click Connect, and when prompted provide user account credentials for an account that has permissions to log on to and configure the NPS server.
Use Netsh NPS commands to manage an NPS Server
You can use commands in the Netsh NPS context to show and set the configuration of the authentication, authorization, accounting, and auditing database used both by NPS and the Remote Access service. Use commands in the Netsh NPS context to:
- Configure or reconfigure an NPS server, including all aspects of NPS that are also available for configuration by using the NPS console in the Windows interface.
- Export the configuration of one NPS server (the source server), including registry keys and the NPS configuration store, as a Netsh script.
- Import the configuration to another NPS server by using a Netsh script and the exported configuration file from the source NPS server.
You can run these commands from the Windows Server 2016 Command Prompt or from Windows PowerShell. You can also run netsh nps commands in scripts and batch files.
Administrative Credentials
To perform this procedure, you must be a member of the Administrators group on the local computer.
To enter the Netsh NPS context on an NPS server
- Open Command Prompt or Windows PowerShell.
- Type netsh, and then press ENTER.
- Type nps, and then press ENTER.
- To view a list of available commands, type a question mark (?) and press ENTER.
For more information about Netsh NPS commands, see Netsh Commands for Network Policy Server in Windows Server 2008, or download the entire Netsh Technical Reference from TechNet Gallery. This download is the full Network Shell Technical Reference for Windows Server 2008 and Windows Server 2008 R2. The format is Windows Help (*.chm) in a zip file. These commands are still present in Windows Server 2016 and Windows 10, so you can use netsh in these environments, although using Windows PowerShell is recommended.
Use Windows PowerShell to manage NPS servers
You can use Windows PowerShell commands to manage NPS servers. For more information, see the following Windows PowerShell command reference topics.
- Network Policy Server (NPS) Cmdlets in Windows PowerShell. You can use these netsh commands in Windows Server 2012 R2 or later operating systems.
- NPS Module. You can use these netsh commands in Windows Server 2016.
For more information about NPS administration, see Manage Network Policy Server (NPS). | https://docs.microsoft.com/en-us/windows-server/networking/technologies/nps/nps-admintools | 2017-05-22T23:46:55 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.microsoft.com |
[…]
Category: Extensions
Google Maps […]
About Extensions
One of the most powerful features of Redux are extensions. With extensions you can override or customize any field type, or even extend Redux to do more than it was originally meant to do. With extensions we’ve built metaboxes, customizer support, and a slew of field types. Loading Extensions By using the Redux API, you can […] […]
Icon Select
The […]
Custom Fonts
The Custom Fonts extensions is for users to upload a .ttf, .woff, .otf, or .zip containing any of the afore mentioned fonts. It will then generate whatever fonts it may need, and make the font accessible via the typography field within Redux. A custom font CSS file will be enqueued to the panel as well […]
Repeater
The Redux Repeater extension easily allows developers to group like fields in a dynamic manner, or static number. Allowing values to be grouped (nested) under a single key, or under each individual key. All values will be returned as an array. Incompatible Fields Due to the complexities of this extension, the following Redux fields […] | https://docs.reduxframework.com/category/extensions/ | 2017-05-22T23:23:11 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.reduxframework.com |
By: Philipp Haller, Aleksandar Prokopec, Heather Miller, Viktor Klang, Roland Kuhn, and Vojin Jovanovic
Futures provide a way to reason about performing many operations
in parallel– in an efficient and non-blocking way.
A
Future
is a placeholder object for a value that may not yet exist.
Generally, the value of the Future is supplied concurrently and can subsequently be used.
Composing concurrent tasks in this way tends to result in faster, asynchronous, non-blocking parallel code.
By default, futures and promises are non-blocking, making use of
callbacks instead of typical blocking operations.
To simplify the use of callbacks both syntactically and conceptually,
Scala provides combinators such as
flatMap,
foreach, and
filter used to compose
futures in a non-blocking way.
Blocking is still possible - for cases where it is absolutely
necessary, futures can be blocked on (although this is discouraged).
A typical future looks like this:
val inverseFuture: Future[Matrix] = Future {
  fatMatrix.inverse() // non-blocking long lasting computation
}(executionContext)
Or with the more idiomatic:
implicit val ec: ExecutionContext = ...
val inverseFuture : Future[Matrix] = Future {
  fatMatrix.inverse()
} // ec is implicitly passed
Both code snippets delegate the execution of
fatMatrix.inverse() to an
ExecutionContext and embody the result of the computation in
inverseFuture.
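To give a small taste of the non-blocking combinators mentioned above, the eventual result can be transformed without ever blocking. The sketch below reuses the hypothetical inverseFuture from this example and assumes the hypothetical Matrix type has a trace() method returning a Double; the implicit ExecutionContext from the previous snippet is picked up by map:
// Runs only once the inversion has completed; no thread blocks waiting for it.
val traceFuture: Future[Double] = inverseFuture.map { inverse =>
  inverse.trace() // hypothetical method on the hypothetical Matrix type
}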
Future and Promises revolve around
ExecutionContexts, responsible for executing computations.
An
ExecutionContext is similar to an Executor:
it is free to execute computations in a new thread, in a pooled thread or in the current thread
(although executing the computation in the current thread is discouraged – more on that below).
The
scala.concurrent package comes out of the box with an
ExecutionContext implementation, a global static thread pool.
It is also possible to convert an
Executor into an
ExecutionContext.
Finally, users are free to extend the
ExecutionContext trait to implement their own execution contexts,
although this should only be done in rare cases.
ExecutionContext.global is an
ExecutionContext backed by a ForkJoinPool.
It should be sufficient for most situations but requires some care.
A
ForkJoinPool manages a limited number of threads (the maximum number of threads is referred to as the parallelism level).
The number of concurrently blocking computations can exceed the parallelism level
only if each blocking call is wrapped inside a
blocking call (more on that below).
Otherwise, there is a risk that the thread pool in the global execution context is starved,
and no computation can proceed.
By default, the
ExecutionContext.global sets the parallelism level of its underlying fork-join pool to the number of available processors
(Runtime.availableProcessors).
This configuration can be overridden by setting one (or more) of the following VM attributes:
- scala.concurrent.context.minThreads - defaults to Runtime.availableProcessors
- scala.concurrent.context.numThreads - can be a number or a multiplier (N) in the form 'xN'; defaults to Runtime.availableProcessors
- scala.concurrent.context.maxThreads - defaults to Runtime.availableProcessors
The parallelism level will be set to
numThreads as long as it remains within
[minThreads; maxThreads].
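These settings are read as ordinary JVM system properties, so the usual way to override them is with -D flags on the command line. As a minimal sketch (values purely illustrative), they can also be set programmatically, provided this happens before ExecutionContext.global is first used:
// Must run before anything touches the global execution context.
System.setProperty("scala.concurrent.context.minThreads", "4")
System.setProperty("scala.concurrent.context.numThreads", "x2") // 'xN' form: N times the available processors
System.setProperty("scala.concurrent.context.maxThreads", "16")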
As stated above, the
ForkJoinPool can increase the number of threads beyond its
parallelismLevel in the presence of blocking computation.
As explained in the
ForkJoinPool API, this is only possible if the pool is explicitly notified:
import scala.concurrent.Future
import scala.concurrent.forkjoin._

// the following is equivalent to `implicit val ec = ExecutionContext.global`
import ExecutionContext.Implicits.global

Future {
  ForkJoinPool.managedBlock(
    new ManagedBlocker {
      var done = false

      def block(): Boolean = {
        try {
          myLock.lock()
          // ...
        } finally {
          done = true
        }
        true
      }

      def isReleasable: Boolean = done
    }
  )
}
Fortunately the concurrent package provides a convenient way for doing so:
import scala.concurrent.Future
import scala.concurrent.blocking

Future {
  blocking {
    myLock.lock()
    // ...
  }
}
Note that
blocking is a general construct that will be discussed more in depth below.
Last but not least, you must remember that the
ForkJoinPool is not designed for long lasting blocking operations.
Even when notified with
blocking, the pool might not spawn new workers as you would expect,
and when new workers are created they can be as many as 32767.
To give you an idea, the following code will use 32000 threads:
implicit val ec = ExecutionContext.global

for( i <- 1 to 32000 ) {
  Future {
    blocking {
      Thread.sleep(999999)
    }
  }
}
If you need to wrap long lasting blocking operations we recommend using a dedicated
ExecutionContext, for instance by wrapping a Java
Executor.
Using the
ExecutionContext.fromExecutor method you can wrap a Java
Executor into an
ExecutionContext.
For instance:
ExecutionContext.fromExecutor(new ThreadPoolExecutor( /* your configuration */ ))
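For a more self-contained sketch, a dedicated context for blocking work can be built on top of a fixed-size pool from java.util.concurrent; the name and the pool size below are chosen purely for illustration:
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// A separate pool reserved for long lasting blocking operations, so that
// the global fork-join pool is never starved by them.
val blockingExecutionContext: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(16))
Futures that wrap blocking calls can then be created with Future { ... }(blockingExecutionContext), leaving ExecutionContext.global free for CPU-bound work.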
One might be tempted to have an
ExecutionContext that runs computations within the current thread:
val currentThreadExecutionContext = ExecutionContext.fromExecutor(
  new Executor {
    // Do not do this!
    def execute(runnable: Runnable) { runnable.run() }
})
This should be avoided as it introduces non-determinism in the execution of your future.
Future { doSomething }(ExecutionContext.global).map { doSomethingElse }(currentThreadExecutionContext)
The
doSomethingElse call might either execute in
doSomething’s thread or in the main thread, and therefore be either asynchronous or synchronous.
As explained here a callback should not be both.
A
Future is an object holding a value which may become available at some point.
This value is usually the result of some other computation:
1. If the computation has not yet completed, we say that the Future is not completed.
2. If the computation has completed with a value or with an exception, we say that the Future is completed.
Completion can take one of two forms:
1. When a Future is completed with a value, we say that the future was successfully completed with that value.
2. When a Future is completed with an exception thrown by the computation, we say that the Future was failed with that exception.
A
Future has an important property that it may only be assigned
once.
Once a
Future object is given a value or an exception, it becomes
in effect immutable– it can never be overwritten.
The simplest way to create a future object is to invoke the
Future.apply
method which starts an asynchronous computation and returns a
future holding the result of that computation.
The result becomes available once the future completes.
Note that
Future[T] is a type which denotes future objects, whereas
Future.apply is a method which creates and schedules an asynchronous
computation, and then returns a future object which will be completed
with the result of that computation.
This is best shown through an example.
Let’s assume that we want to use a hypothetical API of some popular social network to obtain a list of friends for a given user. We will open a new session and then send a request to obtain a list of friends of a particular user:
import scala.concurrent._
import ExecutionContext.Implicits.global

val session = socialNetwork.createSessionFor("user", credentials)
val f: Future[List[Friend]] = Future {
  session.getFriends()
}
Above, we first import the contents of the
scala.concurrent package
to make the type
Future visible.
We will explain the second import shortly.
We then initialize a session variable which we will use to send
requests to the server, using a hypothetical
createSessionFor
method.
To obtain the list of friends of a user, a request
has to be sent over a network, which can take a long time.
This is illustrated with the call to the method
getFriends that returns
List[Friend].
To better utilize the CPU until the response arrives, we should not
block the rest of the program– this computation should be scheduled
asynchronously. The
Future.apply method does exactly that– it performs
the specified computation block concurrently, in this case sending
a request to the server and waiting for a response.
The list of friends becomes available in the future
f once the server
responds.
An unsuccessful attempt may result in an exception. In
the following example, the
session value is incorrectly
initialized, so the computation in the
Future block will throw a
NullPointerException.
This future
f is then failed with this exception instead of being completed successfully:
val session = null
val f: Future[List[Friend]] = Future {
  session.getFriends
}
The line
import ExecutionContext.Implicits.global above imports
the default global execution context.
Execution contexts execute tasks submitted to them, and
you can think of execution contexts as thread pools.
They are essential for the
Future.apply method because
they handle how and when the asynchronous computation is executed.
You can define your own execution contexts and use them with
Future,
but for now it is sufficient to know that
you can import the default execution context as shown above.
Our example was based on a hypothetical social network API where the computation consists of sending a network request and waiting for a response. It is fair to offer an example involving an asynchronous computation which you can try out of the box. Assume you have a text file and you want to find the position of the first occurrence of a particular keyword. This computation may involve blocking while the file contents are being retrieved from the disk, so it makes sense to perform it concurrently with the rest of the computation.
val firstOccurrence: Future[Int] = Future {
  val source = scala.io.Source.fromFile("myText.txt")
  source.toSeq.indexOfSlice("myKeyword")
}
We now know how to start an asynchronous computation to create a new future value, but we have not shown how to use the result once it becomes available, so that we can do something useful with it. We are often interested in the result of the computation, not just its side effects. A performant, non-blocking way to use the result is to register a callback on the future; the callback is called asynchronously once the future is completed. If the
future has already been completed when registering the callback, then
the callback may either be executed asynchronously, or sequentially on
the same thread.
The most general form of registering a callback is by using the
onComplete
method, which takes a callback function of type
Try[T] => U.
The callback is applied to the value
of type
Success[T] if the future completes successfully, or to a value
of type
Failure[T] otherwise.
The
Try[T] is similar to
Option[T] or
Either[T, S], in that it is a monad
potentially holding a value of some type.
However, it has been specifically designed to either hold a value or
some throwable object.
Where an
Option[T] could either be a value (i.e.
Some[T]) or no value
at all (i.e.
None),
Try[T] is a
Success[T] when it holds a value
and otherwise
Failure[T], which holds an exception.
Failure[T] holds
more information than just a plain
None by saying why the value is not
there.
At the same time, you can think of
Try[T] as a special version
of
Either[Throwable, T], specialized for the case when the left
value is a
Throwable.
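As a brief illustrative sketch of Try values (not part of the original example):

import scala.util.{Try, Success, Failure}

val ok: Try[Int]  = Try("42".toInt)   // Success(42)
val bad: Try[Int] = Try("oops".toInt) // Failure(java.lang.NumberFormatException)

bad match {
  case Success(n) => println("Parsed " + n)
  case Failure(t) => println("Failed with: " + t.getMessage)
}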
Coming back to our social network example, let’s assume we want to
fetch a list of our own recent posts and render them to the screen.
We do so by calling a method
getRecentPosts which returns
a
List[String]– a list of recent textual posts:
import scala.util.{Success, Failure}

val f: Future[List[String]] = Future {
  session.getRecentPosts
}

f onComplete {
  case Success(posts) => for (post <- posts) println(post)
  case Failure(t) => println("An error has occurred: " + t.getMessage)
}
The
onComplete method is general in the sense that it allows the
client to handle the result of both failed and successful future
computations. To handle only successful results, the
onSuccess
callback is used (which takes a partial function):
val f: Future[List[String]] = Future {
  session.getRecentPosts
}

f onSuccess {
  case posts => for (post <- posts) println(post)
}
To handle failed results, the
onFailure callback is used:
val f: Future[List[String]] = Future {
  session.getRecentPosts
}

f onFailure {
  case t => println("An error has occurred: " + t.getMessage)
}

f onSuccess {
  case posts => for (post <- posts) println(post)
}
The
onFailure callback is only executed if the future fails, that
is, if it contains an exception.
Since partial functions have the
isDefinedAt method, the
onFailure method only triggers the callback if it is defined for a
particular
Throwable. In the following example the registered
onFailure
callback is never triggered:
val f = Future {
  2 / 0
}

f onFailure {
  case npe: NullPointerException =>
    println("I'd be amazed if this printed out.")
}
Coming back to the previous example with searching for the first occurrence of a keyword, you might want to print the position of the keyword to the screen:
val firstOccurrence: Future[Int] = Future {
  val source = scala.io.Source.fromFile("myText.txt")
  source.toSeq.indexOfSlice("myKeyword")
}

firstOccurrence onSuccess {
  case idx => println("The keyword first appears at position: " + idx)
}

firstOccurrence onFailure {
  case t => println("Could not process file: " + t.getMessage)
}
The
onComplete,
onSuccess, and
onFailure methods have result type
Unit, which means invocations
of these methods cannot be chained. Note that this design is intentional,
to avoid suggesting that chained
invocations may imply an ordering on the execution of the registered
callbacks (callbacks registered on the same future are unordered).
That said, the callback is not necessarily invoked by the thread that completed the future or the thread that registered it; it is executed by some thread, at some point after the future object is completed. We say that the callback is executed eventually.
Furthermore, the order in which the callbacks are executed is
not predefined, even between different runs of the same application.
In fact, the callbacks may not be called sequentially one after the other,
but may concurrently execute at the same time.
This means that in the following example the variable
totalA may not be set
to the correct number of lower case and upper case
a characters from the computed
text.
@volatile var totalA = 0

val text = Future {
  "na" * 16 + "BATMAN!!!"
}

text onSuccess {
  case txt => totalA += txt.count(_ == 'a')
}

text onSuccess {
  case txt => totalA += txt.count(_ == 'A')
}
Above, the two callbacks may execute one after the other, in
which case the variable
totalA holds the expected value
18.
However, they could also execute concurrently, so
totalA could
end up being either
16 or
2, since
+= is not an atomic
operation (i.e. it consists of a read and a write step which may
interleave arbitrarily with other reads and writes).
For the sake of completeness the semantics of callbacks are listed here:
Registering an
onComplete callback on the future
ensures that the corresponding closure is invoked after
the future is completed, eventually.
Registering an
onSuccess or
onFailure callback has the same
semantics as
onComplete, with the difference that the closure is only called
if the future is completed successfully or fails, respectively.
Registering a callback on the future which is already completed will result in the callback being executed eventually (as implied by 1).
In the event that multiple callbacks are registered on the future,
the order in which they are executed is not defined. In fact, the
callbacks may be executed concurrently with one another.
However, a particular
ExecutionContext implementation may result
in a well-defined order.
In the event that some of the callbacks throw an exception, the other callbacks are executed regardless.
In the event that some of the callbacks never complete (e.g. the
callback contains an infinite loop), the other callbacks may not be
executed at all. In these cases, a potentially blocking callback must
use the
blocking construct (see below).
Once executed, the callbacks are removed from the future object, thus being eligible for GC.
The callback mechanism we have shown is sufficient to chain future results with subsequent computations. However, it is sometimes inconvenient and results in bulky code. We show this with an example. Assume we have an API for interfacing with a currency trading service. Suppose we want to buy US dollars, but only when it’s profitable. We first show how this could be done using callbacks:
val rateQuote = Future {
  connection.getCurrentValue(USD)
}

rateQuote onSuccess { case quote =>
  val purchase = Future {
    if (isProfitable(quote)) connection.buy(amount, quote)
    else throw new Exception("not profitable")
  }

  purchase onSuccess {
    case _ => println("Purchased " + amount + " USD")
  }
}
We start by creating a future
rateQuote which gets the current exchange
rate.
After this value is obtained from the server and the future successfully
completed, the computation proceeds in the
onSuccess callback and we are
ready to decide whether to buy or not.
We therefore create another future
purchase which makes a decision to buy only if it’s profitable
to do so, and then sends a request.
Finally, once the purchase is completed, we print a notification message
to the standard output.
This works, but is inconvenient for two reasons. First, we have to use
onSuccess, and we have to nest the second
purchase future within
it. Imagine that after the
purchase is completed we want to sell
some other currency. We would have to repeat this pattern within the
onSuccess callback, making the code overly indented, bulky and hard
to reason about.
Second, the
purchase future is not in the scope with the rest of
the code– it can only be acted upon from within the
onSuccess
callback. This means that other parts of the application do not
see the
purchase future and cannot register another
onSuccess
callback to it, for example, to sell some other currency.
For these two reasons, futures provide combinators which allow a
more straightforward composition. One of the basic combinators
is
map, which, given a future and a mapping function for the value of
the future, produces a new future that is completed with the
mapped value once the original future is successfully completed.
You can reason about mapping futures in the same way you reason
about mapping collections.
Let’s rewrite the previous example using the
map combinator:
val rateQuote = Future {
  connection.getCurrentValue(USD)
}

val purchase = rateQuote map { quote =>
  if (isProfitable(quote)) connection.buy(amount, quote)
  else throw new Exception("not profitable")
}

purchase onSuccess {
  case _ => println("Purchased " + amount + " USD")
}
By using
map on
rateQuote we have eliminated one
onSuccess callback and,
more importantly, the nesting.
If we now decide to sell some other currency, it suffices to use
map on
purchase again.
But what happens if
isProfitable returns
false, hence causing
an exception to be thrown?
In that case
purchase is failed with that exception.
Furthermore, imagine that the connection was broken and that
getCurrentValue threw an exception, failing
rateQuote.
In that case we’d have no value to map, so the
purchase would
automatically be failed with the same exception as
rateQuote.
In conclusion, if the original future is completed successfully then the returned future is completed with a mapped value from the original future. If the mapping function throws an exception the future is completed with that exception. If the original future fails with an exception then the returned future also contains the same exception. This exception propagating semantics is present in the rest of the combinators, as well.
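A small sketch of this propagation (the values are purely illustrative):

val f = Future { 10 / 0 }  // fails with an ArithmeticException
val g = f map { _ * 2 }    // g fails with the same ArithmeticException

val h = Future { 10 } map { n => 10 / (n - 10) } // the mapping function throws; h is failed with that exception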
One of the design goals for futures was to enable their use in for-comprehensions.
For this reason, futures also have the
flatMap,
filter and
foreach combinators. The
flatMap method takes a function that maps the value
to a new future
g, and then returns a future which is completed once
g is completed.
Lets assume that we want to exchange US dollars for Swiss francs
(CHF). We have to fetch quotes for both currencies, and then decide on
buying based on both quotes.
Here is an example of
flatMap and
withFilter usage within for-comprehensions:
val usdQuote = Future { connection.getCurrentValue(USD) }
val chfQuote = Future { connection.getCurrentValue(CHF) }

val purchase = for {
  usd <- usdQuote
  chf <- chfQuote
  if isProfitable(usd, chf)
} yield connection.buy(amount, chf)

purchase onSuccess {
  case _ => println("Purchased " + amount + " CHF")
}
The
purchase future is completed only once both
usdQuote
and
chfQuote are completed– it depends on the values
of both these futures so its own computation cannot begin
earlier.
The for-comprehension above is translated into:
val purchase = usdQuote flatMap {
  usd =>
    chfQuote
      .withFilter(chf => isProfitable(usd, chf))
      .map(chf => connection.buy(amount, chf))
}
which is a bit harder to grasp than the for-comprehension, but
we analyze it to better understand the
flatMap operation.
The
flatMap operation maps its own value into some other future.
Once this different future is completed, the resulting future
is completed with its value.
In our example,
flatMap uses the value of the
usdQuote future
to map the value of the
chfQuote into a third future which
sends a request to buy a certain amount of Swiss francs.
The resulting future
purchase is completed only once this third
future returned from
map completes.
This can be mind-boggling, but fortunately the
flatMap operation
is seldom used outside for-comprehensions, which are easier to
use and understand.
The
filter combinator creates a new future which contains the value
of the original future only if it satisfies some predicate. Otherwise,
the new future is failed with a
NoSuchElementException. For futures
calling
filter has exactly the same effect as does calling
withFilter.
The relationship between the
collect and
filter combinator is similar
to the relationship of these methods in the collections API.
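For example, a small sketch of filter:

val f = Future { 5 }
val g = f filter { _ % 2 == 1 } // eventually completed with 5
val h = f filter { _ % 2 == 0 } // eventually failed with a NoSuchElementException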
It is important to note that calling the
foreach combinator does not
block to traverse the value once it becomes available.
Instead, the function for the
foreach gets asynchronously
executed only if the future is completed successfully. This means that
the
foreach has exactly the same semantics as the
onSuccess
callback.
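For example, a minimal sketch of foreach:

val f = Future { 5 }
f foreach { v => println("The result is " + v) } // runs asynchronously, and only if f succeeds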
Since the
Future trait can conceptually contain two types of values
(computation results and exceptions), there exists a need for
combinators which handle exceptions.
Let’s assume that based on the
rateQuote we decide to buy a certain
amount. The
connection.buy method takes an
amount to buy and the expected
quote. It returns the amount bought. If the
quote has changed in the meanwhile, it will throw a
QuoteChangedException and it will not buy anything. If we want our
future to contain
0 instead of the exception, we use the
recover
combinator:
val purchase: Future[Int] = rateQuote map {
  quote => connection.buy(amount, quote)
} recover {
  case QuoteChangedException() => 0
}
The
recover combinator creates a new future which holds the same
result as the original future if it completed successfully. If it did
not then the partial function argument is applied to the
Throwable
which failed the original future. If it maps the
Throwable to some
value, then the new future is successfully completed with that value.
If the partial function is not defined on that
Throwable, then the
resulting future is failed with the same
Throwable.
The
recoverWith combinator creates a new future which holds the
same result as the original future if it completed successfully.
Otherwise, the partial function is applied to the
Throwable which
failed the original future. If it maps the
Throwable to some future,
then this future is completed with the result of that future.
Its relation to
recover is similar to that of
flatMap to
map.
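For example, a sketch in which a failed computation falls back to another future:

val fallback = Future { Int.MaxValue }

val result = Future { 6 / 0 } recoverWith {
  case e: ArithmeticException => fallback
} // result is eventually completed with Int.MaxValue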
Combinator
fallbackTo creates a new future which holds the result
of this future if it was completed successfully, or otherwise the
successful result of the argument future. In the event that both this
future and the argument future fail, the new future is completed with
the exception from this future, as in the following example which
tries to print US dollar value, but prints the Swiss franc value in
the case it fails to obtain the dollar value:
val usdQuote = Future {
  connection.getCurrentValue(USD)
} map {
  usd => "Value: " + usd + "$"
}
val chfQuote = Future {
  connection.getCurrentValue(CHF)
} map {
  chf => "Value: " + chf + "CHF"
}

val anyQuote = usdQuote fallbackTo chfQuote

anyQuote onSuccess { case quote => println(quote) }
The
andThen combinator is used purely for side-effecting purposes.
It returns a new future with exactly the same result as the current future, regardless of whether the current future failed or not. Once the current future is completed with the result, the closure corresponding to the andThen argument is invoked, and then the new future is completed with the same result as this future. This ensures that
multiple
andThen calls are ordered, as in the following example
which stores the recent posts from a social network to a mutable set
and then renders all the posts to the screen:
val allposts = mutable.Set[String]()

Future {
  session.getRecentPosts
} andThen {
  case Success(posts) => allposts ++= posts
} andThen {
  case _ =>
    clearAll()
    for (post <- allposts) render(post)
}
In summary, the combinators on futures are purely functional. Every combinator returns a new future which is related to the future it was derived from.
To enable for-comprehensions on a result returned as an exception,
futures also have projections. If the original future fails, the
failed projection returns a future containing a value of type
Throwable. If the original future succeeds, the
failed projection
fails with a
NoSuchElementException. The following is an example
which prints the exception to the screen:
val f = Future {
  2 / 0
}

for (exc <- f.failed) println(exc)
The following example does not print anything to the screen:
val f = Future {
  4 / 2
}

for (exc <- f.failed) println(exc)
Support for extending the Futures API with additional utility methods is planned. This will allow external frameworks to provide more specialized utilities.
Futures are generally asynchronous and do not block the underlying execution threads. However, in certain cases, it is necessary to block. We distinguish two forms of blocking the execution thread: invoking arbitrary code that blocks the thread from within the future, and blocking from outside another future, waiting until that future gets completed.
As seen with the global
ExecutionContext, it is possible to notify an
ExecutionContext of a blocking call with the
blocking construct.
The implementation is however at the complete discretion of the
ExecutionContext. While some
ExecutionContext such as
ExecutionContext.global
implement
blocking by means of a
ManagedBlocker, some execution contexts such as the fixed thread pool:
ExecutionContext.fromExecutor(Executors.newFixedThreadPool(x))
will do nothing, as shown in the following:
implicit val ec = ExecutionContext.fromExecutor(
  Executors.newFixedThreadPool(4))

Future {
  blocking { blockingStuff() }
}
Has the same effect as
Future { blockingStuff() }
The blocking code may also throw an exception. In this case, the exception is forwarded to the caller.
As mentioned earlier, blocking on a future is strongly discouraged for the sake of performance and for the prevention of deadlocks. Callbacks and combinators on futures are a preferred way to use their results. However, blocking may be necessary in certain situations and is supported by the Futures and Promises API.
In the currency trading example above, one place to block is at the end of the application to make sure that all of the futures have been completed. Here is an example of how to block on the result of a future:
import scala.concurrent._
import scala.concurrent.duration._

def main(args: Array[String]) {
  val rateQuote = Future {
    connection.getCurrentValue(USD)
  }

  val purchase = rateQuote map { quote =>
    if (isProfitable(quote)) connection.buy(amount, quote)
    else throw new Exception("not profitable")
  }

  Await.result(purchase, 0 nanos)
}
In the case that the future fails, the caller is forwarded the
exception that the future is failed with. This includes the
failed
projection– blocking on it results in a
NoSuchElementException
being thrown if the original future is completed successfully.
Alternatively, calling
Await.ready waits until the future becomes
completed, but does not retrieve its result. In the same way, calling
that method will not throw an exception if the future is failed.
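For example (a sketch reusing the purchase future from above; the 5-second timeout is an arbitrary choice):

import scala.concurrent.duration._

Await.ready(purchase, 5.seconds) // blocks until purchase completes, without extracting its value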
The
Future trait implements the
Awaitable trait with the methods
ready() and
result(). These methods cannot be called directly
by the clients– they can only be called by the execution context.
When asynchronous computations throw unhandled exceptions, futures
associated with those computations fail. Failed futures store an
instance of
Throwable instead of the result value.
Futures provide
the
onFailure callback method, which accepts a
PartialFunction to
be applied to a
Throwable. The following special exceptions are
treated differently:
scala.runtime.NonLocalReturnControl[_] – this exception holds a value
associated with the return. Typically,
return constructs in method
bodies are translated to
throws with this exception. Instead of
keeping this exception, the associated value is stored into the future or a promise.
ExecutionException - stored when the computation fails due to an
unhandled
InterruptedException,
Error or a
scala.util.control.ControlThrowable. In this case the
ExecutionException has the unhandled exception as its cause. The rationale
behind this is to prevent propagation of critical and control-flow related
exceptions normally not handled by the client code and at the same time inform
the client in which future the computation failed.
Fatal exceptions (as determined by
NonFatal) are rethrown in the thread executing
the failed asynchronous computation. This informs the code managing the executing
threads of the problem and allows it to fail fast, if necessary. See
NonFatal
for a more precise description of the semantics.
So far we have only considered
Future objects created by
asynchronous computations started using the
Future.apply method.
However, futures can also be created using promises.
While futures are defined as a type of read-only placeholder object
created for a result which doesn’t yet exist, a promise can be thought
of as a writable, single-assignment container, which completes a
future. That is, a promise can be used to successfully complete a
future with a value (by “completing” the promise) using the
success
method. Conversely, a promise can also be used to complete a future
with an exception, by failing the promise, using the
failure method.
A promise
p completes the future returned by
p.future. This future
is specific to the promise
p. Depending on the implementation, it
may be the case that
p.future eq p.
Consider the following producer-consumer example, in which one computation produces a value and hands it off to another computation which consumes that value. This passing of the value is done using a promise.
import scala.concurrent.{ Future, Promise }
import scala.concurrent.ExecutionContext.Implicits.global

val p = Promise[T]()
val f = p.future

val producer = Future {
  val r = produceSomething()
  p success r
  continueDoingSomethingUnrelated()
}

val consumer = Future {
  startDoingSomething()
  f onSuccess {
    case r => doSomethingWithResult()
  }
}
Here, we create a promise and use its
future method to obtain the
Future that it completes. Then, we begin two asynchronous
computations. The first does some computation, resulting in a value
r, which is then used to complete the future
f, by fulfilling
the promise
p. The second does some computation, and then reads the result
r
of the completed future
f. Note that the
consumer can obtain the
result before the
producer task is finished executing
the
continueDoingSomethingUnrelated() method.
As mentioned before, promises have single-assignment semantics. As
such, they can be completed only once. Calling
success on a
promise that has already been completed (or failed) will throw an
IllegalStateException.
The following example shows how to fail a promise.
val p = Promise[T]()
val f = p.future

val producer = Future {
  val r = someComputation
  if (isInvalid(r))
    p failure (new IllegalStateException)
  else {
    val q = doSomeMoreComputation(r)
    p success q
  }
}
Here, the
producer computes an intermediate result
r, and checks
whether it’s valid. In the case that it’s invalid, it fails the
promise by completing the promise
p with an exception. In this case,
the associated future
f is failed. Otherwise, the
producer
continues its computation, and finally completes the future
f with a
valid result, by completing promise
p.
Promises can also be completed with a
complete method which takes
a potential value
Try[T]– either a failed result of type
Failure[Throwable] or a
successful result of type
Success[T].
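For example (someComputation is a hypothetical method used only for illustration):

import scala.util.{Try, Success, Failure}

val p = Promise[Int]()
p complete Try(someComputation())
// equivalently, with explicit Try values:
// p complete Success(42)
// p complete Failure(new IllegalStateException)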
Analogous to
success, calling
failure and
complete on a promise that has already
been completed will throw an
IllegalStateException.
One nice property of programs written using promises with the operations described so far, and futures composed through monadic operations without side effects, is that these programs are deterministic. Deterministic here means that, given that no exception is thrown in the program, the result of the program (the values observed in the futures) will always be the same, regardless of the execution schedule of the parallel program. In some cases, however, the client may want to complete the promise only if it has not been completed yet (for example, when several HTTP requests are executed from several different futures and the client wants to keep only the first response, corresponding to the future which completed the promise first). For these reasons the methods
tryComplete,
trySuccess and
tryFailure exist on promise. The client should be
aware that using these methods results in programs which are not
deterministic, but depend on the execution schedule.
The method
completeWith completes the promise with another
future. After the future is completed, the promise gets completed with
the result of that future as well. The following program prints
1:
val f = Future { 1 }
val p = Promise[Int]()

p completeWith f

p.future onSuccess {
  case x => println(x)
}
When failing a promise with an exception, three subtypes of
Throwables
are handled specially. If the
Throwable used to break the promise is
a
scala.runtime.NonLocalReturnControl, then the promise is completed with
the corresponding value. If the
Throwable used to break the promise is
an instance of
Error,
InterruptedException, or
scala.util.control.ControlThrowable, the
Throwable is wrapped as
the cause of a new
ExecutionException which, in turn, is failing
the promise.
Using promises, the
onComplete method of the futures and the
Future.apply construct
you can implement any of the functional composition combinators described earlier.
Let’s assume you want to implement a new combinator
first which takes
two futures
f and
g and produces a third future which is completed by either
f or
g (whichever comes first), but only given that it is successful.
Here is an example of how to do it:
def first[T](f: Future[T], g: Future[T]): Future[T] = {
  val p = Promise[T]()

  f onSuccess {
    case x => p.trySuccess(x)
  }

  g onSuccess {
    case x => p.trySuccess(x)
  }

  p.future
}
Note that in this implementation, if neither
f nor
g succeeds, then
first(f, g) never completes (either with a value or with an exception).
To simplify handling of time in concurrent applications
scala.concurrent
introduces a
Duration abstraction.
Duration is not supposed to be yet another
general time abstraction. It is meant to be used with concurrency libraries and
resides in the scala.concurrent package.
Duration is the base class representing a length of time; it can be either finite or infinite. A finite duration is represented by the FiniteDuration class, which is constructed from a Long length and a java.util.concurrent.TimeUnit. Infinite durations, also extending Duration, exist in only two instances: Duration.Inf and Duration.MinusInf. The library also provides several Duration subclasses for implicit conversion purposes, and those should not be used directly.
The abstract Duration class contains methods that allow:
Conversion to different time units (toNanos, toMicros, toMillis, toSeconds, toMinutes, toHours, toDays and toUnit(unit: TimeUnit)).
Comparison of durations (<, <=, > and >=).
Arithmetic operations (+, -, *, / and unary_-).
Taking the minimum and maximum between this duration and the one supplied in the argument (min, max).
Checking whether the duration is finite (isFinite).
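A short sketch of these operations (the values are arbitrary):

import scala.concurrent.duration._

val d        = 5.seconds + 500.millis // arithmetic: 5500 milliseconds
val isLonger = d > 5.seconds          // comparison: true
val capped   = d min 3.seconds        // minimum: 3 seconds
val millis   = d.toMillis             // conversion: 5500
val finite   = d.isFinite             // true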
Duration can be instantiated in the following ways:
Implicitly, from types Int and Long, for example val d = 100 millis.
By passing a Long length and a java.util.concurrent.TimeUnit, for example val d = Duration(100, MILLISECONDS).
By parsing a string that represents a time period, for example val d = Duration("1.2 µs").
Duration also provides
unapply methods so it can be used in pattern matching constructs.
Examples:
import scala.concurrent.duration._
import java.util.concurrent.TimeUnit._

// instantiation
val d1 = Duration(100, MILLISECONDS) // from Long and TimeUnit
val d2 = Duration(100, "millis")     // from Long and String
val d3 = 100 millis                  // implicitly from Long, Int or Double
val d4 = Duration("1.2 µs")          // from String

// pattern matching
val Duration(length, unit) = 5 millis
Manual Booking Process - Tutorial
The Manual Booking Process (Phone Reservations) / Rate Quoting Workflow
Lodgix.com Vacation Rental Manager is a full-featured vacation rental management tool designed to help track and manage your vacation rental properties. The entire process below can make a manual guest booking in <60 seconds. There are quite a few steps in the process of making a manual booking, that's why we always recommend to ENCOURAGE GUESTS TO MAKE ONLINE BOOKINGS WHENEVER POSSIBLE!!
Add Booking & Phone Inquiries w/ Rate Quoting
Phone inquiries are still quite common. Lodgix.com has a built in rate quoting feature that allows for quick and easy access to rate data when you need it. When a guest calls for an availability inquiry or rate quote or calls to book your vacation rental, the first step in Lodgix is to open the availability calendar, check availability and begin the rate quoting or reservation process.
Making a reservation / generate a rate quote using the calendar tapeMaking a reservation / generate a rate quote using the calendar tape
This is the heart of creating a new reservation. All of the fields are pre-populated with the dates and # of guests that were selected on the availability calendar.
Note: You can also access this screen and enter the dates / # of guests manually at any time by clicking on the Reservations > Create Reservation menu item.
Follow the sequence of six steps outlined above.
- Click on the "select existing guest or add new guest" link and enter the guest name and email address. The system will automatically filter the results while you are typing to alert you if the guest is already in the system.
- Confirm the start dates and end dates of the reservation
- Enter the number of guests. (Note: if making this booking manually you will have to choose and confirm the property as well)
- If this is a non-profit group, click to make the group "tax exempt" and ask for the state tax exemption number from the guest
- From the drop down apply any discounts you have setup in the system.
- Click "Create Invoice"
If the reservation was successful, then a message will appear saying "Reservation successfully saved" and a new tab will be presented which can be clicked to view the invoice which has been created (#2). A link to the "Guest Control Panel" (#1) is also presented. The next step is either to go to the invoice and review / modify it to meet your needs or go to the guest control panel and enter or confirm the guest mailing address.
Most folks tend to go to the Invoice first to make sure the invoice makes sense and then head over to the Guest Control Panel to complete the reservation.
Billing Data Entry FormBilling Data Entry Form
Once you've entered or verified the guest contact details, click on the Invoice tab at the top of the screen and review the check-in / check-out times and make any modifications for the guest if necessary. Next open the Billing Tab and request the guest payment details (#2) and process or record the payment within the Transactions window (#3).
Billing data can only be saved if you have a payment gateway setup and configured correctly. All guest credit card data is saved on the servers of payment gateway, assuring PCI compliance for both the vacation rental manager and Lodgix.com.
Billing Data - Safe and SecureBilling Data - Safe and Secure
Process the Credit CardProcess the Credit Card
Process & Record Payment WindowProcess & Record Payment Window
- Enter the amount of the payment. The "Amt to Confirm" button will enter the required confirmation amount automatically for you.. The "Balance" button will enter any remaining balance amount.
- Click "Process Payment". The application will communicate with the payment gateway using tokenization and charge the guest's credit card. Funds will be deposited into your bank account in 24-48 hours, depending on the terms you have with your payment gateway / merchant account provider.
Next Step: Send a Confirmation
Send ConfirmationSend Confirmation
The "Conversations" tab within the guest control panel is an important component of running a streamlined vacation rental business. Once a reservation has been confirmed (via payment of the reservation deposit), the next step is to email the guest a confirmation. Lodgix supplies a default confirmation and a default invoice. You can use the default templates right out of the box or you can edit them and tweak them for your specific business. All communications with the guest control panel are logged.
This is the modal window that appears when sending a guest an email. In this example, you can see that the system has dynamically generated a confirmation and attached it in PDF format. The property owner has appended a standard confirmation response to save time and then it's just a matter of hitting the "Send" button. | http://docs.lodgix.com/m/tutorials/l/25944-manual-booking-process-tutorial | 2019-09-15T10:15:05 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.lodgix.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Creating a Trail for an Organization
If you have created an organization in AWS Organizations, you can create a trail that will log all events for all AWS accounts in that organization. This is sometimes referred to as an organization trail. You can also choose to edit an existing trail in the master account and apply it to an organization, making it an organization trail. Organization trails log events for the master account and all member accounts in the organization. For more information about AWS Organizations, see Organizations Terminology and Concepts.
Note
You must be logged in with the master account for the organization in order to create an organization trail. You must also have sufficient permissions for the IAM user or role in the master account in order to successfully create an organization trail. If you do not have sufficient permissions, you will not see the option to apply a trail to an organization.
When you create an organization trail, a trail with the name that you give it will
be
created in every AWS account that belongs to your organization. Users with CloudTrail
permissions in member accounts will be able to see this trail when they log into the
AWS CloudTrail console from their AWS accounts, or when they run AWS CLI commands
such as
describe-trail. However, users in member accounts will not have sufficient
permissions to delete the organization trail, turn logging on or off, change what
types of
events are logged, or otherwise alter the organization trail in any way.
When you create an organization trail in the console, or when you enable CloudTrail as a trusted service in the Organizations, this creates a service-linked role to perform logging tasks in your organization's member accounts. This role is named AWSServiceRoleForCloudTrail, and is required for CloudTrail to successfully log events for an organization. If an AWS account is added to an organization, the organization trail and service-linked role will be added to that AWS account, and logging will begin for that account automatically in the organization trail. If an AWS account is removed from an organization, the organization trail and service-linked role will be deleted from the AWS account that is no longer part of the organization. However, log files for that removed account created prior to the account's removal will still remain in the Amazon S3 bucket where log files are stored for the trail.
In the following example, a user in the master account 111111111111 creates a
trail named
MyOrganizationTrail for the organization
o-exampleorgid. The trail logs activity for all accounts in
the organization in the same Amazon S3 bucket. All accounts in the organization can
see
MyOrganizationTrail in their list of trails, but member
accounts will not be able to remove or modify the organization trail. Only the master
account will be able to change or delete the trail for the organization, just as only
the
master account can remove a member account from an organization. Similarly, by default,
only
the master account has access to the Amazon S3 bucket
my-organization-bucket for the trail and the logs contained
within it. The high-level bucket structure for log files contains a folder named with
the
organization ID, with subfolders named with the account IDs for each account in the
organization. Events for each member account are logged in the folder that corresponds
to
the member account ID. If member account 444444444444 is removed from the
organization at some point in the future,
MyOrganizationTrail and
the service-linked role will no longer appear in AWS account 444444444444, and
no further events will be logged for that account by the organization trail. However,
the
444444444444 folder remains in the Amazon S3 bucket, with all logs created before
the
removal of the account from the organization.

In this example, the ARN of the trail created in the master account is
aws:cloudtrail:us-east-2:111111111111:trail/.
This ARN is the ARN for the trail in all member accounts as well.
MyOrganizationTrail
Organization trails are similar to regular trails in many ways. You can create multiple trails for your organization, and choose whether to create an organization trail in all regions or a single region, and what kinds of events you want logged in your organization trail, just as in any other trail. However, there are some differences. For example, when creating a trail in the console and choosing whether to log data events for Amazon S3 buckets or AWS Lambda functions, the only resources listed in the CloudTrail console are those for the master account, but you can add the ARNs for resources in member accounts. Data events for specified member account resources will be logged without having to manually configure cross-account access to those resources. For more information about logging management events and data events, see Logging Data and Management Events for Trails.
You can also configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs for an organization trail the same way you would for any other trail. For example, you can analyze the data in an organization trail using Amazon Athena. For more information, see AWS Service Integrations With CloudTrail Logs.
Topics | https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html | 2019-09-15T11:10:42 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
DescribeMatchmaking
Retrieves one or more matchmaking tickets. Use this operation to retrieve ticket information, including status and--once a successful match is made--acquire connection information for the resulting new game session.
You can use this operation to track the progress of matchmaking requests (through polling) as an alternative to using event notifications. See more details on tracking matchmaking requests through polling or notifications in StartMatchmaking.
To request matchmaking tickets, provide a list of up to 10 ticket IDs. If the request is successful, a ticket object is returned for each requested ID that currently exists.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
-
DescribeMatchmaking
-
-
-
Request Syntax
{ "TicketIds": [ "
string" ] }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
Note
In the following list, the required parameters are described first.
Response Syntax
{ "TicketList": [ { .
- TicketList
Collection of existing matchmaking ticket objects matching the request.
Type: Array of MatchmakingTicket
- UnsupportedRegionException
The requested operation is not supported in the region specified.
HTTP Status Code: 400
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/gamelift/latest/apireference/API_DescribeMatchmaking.html | 2019-09-15T10:18:30 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Tutorial: Using AWS Lambda with Amazon S3
Suppose you want to create a thumbnail for each image file that is uploaded to a bucket.
You can create a Lambda
function (
CreateThumbnail) that Amazon S3 can invoke when objects are created. Then, the Lambda function can
read the image object from the source bucket and create a thumbnail image target bucket.
Upon completing this tutorial, you will have the following Amazon S3, Lambda, and IAM resources in your account:
Lambda Resources
A Lambda function.
An access policy associated with your Lambda function that grants Amazon S3 permission to invoke the Lambda function.
IAM Resources
An execution role that grants permissions that your Lambda function needs through the permissions policy associated with this role.
Amazon S3 Resources
A source bucket with a notification configuration that invokes the Lambda function.
A target bucket where the function saves resized images..
Install NPM to manage the function's dependencies.
Create the Execution Role
Create the execution role that gives your function permission to access AWS resources..
Create Buckets and Upload a Sample Object
Follow the steps to create buckets and upload an object.
Open the Amazon S3 console.
Create two buckets. The target bucket name must be
sourcefollowed by
resized, where
sourceis the name of the bucket you want to use for the source. For example,
mybucketand
mybucketresized.
In the source bucket, upload a .jpg object,
HappyFace.jpg.
When you invoke the Lambda function manually before you connect to Amazon S3, you pass sample event data to the function that specifies the source bucket and
HappyFace.jpgas the newly created object so you need to create this sample object first.
Create the Function
The following example code receives an Amazon S3 event input and processes the message that it contains. It resizes an image in the source bucket and saves the output to the target bucket.
Note
For sample code in other languages, see Sample Amazon S3 Function Code.
Example index.js
// dependencies var async = require('async'); var AWS = require('aws-sdk'); var gm = require('gm') .subClass({ imageMagick: true }); // Enable ImageMagick integration. var util = require('util'); // constants var MAX_WIDTH = 100; var MAX_HEIGHT = 100; // get reference to S3 client var s3 = new AWS.S3(); exports.handler = function(event, context, callback) { // Read options from the event. console.log("Reading options from event:\n", util.inspect(event, {depth: 5})); var srcBucket = event.Records[0].s3.bucket.name; // Object key may have spaces or unicode non-ASCII characters. var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " ")); var dstBucket = srcBucket + "resized"; var dstKey = "resized-" + srcKey; // Sanity check: validate that source and destination are different buckets. if (srcBucket == dstBucket) { callback("Source and destination buckets are the same."); return; } // Infer the image type. var typeMatch = srcKey.match(/\.([^.]*)$/); if (!typeMatch) { callback("Could not determine the image type."); return; } var imageType = typeMatch[1].toLowerCase(); if (imageType != "jpg" && imageType != "png") { callback(`Unsupported image type: ${imageType}`); return; } // Download the image from S3, transform, and upload to a different S3 bucket. async.waterfall([ function download(next) { // Download the image from S3 into a buffer. s3.getObject({ Bucket: srcBucket, Key: srcKey }, next); }, function transform(response, next) { gm(response.Body).size(function(err, size) { // Infer the scaling factor to avoid stretching the image unnaturally. var scalingFactor = Math.min( MAX_WIDTH / size.width, MAX_HEIGHT / size.height ); var width = scalingFactor * size.width; var height = scalingFactor * size.height; // Transform the image buffer in memory. this.resize(width, height) .toBuffer(imageType, function(err, buffer) { if (err) { next(err); } else { next(null, response.ContentType, buffer); } }); }); }, function upload(contentType, data, next) { // Stream the transformed image to a different S3 bucket. s3.putObject({ Bucket: dstBucket, Key: dstKey, Body: data, ContentType: contentType }, next); } ], function (err) { if (err) { console.error( 'Unable to resize ' + srcBucket + '/' + srcKey + ' and upload to ' + dstBucket + '/' + dstKey + ' due to an error: ' + err ); } else { console.log( 'Successfully resized ' + srcBucket + '/' + srcKey + ' and uploaded to ' + dstBucket + '/' + dstKey ); } callback(null, "message"); } ); };
Review the preceding code and note the following:
The function knows the source bucket name and the key name of the object from the event data it receives as parameters. If the object is a .jpg, the code creates a thumbnail and saves it to the target bucket.
The code assumes that the destination bucket exists and its name is a concatenation of the source bucket name followed by the string
resized. For example, if the source bucket identified in the event data is
examplebucket, the code assumes you have an
examplebucketresizeddestination bucket.
For the thumbnail it creates, the code derives its key name as the concatenation of the string
resized-followed by the source object key name. For example, if the source object key is
sample.jpg, the code creates a thumbnail object that has the key
resized-sample.jpg.
The deployment package is a .zip file containing your Lambda function code and dependencies.
To create a deployment package
Save the function code as
index.jsin a folder named
lambda-s3.
Install the GraphicsMagick and Async libraries with NPM.
lambda-s3$
npm install async gm
After you complete this step, you will have the following folder structure:
lambda-s3 |- index.js |- /node_modules/gm └ /node_modules/async
Create a deployment package with the function code and dependencies.
lambda-s3$
zip -r function.zip .
To create the function
Create a Lambda function with the
create-functioncommand.
$
aws lambda create-function --function-name CreateThumbnail \ --zip-file fileb://function.zip --handler index.handler --runtime nodejs8.10 \ --timeout 10 --memory-size 1024 \ --role arn:aws:iam::
123456789012:role/lambda-s3-role
The preceding command specifies a 10-second timeout value as the function configuration. Depending on the size of objects you upload, you might need to increase the timeout value using the following AWS CLI command.
$
aws lambda update-function-configuration --function-name CreateThumbnail --timeout
30
Test the Lambda Function
In this step, you invoke the Lambda function manually using sample Amazon S3 event data.
To test the Lambda function
Save the following Amazon S3 sample event data in a file and save it as
inputFile.txt. You need to update the JSON by providing your
sourcebucketname and a .jpg object key.
{ "Records":[ { "eventVersion":"2.0", ":"A3NL1KOZZKExample" }, "arn":"arn:aws:s3:::
source" } } } ] }
Run the following Lambda CLI
invokecommand to invoke the function. Note that the command requests asynchronous execution. You can optionally invoke it synchronously by specifying
RequestResponseas the
invocation-typeparameter value.
$
aws lambda invoke --function-name CreateThumbnail --invocation-type Event \ --payload outputfile.txt
Verify that the thumbnail was created in the target bucket..
To add permissions to the function:
An object-created event is detected on a specific bucket.
The bucket is owned by your account. If you delete a bucket, it is possible for another account to create a bucket with the same ARN.
$command.
$
aws lambda get-policy --function-name CreateThumbnail
Add notification configuration on the source bucket to request Amazon S3 to publish object-created events to Lambda.
To configure notifications
Open the Amazon S3 console.
Choose the source bucket.
Choose Properties.
Under Events, configure a notification with the following settings.
Name –
lambda-trigger.
Events –
ObjectCreate (All).
Send to –
Lambda function.
Lambda –
CreateThumbnail.
For more information on event configuration, see Enabling Event Notifications in the Amazon Simple Storage Service Console User Guide.
Test the Setup
Now you can test the setup as follows:
Upload .jpg or .png objects to the source bucket using the Amazon S3 console.
Verify that the thumbnail was created in the target bucket using the
CreateThumbnailfunction.
View logs in the CloudWatch console. | https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html | 2019-09-15T10:34:35 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/s3-admin-iser-walkthrough-10.png', None], dtype=object)] | docs.aws.amazon.com |
Contents Now Platform Capabilities Previous Topic Next Topic Workflow movement with update sets Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Workflow movement with update sets The system tracks workflows in update sets differently than other records because workflow information is stored across multiple tables. Changes made to a workflow version are not added to the update set until the workflow is published, at which point the entire workflow is added into the update set. Update sets store workflows as a single Workflow [wf_workflow] record and only retain the latest version with the update type of Workflow. For information about update sets, see Update Sets. Workflow update set migration use case - simpleCreate a new workflow with no dependencies and then migrate the workflow in an update set. Workflow update set migration use case - subflow dependency (success)Successfully edit and migrate an existing workflow and its dependent subflow.Workflow update set migration use case - subflow dependency (failure)Edit and migrate an existing workflow from a test instance to a production instance that fails to run on the production instance because of a missing dependent subflow.Workflow.Input variable movementYou can add input variables to existing workflows and add them to update sets. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/workflow-administration/concept/c_WorkflowMovementWithUpdateSets.html | 2019-09-15T10:38:40 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Rules for Data Masking
Requests to a web application may contain sensitive data that should not be transferred outside of the server on which it is processed.
Typically, this category includes authorization (cookies, tokens, passwords), personal data and payment credentials.
Wallarm Node supports data masking in requests. The real values will be replaced by
* and will not be accessible either in the Wallarm Cloud or in the local post-analysis module. This method ensures that the protected data cannot leak outside the trusted environment.
It can affect the display of attacks, active attack (threat) verification, and the detection of brute force attacks.
Example: Masking of a Cookie Value
If the following conditions take place:
- the application is accessible at the domain example.com
- the application uses a PHPSESSID cookie for user authentication
- security policies deny access to this information for employees using Wallarm
Then, to create a data masking rule for this cookie, the following actions should be performed:
- Go to the Rules tab
- Find the branch for
example.com/**/*.*and click Add rule
- Choose Mark as sensitive data
- Select the Header parameter and enter its value
COOKIE; select the cookie parameter and enter its value
PHPSESSIDafter in this part of request
- Click Create
| https://docs.wallarm.com/en/user-guides/cloud-ui/rules/sensitive-data-rule.html | 2019-09-15T09:40:55 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['../../../../images/en/user-guides/cloud-ui/rules/sensitive-data-rule.png',
'Marking sensitive data'], dtype=object) ] | docs.wallarm.com |
Can I cancel my subscription(s)?
Yes, your subscription can be cancelled at anytime from your account page at. You will retain access to support and updates until your license key expires, one year from the purchase date.
Can I reactivate my subscription(s) later?
Yes, the license can be renewed on. Renewing it will reactivate the subscription. | https://docs.easydigitaldownloads.com/article/1239-can-i-cancel-my-subscriptions | 2019-09-15T10:03:45 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.easydigitaldownloads.com |
Metrics
XAP provides a framework for collecting and reporting metrics from the distributed runtime environment into a metric repository of your choice. These metrics be analyzed and used to identify trends in the system behavior.
This section describes the core metrics concepts and how to configure metrics collection. XAP contains a set of predefined metrics, and you can also create custom user-defined metrics.
You can configure XAP to report metrics to InfluxDB and CA APM Introscope, or you can create a custom metrics reporter as per the requirements of your specific application and XAP environment. | https://docs.gigaspaces.com/14.2/admin/metrics-overview.html | 2019-09-15T09:47:14 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.gigaspaces.com |
Installing and Configuring Windows Server AppFabric
You perform component-based installation and configuration of Windows Server AppFabric by using two wizards: an installation wizard and a configuration wizard. The installation wizard installs the Hosting Services, Caching Service and Cache Client, Hosting Administration, and Cache Administration. The configuration wizard configures the monitoring and persistence components of the Hosting Services, and the Caching Service.
Separate installation and configuration wizards give you flexibility in setting up your system. You can choose to enter the configuration wizard immediately after the installation wizard has finished, or you can enter the configuration wizard at a later point. Both the installation and configuration wizards are available from the Start menu, so you can install or configure components after initial installation. For more information, see Install Windows Server AppFabric and Configure Windows Server AppFabric.
You can also run an automated (silent) installation that requires no user interaction. For more information, see Automated Installation.
If you run the AppFabric setup program on a computer that already has a previous version of AppFabric on it, you will be able to upgrade AppFabric through the setup wizard. For more information, see Upgrade Windows Server AppFabric.
Security is an important consideration in the setup and configuration process of AppFabric. For information about AppFabric security, see “Security and Protection” in the AppFabric Core Help ().
Hosting Services Installation and Configuration
Installing and configuring the AppFabric hosting features consists of the following tasks: installing AppFabric hosting features on a server, configuring AppFabric monitoring, preparing the monitoring store, configuring AppFabric persistence, and preparing the persistence store. The primary tasks are summarized in the following list:
Installing the AppFabric hosting features: Install the Hosting Services and the Hosting Administration tools. This step is performed with the AppFabric Setup Wizard or by using automated setup. For more information about automating the installation of AppFabric, see Automated Installation.
Configuring AppFabric monitoring, including configuring the account for the Event Collection service, and initializing and registering the monitoring store. This step is performed with the AppFabric Configuration Wizard.
Configuring AppFabric persistence, including configuring the account for the Workflow Management service, and initializing and registering the persistence store. This step is performed with the AppFabric Configuration Wizard.
Caching Services Installation and Configuration
Installing and configuring the AppFabric caching features consists of the following tasks: installing AppFabric caching features on a cache server, preparing the Caching Service configuration store, configuring the AppFabric Caching Service, and preparing a Cache Client to use the cache cluster. The primary tasks are summarized in the following list:
Installing the AppFabric caching features: Install the Caching Service, the Cache Client, and the Windows PowerShell-based Cache Administration tool. This step is performed with the AppFabric Setup Wizard or by using automated setup. For more information about automating the installation of AppFabric, see Automated Installation.
Preparing the Caching Service configuration store by creating and designating a shared network folder, or by creating and registering a database to store the Caching Service configuration settings.
Configuring the Caching Service configuration settings in the Caching Service configuration store by using the AppFabric Configuration Wizard or the Cache Administration and Configuration cmdlets for Windows PowerShell.
Installing the AppFabric Cache Client on computers that will be running applications programmed to take advantage of caching features.
The caching features of AppFabric can be installed by using the AppFabric Setup Wizard or command prompt parameters. Both means of installation require the same input information and perform the same tasks. For more information about automating the installation of AppFabric, see Automated Installation.
After the Caching Service configuration store is configured and ready, you can install the caching features on as many cache servers as you would like in the cache cluster. You may experience some degree of contention on the Caching Service configuration store during parallel installations.
Note
To minimize contention issues on the Caching Service configuration storage location during parallel installation of caching features, we recommend that you use SQL Server to store the cluster configuration settings. Use SQL Server to store the Caching Service configuration settings by specifying the SQL Server AppFabric Caching Service Configuration Store Provider on the Configure Caching Service page of the AppFabric Configuration Wizard. When using a shared folder for Caching Service configuration settings, serial installation of servers with caching features is advised.
For the installation to succeed, the security identity of the person performing the installation must have the appropriate permissions on the Caching Service configuration store. Set the identity from the Configure Caching Service dialog box in the AppFabric Configuration Wizard.
In This Section
Install Windows Server AppFabric
Configure Windows Server AppFabric
-
Remove Features and Uninstall
-
Upgrade Windows Server AppFabric
Enable the Caching Service in a Workgroup Scenario
Configure a Server Farm for Windows Server AppFabric
Walkthrough: Deploying Windows Server AppFabric in a Single-Node Development Environment
Walkthrough: Deploying Windows Server AppFabric in a Workgroup Environment | https://docs.microsoft.com/en-us/previous-versions/appfabric/ff637707(v=azure.10)?redirectedfrom=MSDN | 2019-09-15T10:43:24 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
.
To commit the
destroyoperations to the server, save the changes. Clicking Delete does not immediately hit the server. | https://docs.telerik.com/kendo-ui/knowledge-base/grid-bound-to-asp-net-mvc-action-methods---crud-operations | 2019-09-15T09:38:34 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.telerik.com |
Migrating your Bronze website
Updated on 02-April-2019 at 2:48 PM
Business Catalyst End of life announcement - find out more details.
This article covers the migration procedure for a website on the Bronze plan created in Muse and published on Business Catalyst. Depending on the hosting platform you choose to move your website to you might first need to create an A record on the Business Catalyst side first. Some hosting platforms require this, others don't. To be on the safe side, let's create a A record just in case we need it later. Here is how to create it:
- log into your Business Catalyst website and go to the Site Domains section
- click the "More Actions" drop down and select the A record option, this popup window will appear
- choose a name for your record, it can be anything - for this example we have chosen "testing'. Click the "External IP address" option and fill in the IP address 188.121.43.37 as per this document. If you are using another hosting platform than GoDaddy please get in touch with their support, they will need to tell you which IP address to use
Once the A record is created you are ready to proceed to the actual website migration.
Register for an online hosting platform
The first step is to register for a hosting patform. For this tutorial we will use GoDaddy as an example. To create an account please go to this link and register. For this tutorial we will use the "Economy Hosting Plan" package:
During the GoDaddy setup phase you will need to enter your domain. At this point enter the A record we have created before in Business Catalyst. This will serve as a temporary URL for your new website.
When the migration is complete we will permanently remove the domain from BusinessCatalyst and add it to the new website but until then we need to use this temporary A record.
During the migration process your Business Catalyst website will continue to be live and available to your customers.
Once yourGoDaddy account is created, the first thing you need to do is create a FTP account. This account will be used to push the site to GoDaddy.
Do you have the .muse file?
If you do, the publishing process can be completed from Muse. Here is what you need to do:
- open the .muse file in Muse
- go to Publish, select the FTP option
- enter the hostname and the FTP username and password you have created
- enter the OTHER DETAILS and click PUBLISH (screenshots here)
Muse will now connect to your new hosting platform and publish your website. Here is more information on how to publish a file from Muse to a third-party hosting service.
Don't have the .muse file any longer?
If this is the case you will need to follow the steps below:
- download and install a FTP software. There are alot of free ones you can use, we will go with FileZilla for this tutorial., here is the download link
- after you have installed the FTP program you need to connect to your current Business Catalyst website to download all the website files to your computer. Here is how to do this: SCREENSHOTS HERE
- once all the site files are downloaded to your computer you need to upload them to your new hosting platform. Here is how to do this: STEPS HERE
You can find more information on how to connect to GoDaddy using a FTP program in this article.
Now we are almost done, time for the final touch-ups. Make sure the website pages are working, make sure they all look the same. At this step you might need to login to your Typekit account and make sure this temporary A record that we are using for the new website is authorized. If it is not, your fonts will not look correctly.
Finish the migration
When you are happy with your new website.
The first step to do this is. Here are the nameservers for GoDaddy.
- after the new nameservers are in place log into your new website hosting platform and add your domain name
This is the final step that concludes your website migration from Business Catalyst to your new website platform.
Need help?
If you need help to republish your website to another platform please get in touch with one of our partners listed here. | http://docs.businesscatalyst.com/user-manual/site-settings/migrating-your-bronze-website | 2019-09-15T10:09:58 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.businesscatalyst.com |
ResortCerts.com Vacation Certificates
Through Lodgix.com, we are able to purchase bulk vacation certificates through ResortCerts.com. Purchasing these certificates from ResortCerts.com directly costs $599.00 for a 7 night stay at a resort of your choice. Thus the retail value of the certificates is $599.00. However, that doesn't mean you can expect $599.00 a week type of accommodations. In many instances your certificate will score you a spacious condo or town home that would rent for $150 to $300 a night at other locations. There really are some great resorts available IF YOU DO YOUR RESEARCH AND FOLLOW THE INSTRUCTIONS IN THIS ARTICLE!
What exactly are resort vacation certificates?What exactly are resort vacation certificates?
Basically it's a way to move excess inventory at timeshare resorts. The timeshare resorts would rather make something than leave the units sit empty and make nothing. The certificate are always good for a weekly stay typically from Friday to Friday or Saturday to Saturday.
Typically there are very few additional costs, outside of the cost of the certificate. However, many times they will ask for a $50.00 to $150.00 deposit and most resorts will charge a daily or a weekly fee for use of the Internet. In some locations there may be a fee to park your car.
From their website:
"A Resort Vacation Certificate (“Certificate(s)”) is a certificate redeemable for 7-night stays at many resort properties worldwide*.The Certificate(s) is designed to be used for off-peak/off‑season travel, although accommodations may be available throughout the year. Peak seasons can vary according to destination. IMPORTANT: Certificate(s) are for promotional purposes and cannot be resold for cash or other consideration.
I don't get it, what's the catch?I don't get it, what's the catch?
The catch, if there is one, is two-fold:
- Inventory during peak travel times may be limited
- Typically they will ask you to sit through a time share presentation in exchange for cash, gas cards, gifts, etc. Just say "No Thanks."
The good news is that with some research, you can typically find some great units at any time of the year and turning down the timeshare presentation is just a matter of saying "thanks, but no thanks". As the owner of Lodgix my family uses these certificates twice a year and we've never attended a timeshare presentation. Typically after you register and check- in, the front desk will point to another desk (manned by an appointment scheduler) and they'll require you go to that desk to get your keys, parking pass, etc.. They aren't dumb, so they make sure that some important component of your stay must be picked up there.
At that time, they'll tell you about the great gifts that you can obtain by attending their timeshare presentation Personally, I can't stand those presentations. I always say "No thanks", and they say "ok, no problem, here's your parking pass or discount card, etc. and that's it. You are done and fully checked in. Sometimes they'll call and sweeten the deal during your stay, but since they call the room phone, we never answer. If you do decide to attend the presentation, just beware it's a hard sell. You will have to tell them "No" many times. The gifts they offer many times are worth hundreds of dollars, so it can be hard to turn down, but I just don't like someone selling me something on my vacation!
Do resort vacation certificates expire?Do resort vacation certificates expire?
Yes, they expire if accommodations are not booked before the “Book by Date” printed on the Certificate(s), which can be up to 12 months from the date of purchase. IMPORTANT: Certificate(s) cannot be extended beyond the “BOOK BY DATE”.
How do I redeem my certificate?How do I redeem my certificate?
Visit, enter your email address, choose a password and enter the 12 character Certificate number. The Certificate will remain valid until: (1) it is redeemed by booking an accommodation, or (2) it expires.
How do I search through available resorts?How do I search through available resorts?
- Select a destination. The great thing about these certificates is that they are all over the US and the World. If you are flexible in where you want to go and when you want to go, and if you do your research, you can have an incredible time for almost nothing.
- Select travel dates. Some dates will have little or no inventory. Other dates will only offer "Upgrade" options. Inventory changes every week and varies according to seasonality and demand. Check often!
- No Charge Versus Upgrade. No charge means just that, there are no additional nightly charges for reserving those rooms. Upgrade means you can pay an extra price to get the room you want.
How to get incredible deals using ResortCert.com Vacation Certificates!How to get incredible deals using ResortCert.com Vacation Certificates!
The most important item to getting a great deal is to check TripAdvisor reviews. Typically just type "(insert name of resort) reviews" into Google and look for the listing for TripAdvisor.
The example above is for the "Silver Lake Resort" in Orlando, FL (actually it's in Kissimmee. The first four results are results from TripAdvisor. Read the reviews and check their rating. the guests will give you some valuable insight into the quality of the resort, how updated the units are, great places to eat and get groceries, things to do, etc. Our family will only stay at resort that have high number of great reviews from other guests. Keep in mind, people by their nature are far more prone to post about poor experiences rather than incredible experiences. So the reviews are skewed a little bit. However, the great resorts still have far more positive reviews than negative reviews. If you seen review after review which has nothing good to say, odds are good that you might have the same experience.
My family stayed at the Silver Lake Resort in Kissimmee from Dec. 10 -17. We got a huge 1500 square foot, two bedroom condo, with flat screen TVs, granite counter tops, king size beds, etc. all at the no charge rate! The resort had a pool bar, incredible pools, kiddie pool., movie theater, tennis courts, game room, kids activities and was five minutes from Disney World. It was also a gated community so we all felt safe. My parents went with us, and we called ahead and requested two units next to each other, which is great for the kids to run next door to see grandma and grandpa!
We research probably 15 resorts and this one by far had the best reviews. We were asked very politely if we wanted to save $150 on Disney tickets and sit through a timeshare presentation. We declined and that was it. No pressure the rest of the week.
What accommodations can I typically get using my Certificate(s)?What accommodations can I typically get using my Certificate(s)?
All accommodations are for 7-night stays with a fixed check-in day and range in size from studios to 2+ bedrooms. Many offer living rooms, dining rooms, fully-equipped kitchens and laundry facilities. Resort locations are worldwide and based on availability. The Certificate(s) is designed to be used for off-peak/off season travel, although accommodations may be available throughout the year. Peak seasons can vary according to destination.
Typical search result display:
- TripAdvisor Rating (make sure to sort by TripAdvisor rating)
- Shows whether there are No Charge and / or upgrade accommodations available
- Click"view this resort" to view more images, etc. on the resort.
Some resorts we've stayed at that have been great!Some resorts we've stayed at that have been great!
- Massanutten Resort in McGaheysville, Virginia
- Silver Lake Resort in Kissimmee, FL
- The Historic Powhatan Resort in Williamsburg, VA | http://docs.lodgix.com/m/tutorials/l/54197-resortcerts-com-vacation-certificates | 2019-09-15T10:05:07 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['https://media.screensteps.com/image_assets/assets/000/971/195/original/358792f9-4061-47e4-8104-5ea27150a81c.png',
'What exactly are resort vacation certificates?'], dtype=object)
array(['https://media.screensteps.com/image_assets/assets/000/971/197/original/a0044f3a-8cc6-427f-989d-e3e2be820ab4.png',
'How do I redeem my certificate?'], dtype=object)
array(['https://media.screensteps.com/image_assets/assets/000/971/193/original/media_1327961943289.png',
'How to get incredible deals using ResortCert.com Vacation Certificates!'],
dtype=object) ] | docs.lodgix.com |
While you can configure and edit parts of HPE Consumption Analytics Portal in different sequences, HPE recommends that you follow the sequence listed below. Each step in the sequence includes a link to the article containing detailed information about that configuration step.
To help you navigate the articles in this process, the following clickable map appears at the top of each article:
The following video shows an overview of the configuration process:
How-to articles describe steps for completing an end-user task. To add a new how-to article, follow these steps:
Reference pages list essential facts about a feature or system. To add a new reference article, follow these steps: | https://docs.consumption.support.hpe.com/CCS/050Configuring_the_HPE_Consumption_Analytics_Portal | 2019-09-15T10:42:19 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.consumption.support.hpe.com |
Cumulative flow, lead time, and cycle time guidance
Azure DevOps Services | Azure DevOps Server 2019 | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013
You use cumulative flow diagrams (CFD) to monitor the flow of work through a system. The two primary metrics to track, cycle time and lead time, can be extracted from the chart.
Or, you can add the Lead time and cycle time control charts to your dashboards.
To configure or view CFD charts, see Configure a cumulative flow chart.
Sample charts and primary metrics
Chart metrics
CFD charts display the count of work items grouped by state/Kanban column over time. The two primary metrics to track, cycle time and lead time, can be extracted from the chart.
Note:
- The CFD widget (Analytics) and built-in CFD chart (work tracking data store) do not provide discrete numbers on Lead Time and Cycle Time. However, the Lead Time and Cycle Time widgets do provide these numbers.
There is a very tight, well defined correlation between Lead Time/Cycle Time and Work in Progress (WIP). The more work in progress, the longer the cycle time which leads to longer lead times. The opposite is also true—the less work in progress, the shorter the cycle and lead time is because the development team can focus on fewer items. This is a key reason why you can and should set Work In Progress limits on the Kanban board.
The count of work items indicates the total amount of work on a given day. In a fixed period CFD, a change in this count indicates scope change for a given period. In a continuous flow CFD, it indicates the total amount of work in the queue and completed for a given day.
Decomposing this work into specific Kanban board columns provides a view into where work is in the process. This provides insights on where work is moving smoothly, where there are blockages and where no work is being done at all. It's difficult to decipher a tabular view of the data, however, the visual CFD chart provides clear evidence that something is happening in a given way.
Identify issues, take appropriate actions
The CFD answers several specific questions and based on the answer, actions can be taken to adjust the process to move work through the system. Let's look at each of those questions here.
Will the team complete work on time?
This question applies to fixed period CFDs only. You gain an understanding of this by looking at the curve (or progression) of work in the last column of the Kanban board.
In this scenario it may be appropriate to reduce the scope of work in the iteration if it's clear that work, at a steady pace, is not being completed quickly enough. It may indicate the work was under estimated and should be factored into the next sprints planning.
There may however be other reasons which can be determined by looking at other data on the chart.
How is the flow of work progressing?
Is the team completing work at a steady pace? One way to tell this is to look at the spacing between the different columns on the chart. Are they of a similar or uniform distance from each other from beginning to end? Does a column appear to flat-line over a period of multiple days? Or, does it seem to "bulge"?
Two problems show up visually as flat lines and as bulges.
Mura, the lean term for flat lines and bulges, means unevenness and indicates a form of waste (Muda) in the system. Any unevenness in the system will cause bulges to appear in the CFD.
Monitoring the CFD for flat lines and bulges supports a key part of the Theory of Constraints project management process. Protecting the slowest area of the system is referred to as the drum-buffer-rope process and is part of how work is planned.
How do you fix flow problems?
You can solve the problem of lack of timely updates through daily stand-ups, other regular meetings, or scheduling a daily team reminder email.
Systemic flat-line problems indicate a more challenging problem (although you should rarely if ever see this). This problem means that work across the system has stopped. This may be the result of process-wide blockages, processes taking a very long time, or work shifting to other opportunities that aren't captured on the board.
One example of systemic flat-line can occur with a features CFD. Feature work can take much longer than work on user stories because features are composed of several stories. In these situations, either the slope is expected (as in the example above) or the issue is well known and already being raised by the team as an issue, in which case, problem resolution is outside the scope of this article to provide guidance.
Teams can proactively fix problems that appear as CFD bulges. Depending on where the bulge occurs, the fix may be different. As an example, let's suppose that the bulge occurs in the development process because running tests is taking much longer than writing code, or testers are finding may be finding a large number of bugs and continually transition the work back to the developers so the developers have a growing list of active work.
Two potentially easy ways to solve this problem are: 1) Shift developers from the development process to the testing process until the bulge is eliminated or 2) change the order of work such that work that can be done quickly is interwoven with work that takes longer to do. Look for simple solutions to eliminate the bulges.
Note
Because many different scenarios can occur which cause work to proceed unevenly, it's critical that you perform an actual analysis of the problem. The CFD will tell you that there is a problem and approximately where it is but you must investigate to get to the root cause(s). The guidance provided here indicate recommended actions which solve specific problems but which may not apply to your situation.
Did the scope change?
Scope changes apply to fixed period CFDs only. The top line of the chart indicates the scope of work because a sprint is pre-loaded with the work to do on the first day, this becomes a set level of work. Changes to this top line indicate worked was added or removed.
The one scenario where you can't track scope changes with a CFD occurs when the same number of works are added as removed on the same day. The line would continue to be flat. This is the primary reason why several charts should be used in conjunction with one another to monitor for specific issues. For example, the sprint burndown chart can also show scope changes.
Too much work in progress?
You can easily monitor whether WIP limits have been exceed from the Kanban board. However, you can also see monitor it from the CFD.
Not so oddly, a large amount of work in progress usually shows up as a vertical bulge. The longer there is a large amount of work in progress, the bulge will expand to become an oval which will indicate that the work in progress is negatively affecting the cycle and lead time.
A good rule of thumb for work in progress is that there should be no more than two items in progress per team member at any given time. The main reason for two items versus stricter limits is because reality frequently intrudes on any software development process.
Sometimes it takes time to get information from a stakeholder, or it takes more time to acquire necessary software. There are any number of reasons why work might be halted so having a secondary item to switch to provides a little bit of leeway. If both items are blocked, it's time to raise a red flag to get something unblocked—not just switch to yet another item. As soon as there are a large number of items in progress, the person working on those items will have difficulty context switching, are more likely to forget what they were doing, and likely incur mistakes.
Lead time versus cycle time
The diagram below illustrates how lead time differs from cycle time. Lead time is calculated from work item creation to entering a Completed state. Cycle time is calculated from first entering an In Progress state to entering a Completed state.
Illustration of lead time versus cycle time
If a work item enters a Completed state and then is reactivated and moved out of that state, then any additional time it spends in a Proposed/In Progress state will contribute to its lead/cycle time when it enters a Completed state for the second time.
If your team uses the Kanban board, you'll want to understand how your Kanban columns map to workflow states. For more information on configuring your Kanban board, see Add columns.
To learn more about how the system uses the state categories—Proposed, In Progress, and Completed—see Workflow states and state categories.
Plan using estimate delivery times based on lead/cycle times
You can use the average lead/cycle times and standard deviations to estimate delivery times.
When you create a work item, you can use your team's average lead time to estimate when your team will complete that work item. Your team's standard deviation tells you the variability of the estimate. Likewise, you can use cycle time and its standard deviation to estimate the completion of a work item once work has begun.
In the following chart, the average cycle time is 8 days. The standard deviation is +/- 6 days. Using this data, we can estimate that the team will complete future user stories about 2-14 days after they begin work. The narrower the standard deviation, the more predictable your estimates.
Example Cycle Time widget
Identify process issues
Review your team's control chart for outliers. Outliers often represent an underlying process issue. For example, waiting too long to complete pull request reviews or not resolving an external dependency in a timely manner.
As you can see in the following chart, which shows several outliers, several bugs took significantly longer to complete than the team's average. Investigating why these bugs took longer may help uncover process issues. Addressing the process issues can help reduce your team's standard deviation and improve your team's predictability.
Example Cycle Time widget showing several outliers
You can also see how process changes affect your lead and cycle time. For example, on May 15th the team made a concerted effort to limit the work in progress and address stale bugs. You can see that the standard deviation narrows after that date, showing improved predictability.
Try this next
Try this next
Feedback | https://docs.microsoft.com/en-us/azure/devops/report/dashboards/cumulative-flow-cycle-lead-time-guidance?view=azure-devops | 2019-09-15T09:52:46 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['_img/cfd-incomplete.png?view=azure-devops',
"Sample CFD with a half completed chart, dotted lines show the work won't be completed"],
dtype=object)
array(['_img/cycle-lead-time-concept-intro.png?view=azure-devops',
'Conceptual image of how cycle time and lead time are measured'],
dtype=object)
array(['_img/cycle-time-planning.png?view=azure-devops',
'Cycle Time widget'], dtype=object)
array(['_img/cycle-time-outliers.png?view=azure-devops',
'Cycle Time widget showing several outliers'], dtype=object)] | docs.microsoft.com |
Securing the console
Configuration is required to enable additional security around connections from the Diffusion™ console.
Allow the console to connect only on a specific connector
We strongly recommend that you only allow the console to connect to Diffusion through a single connector. The port this connector listens on can be blocked from connections from outside of your organization by your load balancer.
- In your etc/Connectors.xml configuration file, wherever the line <web-server>default<web-server> appears in a connector that receives external connections, replace it with a web server definition that contains only a client-service definition. For example:
<web-server <!-- This section enables HTTP-type clients for this Web Server --> <client-service <!-- This parameter is used to re-order out-of-order messages received over separate HTTP connections opened by client browsers. It is rarely necessary to set this to more than a few tens of seconds. If you attempt to set this value to more than one hour, a warning is logged and a timeout of one hour is used. --> <message-sequence-timeout>4s</message-sequence-timeout> <!-- This is used to control access from client web socket to diffusion. This is a REGEX pattern that will match the origin of the request (.*) matches anything so all requests are allowed --> <websocket-origin>.*</websocket-origin> <!-- This is used to control cross-origin resource sharing client connection to Diffusion This is a REGEX pattern that will match the origin of the request (.*) matches anything --> <cors-origin>.*</cors-origin> <!-- Enable compression for HTTP responses (Client and File). If the response is bigger than threshold --> <compression-threshold>256</compression-threshold> </client-service> </web-server>
- Create a new connector in your etc/Connectors.xml configuration file that defines a specific port that you use for internal connections to the console.
In this connector, set the value of the web-server element to default.
- In your load balancer, prevent outside traffic from having access to the port specified in the new connector.
- If required, apply additional connection restrictions.
- You can use a connection validation policy. For more information, see .
- You can set these restrictions in your load balancer.
Disable console features in the configuration (as required)
The actions that a user can perform using the console are controlled by roles and permissions. The principal that the user uses to log in to the console must have a role with the permissions required to perform an action in the console.
A principal with the ADMINISTRATOR or OPERATOR role can use all of the functions of the Diffusion console.
To restrict users to using a smaller set of console features, ensure they use a principal with a more restrictive set of roles and permissions. For more information, see Pre-defined roles.
This page last modified: 2018/10/28 | https://docs.pushtechnology.com/docs/6.2.2/manual/html/designguide/security/console_security.html | 2019-09-15T10:51:07 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.pushtechnology.com |
Configuring custom report templates
The application includes a variety of built-in templates for creating reports. These templates organize and emphasize asset and vulnerability data in different ways to provide multiple looks at the state of your environment’s security. Each template includes a specific set of information sections.
If you are new to the application, you will find built-in templates especially convenient for creating reports. To learn about built-in report templates and the information they include, see Report templates and sections.
As you become more experienced with the application and want to tailor reports to your unique informational needs, you may find it useful to create or upload custom report templates.
Fine-tuning information with custom report templates
Creating custom report templates enables you to include as much, or as little, scan information in your reports as your needs dictate. For example, if you want a report that lists assets organized by risk level, a custom report might be the best solution. This template would include only the Discovered System Information section. If you want a report that only lists vulnerabilities, you may create a document template with the Discovered Vulnerabilities section or create a data export template with vulnerability-related attributes.
You can also upload a custom report template that has been created by Rapid7 at your request to suit your specific needs. For example, custom report templates can be designed to provide high-level information presented in a dashboard format with charts for quick reference that include asset or vulnerability information that can be tailored to your requirements. Contact your account representative for information about having custom report templates designed for your needs. Templates that have been created for you will be provided to you.
After you create or upload a custom report template, it appears in the list of available templates on the Template section of the Create a report panel.
You must have permission to create a custom report template. To find out if you do, consult your Global Administrator. To create a custom report template, take the following steps:
- Click the Reports icon in the Web interface. OR Click the Create tab at the top of the page and then select Report from the drop-down list.
- Click Manage report templates. The Manage report templates panel appears.
- Click New. The Security Console displays the Create a New Report Template panel.
Editing report template settings
- Enter a name and description for the new template on the General section of the Create a New Report Template panel.
If you are a Global Administrator, you can find out if your license enables a specific feature. Click the Administration tab and then the Manage link for the Security Console. In the Security Console Configuration panel, click the Licensing link.
- Select the template type from the Template type drop-down list:
- With a Document template you will generate section-based, human-readable reports that contain asset and vulnerability information. Some of the formats available for this template type—Text, PDF, RTF, and HTML—are convenient for sharing information to be read by stakeholders in your organization, such as executives or security team members tasked with performing remediation.
- With an export template, the format is identified in the template name, either comma-separated-value (CSV) or XML files. CSV format is useful for integrating check results into spreadsheets, that you can share with stakeholders in your organization. Because the output is CSV, you can further manipulate the data using pivot tables or other spreadsheet features. See Using Excel pivot tables to create custom reports from a CSV file. To use this template type, you must have the Customizable CSV export featured enabled. If it is not, contact your account representative for license options. Note: Discovery scan results are not exportable in CSV format.
- With the Upload a template file option you can select a template file from a library. You will select the file to upload in the Content section of the Create a New Report Template panel.
The Vulnerability details setting only affects document report templates. It does not affect data export templates.
- Select a level of vulnerability details from the drop-down list in the Content section of the Create a New Report Template panel. Vulnerability details filter the amount of information included in document report templates:
- None excludes all vulnerability-related data.
- Minimal (title and risk metrics) excludes vulnerability solutions.
- Complete except for solutions includes basic information about vulnerabilities, such as title, severity level, CVSS score, and date published.
- Complete includes all vulnerability-related data.
- Select your display preference:
- Display asset names only
- Display asset names and IP addresses
- Select the sections to include in your template and click Add. See Report templates and sections. Set the order for the sections to appear by clicking the up or down arrows.
- (Optional) Click Remove to take sections out of the report.
- (Optional) Add the Cover Page section to include a cover page, logo, scan date, report date, and headers and footers. See Adding a custom logo to your report for information on file formats and directory location for adding a custom logo.
- (Optional) Clear the check boxes to Include scan data and Include report date if you do not want the information in your report.
- (Optional) Add the Baseline Comparison section to select the scan date to use as a baseline. See Selecting a scan as a baseline for information about designating a scan as a baseline.
- (Optional) Add the Executive Summary section to enter an introduction to begin the report.
- Click Save.
Creating a custom report template based on an existing template
NOTE
Only certain legacy reports templates can be copied for customization purposes. Newer report templates do not support this feature.
You can create a new custom report template based on a legacy template or existing custom report template. This allows you to take advantage of some of a template's useful features without having to recreate them as you tailor a template to your needs.
To create a custom template based on an existing template, take the following steps:
- Click the Reports icon in the Web interface.
- Click Manage report templates. The Manage report templates panel appears.
- From the table, select a template that you want to base a new template on. OR If you have a large number of templates and don't want to scroll through all of them, start typing the name of a template in the Find a report template text box. The Security Console displays any matches. The search is not case-sensitive.
- Hover over the tool icon of the desired template. If it is a legacy built-in template, you will have the option to copy and then edit it. If it is a custom template, you can edit it directly unless you prefer to edit a copy. Select an option.
The Security Console displays the Create a New Report Template panel.
- Edit settings as described in Editing report template settings. If you are editing a copy of a template, give the template a new name.
- Click Save. The new template appears in the template table.
Adding a custom logo to your report
By default, a document report cover page includes a generic title, the name of the report, the date of the scan that provided the data for the report, and the date that the report was generated. It also may include the Rapid7 logo or no logo at all, depending on the report template. See Cover Page. You can easily customize a cover page to include your own title and logo.
Logos can be JPEG and PNG logo formats.
To display your own logo on the cover page:
- Copy the logo file to the designated directory of your installation:
- Windows -
C:\Program Files\Rapid7\nexpose\shared\reportImages\custom\silo\default
- Linux -
/opt/rapid7/nexpose/shared/reportImages/custom/silo/default
- Go to the Cover Page Settings section of the Create a New Report Template panel.
- Enter the name of the file for your own logo, preceded by the word “image:” in the Add logo field. Example: image:file_name.png. Do not insert a space between the word “image:” and the file name.
- Enter a title in the Add title field.
- Click Save.
- Restart the Security Console. Make sure to restart before you attempt to create a report with the custom logo. | https://docs.rapid7.com/nexpose/configuring-custom-report-templates/ | 2021-01-16T03:52:29 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/s_Create_a_new_report_template.jpg',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/s_nx_report_copy_template.jpg',
None], dtype=object) ] | docs.rapid7.com |
Follow this guide to start using the Veezla | AT powered by Lob
Install AT from AppExchange. Click on the Get It Now button and select the desired destination (sandbox or production).
There are two out of the box permissions sets (Address Admin & Address User) that can be assigned to team members. Each permission set provides a level of access to the AT app. You can clone any of the permission sets to personalize the access based on your company's requirements.
An app called "Veezla Settings" is provided to help admins get set up and configure AT.
Navigate to Address Settings.
Lightning: Navigate to the app launcher, select "Veezla Settings" app, and click on the "Address Settings" tab.
Classic: Navigate to the application drop-down button (blue button in the upper right-hand corner), select "Veezla Settings" app, and click on the "Address Settings " tab
Every.
Lob API keys
Test and live API keys are located in Lob account's dashboard. Click on the dashboard button (top right corner of the page), your username, and select the API Keys tab.
Connect Accounts
Create, edit, and delete service provider accounts in Address Settings. If you have multiple accounts, you can connect all of them easily to the AT package. Then you can either set a default account for address management features or use a picklist on address related objects with all available accounts.
By default, we provide setup that only requires API key for one test key and one live key.
Connect with Test key: Copy the key from your Lob account and paste it to the Test Environment field in Veezla Settings.
Connect with Live key: Copy the key from your Lob account, paste it to the Live Environment field, and enable the Live mode checkbox in Veezla Settings
This is an optional step depending on which address blocks would you like to use. The package provides additional layouts for the Account and Contact objects that include all the necessary fields for address management. Page layout names are Veezla Account (Address) and Veezla Contact (Address).
This is an optional step depending on which object would you like to use for address verification. Address Central is a customizable and fast Lightning Component that makes managing addresses seamless. By default, Address Central is available for our Address object, Accounts, and Contacts. Also it can be deployed on any object through Lightning App Builder.
Veezla's Address Management & Verification solution is powered by Lob API. It is organized around REST and designed to have predictable, resource-oriented URLs and uses HTTP response codes to indicate any API errors.
Lob account creation is required to use Veezla Address Management & Verification through their API keys.
Navigate to Lob's sign up page.
Enter the information below and click "sign up free."
Follow our Quick Start guide to connect your Lob account to Veezla AT. | https://docs.veezla.io/address-tools/getting-started-1 | 2021-01-16T01:51:33 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.veezla.io |
Automatic installation is the easiest option — WordPress will handles the file transfer, and you won’t need to leave your web browser. To do an automatic install of YITH Proteo, log in to your WordPress dashboard, navigate to the Themes menu, and click “Add New.”
In the search field type “YITH Proteo” then click “Search Themes”. Once you’ve found the theme, you can view details about and install it by click “Install Now” and WordPress will take it from there. | https://docs.yithemes.com/yith-proteo/category/installation/ | 2021-01-16T02:11:50 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.yithemes.com |
Asynchronous Persistency - Mirror Service
The GigaSpaces Mirror Service (write behind) provides reliable asynchronous persistency which. Hibernate Space Persistency for details on how to use the built-in
HibernateSpaceSynchronizationEndpoint.
The Data Grid Processing Unit
The
cluster-config.mirror-service
space settings specify the interaction between the data grid primary spaces and the Mirror Service. The
mirrored="true"
space element tag enables the replication mechanism from the data grid Primary spaces to the Mirror Service. Once
mirrored="true" has been specified, all data grid members will be Mirror aware and will be delegating their activities to the Mirror service. The data grid
mirrored="true" with the Data-Grid PU, you should use the following property instead:
cluster-config.mirror-service.enabled=true
The data grid.> Hibernate Space Persistency for full details about the EDS properties the you may configure.
You must use a Data-Grid cluster schema that includes a backup (i.e.
partitioned) when running a Mirror Service. Without having backup, the Primary data grid Spaces will not replicate their activities to the Mirror Service. For testing purposes, in case you don't want to start backup spaces, you can use the
partitioned
Mirror tag. The Mirror Service itself is not a regular Space. It is dispatching the operations which have been replicated from the data grid undeployed, the mirror service must be undeployed last. This will ensure that all data is persisted properly through mirror async persistency. Before primary space is undeployed data grid data grid , this may improve the database update rate (since multiple partitions will be sending their updates to the Mirror, which can batch all cumulative updates to the database), but this will impact the data grid transaction latency.
You might want to tune the data grid and the Mirror activity to push data into the database faster. Here are some recommendations you should consider:
- Optimize the Space Class structure to include fewer fields. Less fields means less overhead when the data grid):
<prop key="cluster-config.mirror-service.bulk-size">10000</prop> <prop key="cluster-config.mirror-service.interval-millis">5000</prop> <prop key="cluster-config.mirror-service.interval-opers">50000</prop>.
- Use stateless session with the Hibernate Space Persistency configuration. See the
StatelessHibernateSpaceSynchronizationEndpoint.
-:
<prop key="cluster-config.mirror-service.supports-partial-update">true</prop>
If you are using a custom implementation of the data source you also need to implement how the partial update is persisted. See com.gigaspaces.sync.DataSyncOperation for more details. the Mirror Monitor JMX utility for graphical mirror service monitoring via JMX. | https://docs.gigaspaces.com/latest/dev-java/asynchronous-persistency-with-the-mirror.html | 2021-01-16T03:32:12 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['/attachment_files/data-grid-async-persist.jpg',
'data-grid-async-persist.jpg'], dtype=object)
array(['/attachment_files/mirror_ui_stats.jpg', 'image'], dtype=object)] | docs.gigaspaces.com |
Load document from local disk
GroupDocs.Parser provides the functionality to extract data from documents on the local disk.
The following example shows how to load the document from the local disk:
// Set the filePath String filePath = Constants.SamplePdf; // Create an instance of Parser class with the filePath try (Parser parser = new Parser(filePath)) { //. | https://docs.groupdocs.com/parser/java/load-document-from-local-disk/ | 2021-01-16T03:40:02 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.groupdocs.com |
LikeCoin Community —— The Republic of Liker Land —— governance by Liquid Democracy
Anyone who owes LikeCoin is a stakeholder, every stakeholder can have a stake in the governance of the Republic of Liker Land.
LikeCoin is a token, LikeCoin chain is it's blockchain, adding together is a DAO (decentralized autonomous/democratic organization). The way to come into group decisions by the DAO is called "Liquid Democracy".
In Liquid Democracy, all the holders of LikeCoin are stakeholders. The amount of LikeCoin owned by an individual or organization reflects their stakes. Liker delegates LikeCoin to a validator that he/she trusts, just the same as representative democracy that people vote for a representative. But liquid democracy is one step ahead, stakeholder decides who to choose and the level of empowerment because more LikeCoin delegation represent more voting power, and also multiple delegations to different validators is allowed. Moreover the delegation to a validator can be changed at any time, there is no fixed period for the term of office, stakeholder can choose another representative at any time. Validator has to be responsible for their stakeholders all the time. Therefore come to this stage, the foundation acts as a government and its power and executions are controlled by validators who are the representative of citizens. | https://docs.like.co/user-guide/liquid-democracy | 2021-01-16T03:42:07 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.like.co |
The OpenStack Image service (Glance) provides a REST API for storing and managing virtual machine images and snapshots. Glance requires you to configure a back end for storing images.
MCP supports the following options as Glance back end:
A highly scalable distributed object storage that is recommended as an Image storage for environments with a large number of images and/or snapshots. If used as a back end for both image storage and ephemeral storage, Ceph can eliminate caching of images on compute nodes and enable copy-on-write of disk images, which in large clouds can save a lot of storage capacity.
A distributed network file system that allows you
to create a reliable and redundant data storage for image files.
This is the default option for an Image store with the
File
back end in MCP.
The default back end used in Mirantis Reference Architecture is Ceph cluster.
See also | https://docs.mirantis.com/mcp/q4-18/mcp-ref-arch/openstack-environment-plan/storage-plan/image-storage-plan.html | 2021-01-16T01:58:05 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.mirantis.com |
Feature: #85256 - Install TYPO3 on SQLite¶
See Issue #85256
Description¶
The TYPO3 web installer allows to install the system on
SQLite DBMS.
This platform can be selected if
pdo_sqlite is available in PHP, which is
often the case. SQLite can be a nice DBMS for relatively small instances and has
the advantage that no further server side daemon is needed.
Administrators must keep an eye on security if using this platform:
In SQLite, a database is stored in a single file. In TYPO3, its default location
is the var/sqlite path of the instance which is derived from environment variable
TYPO3_PATH_APP. If that variable is not set which is
often the case in non-composer instances, the database file will end up in the
web server accessible document root directory :file:`typo3conf/`!
To prevent guessing the database name and simply downloading it, the installer appends
a random string to the database filename during installation. Additionally, the demo
Apache
_.htaccess file prevents downloading
.sqlite files. The demo
MicroSoft IIS web server configuration in file
_web.config comes with the same
restriction.
Administrators installing TYPO3 using the SQLite platform should thus test if the database is downloadable from the web and take measures to prevent this by either configuring the web server to deny this file, or - better - by moving the config folder out of the web root, which is good practice anyway. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.4/Feature-85256-InstallTYPO3OnSQLite.html | 2021-01-16T03:26:05 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.typo3.org |
This is version 2.3 of the AWS Elemental Delta documentation. This is the latest version. For prior versions, see the Previous Versions section of AWS Elemental Delta Documentation.
About This Guide
This guide is intended for engineers who are performing the initial configuration on an AWS Elemental Delta cluster.
The full suite of configuration topics for AWS Elemental Delta is described in the following table.
Phase 2 of Installation
This guide provides detailed information on phase 2 for setting up the AWS Elemental Delta cluster:
Configure other Ethernet interfaces as required.
Configure DNS server, NTP servers, and firewall.
Add mount points to access remote servers.
Create AWS credentials, if applicable.
Configure backup of the database.
Configure SNMP traps.
Enable user authentication so that users must log in to use the Delta product.
Configure the two Delta nodes into a redundant cluster.
Prerequisite Knowledge
We assume that you know how to:
Connect to the AWS Elemental Delta web interface using your web browser.
Log in to a remote terminal (Linux) session in order to work via the command line interface.
To receive assistance with your AWS Elemental appliances and software
products, see the forums and other helpful tools on the AWS Elemental Support Center | https://docs.aws.amazon.com/elemental-delta/latest/configguide/about-config.html | 2021-01-16T03:46:21 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
-
Application Switching and Traffic Management Features
Application Acceleration Features
Application Security and Firewall Features
Application Visibility Feature
- NetScaler NetScaler NetScaler appliances. The bandwidth and number of concurrent sessions can be improved significantly.
For more information, see Load Balancing.
Traffic Domains
Traffic domains provide a way to create logical ADC partitions within a single NetScaler NetScaler appliance. Enabling NAT on the appliance enhances the security of your private network, and protects it from a public network such as the Internet, by modifying your network’s source IP addresses when data passes through the NetScaler appliance.
The NetScaler appliance supports the following types of network address translation:
INAT: In Inbound NAT (INAT), an IP address (usually public) configured on the NetScaler NetScaler appliance and the IP address of the server.
RNAT: In Reverse Network Address Translation (RNAT), for a session initiated by a server, the NetScaler NetScaler NetScaler appliance. A stateful NAT64 configuration involves an NAT64 rule and an NAT64 IPv6 prefix.
For more information, see Configuring Network Address Translation.
Multipath TCP Support
NetScaler/netscaler/11 NetScaler NetScaler appliance or NetScaler virtual appliance on a cloud instance to a NetScaler NetScaler. | https://docs.citrix.com/en-us/netscaler/11-1/getting-started-with-netscaler/features/switching-and-traffic-management-features.html | 2021-01-16T03:04:07 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.citrix.com |
Managing and Using Numbers in Atmosphere® CPaaS
Atmosphere® offers a number of channels that brands can use to tailor their communications and reach customers where they are, from SMS notifications to chatbots in social messaging platforms. Your ‘numbers’ are the items that allow those communications to happen – everything from phone numbers to your Facebook Messenger credentials.
My Numbers
The Numbers menu option in the navigation at the top of the page takes you to the ‘My Numbers’ page. This is where you can browse the numbers in your account. ‘Numbers’ refers to any item that might be used for communications: phone numbers, toll-free numbers, short codes, alphanumeric sender IDs for one-way text messaging, Facebook Messenger credentials, and WhatsApp IDs all show up on this screen.
If you have any flows assigned to a number, they are listed under the Voice Flow or Text Flow section. The Status column indicates, with an eye icon, whether the number is ready to be used. A number has to be CPaaS enabled in order for it to be used in Atmosphere®. This can be done by clicking the CPaaS toggle on the far right.
Pro and Enterprise Users: Order a New Phone Number
Before you can experience everything the Atmosphere® CPaaS platform has to offer, you’ll probably need a phone number. Numbers can be assigned to flows to trigger automated processes when they are called or texted; they can also be used to send voice or text campaigns to your audiences. Here’s how you can get one for yourself:
Click the Add New button to the right of the Phone Number table. The Order New Numbers page opens. Follow the steps below.
Note: All phone numbers come with a small monthly fee. Only U.S.-based phone numbers and toll-free numbers are currently available. Short codes, and other number types are coming soon!
- Select the desired country.
- Enter your search pattern. For the search pattern filter, four digits are required.
- Tip: Can be used to search for a specific area code by prepending ‘1’ for the U.S. country code.
- Click Search to display numbers matching your criteria.
- Select one or more numbers.
- Scroll to the bottom of the page and click Add Numbers.
- You will see a Confirm Order alert. Click Yes.
- The Order Placed confirmation alert opens. Click Done.
- Click the Numbers menu option at the top of the page to navigate back to My Numbers, where you can see the new number, check its provisioning status, and use it in flows when it is ready.
Pro and Enterprise Users: Assign a Flow to a Number
To assign a flow to a number, first go into the SmartFlows' Number Assignment page. Select the number you want to assign and then click the Assign Flow button.
Note: Only numbers that are CPaaS enabled will appear here. To enable a number, go into the Atmosphere® Portal My Number page, and switch the number’s CPaaS toggle to On.
Any flows you have developed will appear in the Assign Flow window.
Select the flow you want the number assigned to and click Assign.
Once it’s assigned, call or text to try it out! | https://docs.intelepeer.com/Atmosphere/Content/Getting-Started/Managing-and-Using-Numbers-in-Atmosphere-CPaaS.htm | 2021-01-16T03:11:49 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['../Resources/Images/Atmosphere Portal My Numbers Phone Number section features.png',
None], dtype=object)
array(['../Resources/Images/Atmosphere Portal SmartFlows Number Assignment Assign Flow.png',
None], dtype=object)
array(['../Resources/Images/Atmosphere Portal SmartFlows Assign Flow page.png',
None], dtype=object) ] | docs.intelepeer.com |
Contracts Architecture
Features such as contract upgrades and Ethereum Package linking involve interacting with a number of smart contracts. While regular usage of the CLI does not require awareness of these low-level details, it is still helpful to understand how everything works under the hood.
The following sections describe the contract achitecture behind both upgrades and Ethereum Packages.
Upgrades
ProxyAdmin
ProxyAdmin is a central admin for all proxies on your behalf, making their management as simple as possible.
As an admin of all proxy contracts it is in charge of upgrading them, as well as transferring their ownership to another admin. This contract is used to complement the Transparent Proxy Pattern, which prevents an admin from accidentally triggering a proxy management function when interacting with their instances.
ProxyAdmin is owned by its deployer (the project owner), and exposes its administrative interface to this account.
A
ProxyAdmin is only deployed when you run an
oz create (or
oz create2) command for the first time. You can force the CLI to deploy one by running
oz push --deploy-proxy-admin.
Ownership Transfer
The
oz set-admin CLI command is used to transfer ownership, both of any single contract or of the entire project (by transferring the ownership of the
ProxyAdmin contract itself).
A contract’s ownership is transferred by providing its address and the new admin’s:
$ npx oz set-admin [MYCONTRACT_ADDRESS] [NEW_ADMIN_ADDRESS]
To instead transfer the whole project, just provide the new admin address:
$ npx oz set-admin [NEW_ADMIN_ADDRESS]
Contract Upgrades via
ProxyAdmin
The
ProxyAdmin.sol also responsible for upgrading our contracts. When you run the
oz upgrade command, it goes through
ProxyAdmin’s
upgrade method. The
ProxyAdmin contract also provides another method
getProxyImplementation which returns the current implementation of a given proxy.
You can find your
ProxyAdmin contract address in
.openzeppelin/<network>.json under the same name.
// .openzeppelin/<network.json> "proxyAdmin": { "address": <proxyAdmin-address> }
ProxyFactory
ProxyFactory is used when creating contracts via the
oz create2 command, as well as when creating minimal proxies. It contains all the necessary methods to deploy a proxy through the
CREATE2 opcode or a minimal non-upgradeable proxy.
This contract is only deployed when you run
openzeppelin create2 or
openzeppelin create --minimal for the first time. You can force the CLI to deploy it by running
openzeppelin push --deploy-proxy-factory.
Ethereum Packages
App
App is the project’s main entry point. Its most important function is to manage your project’s "providers". A provider is basically an Ethereum Package identified by a name at a specific version. For example, a project may track your application’s contracts in one provider named "my-application" at version "0.0.1", an OpenZeppelin Contracts provider named "@openzeppelin/contracts-ethereum-package" at version "2.0.0", and a few other providers. These providers are your project’s sources of on-chain logic.
The providers are mapped by name to
ProviderInfo structs:
// App.sol ... mapping(string => ProviderInfo) internal providers; struct ProviderInfo { Package package; uint64[3] version; } ...
When you upgrade one of your application’s smart contracts, it is your application provider named "my-application" that is bumped to a new version, e.g. from "0.0.1" to "0.0.2". On the other hand, when you decide to use a new version of the OpenZeppelin Ethereum Package in your project, it is the "@openzeppelin/contracts-ethereum-package" provider which is now pointed at the "2.0.1" version of the package, instead of "2.0.0".
An Ethereum Package is defined by the
Package contract, as we’ll see next.
Package
A
Package contract tracks all the versions of a given Ethereum Package. Following the example above, one package could be the "application package" associated to the name "my-application" containing all the contracts for version "0.0.1" of your application, and all the contracts for version "0.0.2" as well. Alternatively, another package could be an Ethereum Package associated to the name "@openzeppelin/contracts-ethereum-package" which contains a large number of versions "x.y.z" each of which contains a given set of contracts.
The versions are mapped by a semver hash to
Version structs:
// Package.sol ... mapping (bytes32 => Version) internal versions; struct Version { uint64[3] semanticVersion; address contractAddress; bytes contentURI; } ...
ImplementationDirectory
A version’s
contractAddress is an instance of the A
ImplementationDirectory contract, which is basically a mapping of contract aliases (or names) to deployed implementation instances. Continuing the example, your project’s "my-application" package for version "0.0.1" could contain a directory with the following contracts:
Directory for version "0.0.1" of the "my-application" package
Alias: "MainContract", Implementation: "0x0B06339ad63A875D4874dB7B7C921012BbFfe943"
Alias: "MyToken", Implementation: "0x1b9a62585255981c85Acec022cDaC701132884f7"
While version "0.0.2" of the "my-application" package could look like this:
Directory for version "0.0.2" of the "my-application" package
Alias: "MainContract", Implementation: "0x0B06339ad63A875D4874dB7B7C921012BbFfe943"
Alias: "MyToken", Implementation: "0x724a43099d375e36c07be60c967b8bbbec985dc8" ←-- this changed
Notice how version "0.0.2" uses a new implementation for the "MyToken" contract.
Likewise, different versions of the "@openzeppelin/contracts-ethereum-package" Ethereum Package could contain different implementations for persisting aliases such as "ERC20", "ERC721", etc.
An
ImplementationDirectory is a contract that adopts the
ImplemetationProvider interface, which simply requires that for a given contract alias or name, the deployed address of a contract is provided. In this particular implementation of the interface, an
ImplementationDirectory can be frozen, indicating that it will no longer be able to set or unset additional contracts and aliases. This is helpful for making official releases of Ethereum Packages, where the immutability of the package is guaranteed.
Other implementations of the interface could provide contracts without such a limitation, which makes the architecture pretty flexible, yet secure. | https://docs.openzeppelin.com/cli/2.6/contracts-architecture | 2021-01-16T02:26:15 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.openzeppelin.com |
Caution
Starting from the MCP 2019.2.9 update, the Gainsight integration service is considered as deprecated.
Gainsight is a customer relationship management (CRM) tool and extension for Salesforce. Gainsight integration service queries Prometheus for the metrics data and sends the data to Gainsight. Mirantis uses the collected data for further analysis and reports to improve the quality of customer support. For more information, see MCP Reference Architecture: StackLight LMA components.
Note
Gainsight formats the data using
Single Quote for
Quote Char
and commas as separators.
To enable Gainsight integration service on an existing StackLight LMA deployment:
Log in to the Salt Master node.
Update the system level of your Reclass model.
Add the classes and parameters to
stacklight/client.yml as required:
For OpenStack environments, add the default Openstack-related metrics:
- system.prometheus.gainsight.query.openstack
Add the main Gainsight class:
- system.docker.swarm.stack.monitoring.gainsight
Specify the following parameters:
parameters: _param: docker_image_prometheus_gainsight: docker-prod-local.artifactory.mirantis.com/openstack-docker/gainsight:${_param:mcp_version} gainsight_csv_upload_url: <URL_to_Gainsight_API> gainsight_account_id: <customer_account_ID_in_Salesforce> gainsight_environment_id: <customer_environment_ID_in_Salesforce> gainsight_app_org_id: <Mirantis_organization_ID_in_Salesforce> gainsight_access_key: <Gainsight_access_key> gainsight_job_id: <Gainsight_job_ID> gainsight_login: <Gainsight_login> gainsight_csv_retention: <retention_in_days>
Note
To obtain the values for the above parameters, contact Mirantis Customer Success Team through [email protected].
The retention period for CSV files is set to 180 days by default.
Optional. Customize the frequency of CSV uploads to Gainsight by
specifying the
duration parameter in the
prometheus block.
Example:
_param: prometheus: gainsight: crontab: duration: '0 0 * * *'
Refresh Salt grains and pillars:
salt '*' state.sls salt.minion.grains && salt '*' mine.update && salt '*' saltutil.refresh_pillar
Deploy the configuration files for the Gainsight Docker service:
salt -C 'I@docker:swarm' state.sls prometheus.gainsight
Deploy the new
monitoring_gainsight Docker service:
salt -C 'I@docker:swarm:role:master' state.sls docker.client | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/lma/add-new-features-to-existing-deployment/enable-gainsight-integration.html | 2021-01-16T02:40:02 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.mirantis.com |
This feature is only available in the PRO version.
The code for this integration is under the following folder:
src/app/firebase
We are going to use AngularFire plugin in order to connect our Ionic Angular application with Firebase.
In order to install the plugin we have to run the following command in our ionic project:
npm install firebase @angular/fire --save
Now, you need a firebase project to connect to this app. Go to firebase console and start a new project (or select an existing one). Once you have created the firebase project you will be redirected to/:
environment.tsexport.
In Ionic 5 Full Starter App PRO Version you will find Firebase Authentication and Firebase CRUD features.
For the CRUD integration we will use Cloud Firestore Database.
Cloud Firestore is a NoSQL, document-oriented database..
Learn more about Data Model in Firebase Firestore. | https://ionic-5-full-starter-app-docs.ionicthemes.com/firebase-integration | 2021-01-16T02:47:37 | CC-MAIN-2021-04 | 1610703499999.6 | [] | ionic-5-full-starter-app-docs.ionicthemes.com |
Proteo theme adds some options to page and posts.
Title icon
Display an icon in page/post title. The icon can be chosen from a list showing the preview of the chosen icon.
Sidebar position
Sidebar position can be selected globally or specifically for the individual post and page. Use
inherit value to use global settings.
Sidebar chooser
Choose the sidebar you like from a list of registered sidebars.
Header and footer
Use this option to hide header and footer or the page. This is very usefull if you want to achieve a particular and unique design of a page (landing page, squeeze page and similar).
This option is also useful if you want to use block menus instead of standard navigation menus.
Header slider
If you use YITH Slider for page builders plugin (it’s free on wordpress.org) you have another option named Header slider.
This option helps to integrate previously created sliders into your page header section.
Please note that this option is available only for pages and not for posts. | https://docs.yithemes.com/yith-proteo/category/page-post-options/ | 2021-01-16T02:58:51 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.yithemes.com |
Use.
Where: This change applies to all communities accessed through Lightning Experience and Salesforce Classic in Enterprise, Performance, Unlimited, and Developer editions.
Who: Users with the following licenses or their login equivalents can use sharing sets.
- Customer Community
- Customer Community Plus
- Partner Community
- Lightning External Apps
- Lightning External Apps Plus
How: In Setup, enter Communities Settings in the Quick Find box, then select Communities Settings. In the Sharing Sets related list, click New. Under Configure Access, set up the access mapping. For the user, select Contact.RelatedAccount, and for the target object, select Account.
| http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_networks_sharing_sets_contacts_multiple.htm | 2019-03-18T16:43:21 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['release_notes/networks/images/rn_networks_sharing_sets_multiple_ga_setup.png',
'Configure Access setup section'], dtype=object) ] | releasenotes.docs.salesforce.com |
Simple Microservices Architecture on AWS
In the past, typical monolithic applications were built using different layers, for example, a user interface (UI) layer, a business layer, and a persistence layer. A central idea of a microservices architecture is to split functionalities into cohesive “verticals”—not by technological layers, but by implementing a specific domain. The following figure depicts a reference architecture for a typical microservices application on AWS.
User Interface
Modern web applications often use JavaScript frameworks to implement a single-page application that communicates with a RESTful API. Static web content can be served using Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront.
Note
CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content, and other web assets.
Since clients of a microservice are served from the closest edge location and get responses either from a cache or a proxy server with optimized connections to the origin, latencies can be significantly reduced. However, microservices running close to each other don’t benefit from a CDN. In some cases, this approach might even add more latency. It is a best practice to implement other caching mechanisms to reduce chattiness and minimize latencies.
Microservices
The API of a microservice is the central entry point for all client requests. The application logic hides behind a set of programmatic interfaces, typically a RESTful web services API. This API accepts and processes calls from clients and might implement functionality such as traffic management, request filtering, routing, caching, and authentication and authorization.
Many AWS customers use the Elastic Load Balancing (ELB) Application Load Balancer together with Amazon EC2 Container Service (Amazon ECS) and Auto Scaling to implement a microservices application. The Application Load Balancer routes traffic based on advanced application-level information that includes the content of the request.
Note
ELB automatically distributes incoming application traffic across multiple Amazon EC2 instances.
The Application Load Balancer distributes incoming requests to Amazon ECS container instances running the API and the business logic.
Note
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon EC2 Container Service (Amazon ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
Amazon ECS container instances are scaled out and scaled in, depending on the load or the number of incoming requests. Elastic scaling allows the system to be run in a cost-efficient way and also helps protect against denial of service attacks.
Note
Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define.
Containers
A common approach to reducing operational efforts for deployment is container-based deployment. Container technologies like Docker have increased in popularity in the last few years due to the following benefits:
Portability – Container images are consistent and immutable, that is, they behave the same no matter where they are run (on a developer’s desktop as well as in a production environment).
Productivity – Containers increase developer productivity by removing cross-service dependencies and conflicts. Each application component can be broken into different containers running a different microservice.
Efficiency – Containers allow the explicit specification of resource requirements (CPU, RAM), which makes it easy to distribute containers across underlying hosts and significantly improve resource usage. Containers also have only a light performance overhead compared to virtualized servers and efficiently share resources on the underlying operating system.
Control – Containers automatically version your application code and its dependencies. Docker container images and Amazon ECS task definitions allow you to easily maintain and track versions of a container, inspect differences between versions, and roll back to previous versions.
Amazon ECS eliminates the need to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, load balancers, Amazon Elastic Block Store (Amazon EBS) volumes, and AWS Identity and Access Management (IAM) roles.
After a cluster of EC2 instances is up and running, you can define task definitions and services that specify which Docker container images to run on the cluster. Container images are stored in and pulled from container registries, which may exist within or outside your AWS infrastructure. To define how your applications run on Amazon ECS, you create a task definition in JSON format. This task definition defines parameters for which container image to run, CPU, memory needed to run the image, how many containers to run, and strategies for container placement within the cluster. Other parameters include security, networking, and logging for your containers.
Amazon ECS supports container placement strategies and constraints to customize how Amazon ECS places and terminates tasks. A task placement constraint is a rule that is considered during task placement. You can associate attributes, essentially key/value pairs, to your container instances and then use a constraint to place tasks based on these attributes. For example, you can use constraints to place certain microservices based on instance type or instance capability, such as GPU-powered instances.
Docker images used in Amazon ECS can be stored in Amazon EC2 Container Registry (Amazon ECR). Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry.
Note
Amazon EC2 Container Registry (Amazon ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (Amazon ECS), simplifying your development to production workflow.
Data Store
The data store is used to persist data needed by the microservices. Popular stores for session data are in-memory caches such as Memcached or Redis. AWS offers both technologies as part of the managed Amazon ElastiCache service.
Note
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
Putting a cache between application servers and a database is a common mechanism to alleviate read load from the database, which, in turn, may allow resources to be used to support more writes. Caches can also improve latency.
Relational databases are still very popular for storing structured data and business objects. AWS offers six database engines (Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon Aurora) as managed services via Amazon Relational Database Service (Amazon RDS).
Note
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on applications and business.
Relational databases, however, are not designed for endless scale, which can make it very hard and time-intensive to apply techniques to support a high number of queries.
NoSQL databases have been designed to favor might be well suited to NoSQL persistence. It is important to understand that NoSQL-databases have different access patterns than relational databases. For example, it is not possible to join tables. If this is necessary, the logic has to be implemented in the application.
Note
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
You can use Amazon DynamoDB and fast performance.
DynamoDB is designed for scale and performance. In most cases, DynamoDB response times can be measured in single-digit milliseconds. However, there are certain use cases that require response times in microseconds. For these use cases, DynamoDB Accelerator (DAX) provides caching capabilities for accessing eventually consistent data. DAX does all the heavy lifting required to add in- memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.
DynamoDB provides an auto scaling feature to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
Provisioned throughput is the maximum amount of capacity that an application can consume from a table or index. When the workload decreases, Application
Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity. | https://docs.aws.amazon.com/aws-technical-content/latest/microservices-on-aws/simple-microservices-architecture-on-aws.html | 2019-03-18T16:11:06 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['images/typical-microservices-application-aws.png', None],
dtype=object) ] | docs.aws.amazon.com |
JTAG pins:
Watchpoint Unit registers, accessible in Supervisor and Emulator modes:
Two operations implement instruction watchpoints:
Two operations implement data watchpoints:
p n, to gdbproxy.
handle_read_single_register_command.
handle_read_single_registercommand calls into
bfin_read_single_register.
EMUDAT = Rn;is generated and scanned into EMUIR register.
Source Code of Blackfin gdbproxy
Document of Blackfin gdbproxy
Another very interesting JTAG application
Thank you for your attention.
Any questions? | https://docs.blackfin.uclinux.org/doku.php?id=presentations:toolchain:gdbproxy | 2019-03-18T15:41:09 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.blackfin.uclinux.org |
From an existing Bugzilla
If you have an existing Bugzilla you’d like to move to devZing we are happy to do that at no additional charge. We’ll even do it more than once to give you a chance to try us out with your data without having to commit. Please contact support and we’ll let you know how to get a backup of your current Bugzilla and get it over to us.
From a different bug tracker
There isn’t a straightforward way to import bugs into Bugzilla from another bug tracking system or from a spreadsheet. That said, we’ve developed a number of internal tools to assist with migrating your data into Bugzilla. Please contact support to discuss what might be appropriate for your situation. | https://docs.devzing.com/importing-bugs-into-bugzilla/ | 2019-03-18T15:45:48 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.devzing.com |
Versions Compared
Old Version 15
changes.mady.by.user Javier Lopez
Saved on
New Version Current
changes.mady.by.user Javier Lopez
Saved on
Key
- This line was added.
- This line was removed.
- Formatting was changed.:
Edit this file with any text editor and set the rest → host attribute to ""::
Getting information about projects
You can use the command projects to query projects data.
For getting all metadata from a particular project you can use the info subcommand. For example, getting all metadata for the cancer_grch37 project:
For getting all metadata from all studies associated to a particular project yo ucan use the studies subcommand. For example, getting all studies and their metadata for the cancer_grch37 project:):
For getting summary data from a particular study you can use the summary subcommand. For example, getting summary data for study 1kG_phase3 which is framed within project reference_grch37:
For getting all available metadata for a particular study you can use the info command. For example, getting all metadata for study GONL which is framed within the project reference_grch37::
Table of Contents: | http://docs.opencb.org/pages/diffpagesbyversion.action?pageId=328220&selectedPageVersions=16&selectedPageVersions=15 | 2019-03-18T16:22:53 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.opencb.org |
The browser-support interface
browser-support allows access to various APIs needed by modern web browsers.
Auto-connect: no when
allow-sandbox: true, yes otherwise
Attributes:
allow-sandbox: true|false(defaults to
false)
This interface is intended to be used when using an embedded Chromium Content API or using the sandboxes in major browsers from vendors like Google and Mozilla. The
allow-sandbox attribute may be used to give the necessary access to use the browser’s sandbox functionality.
This is a snap interface. See Interface management and Supported interfaces for further details on how interfaces are used.
Last updated 5 months ago. Help improve this document in the forum. | https://docs.snapcraft.io/the-browser-support-interface/7775 | 2019-03-18T16:06:10 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.snapcraft.io |
You can change the trigger of alert queries, enable or disable the notifications that a query sends, or change the notification method (email, webhook, or send to vRealize Operations Manager.
About this task
Alert queries are user specific. You can manage only your own alerts. You must be assigned a Super Admin role to manage other users alerts.
Content pack alert queries are read-only. To save changes to a content pack alert, you have to save the alert to your custom content.
You can apply your changes to one or more alerts at the same time.
Prerequisites
Verify that you are logged in to the vRealize Log Insight Web user interface. The URL format is, where log_insight-host is the IP address or host name of the vRealize Log Insight virtual appliance.
Verify that an administrator has configured SMTP to enable email notifications. See Configure the SMTP Server for Log Insight.
Verify that an administrator has configured the connection between vRealize Log Insight and vRealize Operations Manager to enable alert integration. See Configure Log Insight to Send Notification Events to vRealize Operations Manager.
If you are using webhooks, verify that a webserver has been configured to receive webhook notifications.
Procedure
- Navigate to the Interactive Analytics tab.
- From the Create or manage alerts menu on the right of the Search button, click
and select Manage Alerts.
- In the Alerts list, select one or more alert query that you want to modify, and change the query parameters as needed.
You can find queries by entering a string as a filter. Queries are labeled as enabled or disabled and whether they are a Content Pack query.Note:
If you deselect all notification options, the alert query is disabled.
- Save your changes. | https://docs.vmware.com/en/vRealize-Log-Insight/4.6/com.vmware.log-insight.user.doc/GUID-5FC80DB0-F1B4-49BB-BCC4-58D1BEEE00E5.html | 2019-03-18T15:38:13 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.vmware.com |
These steps are in continiuation from step2 in the section deploying.
Step2 continued…
There are 2 ways to launch.
- Using a Public Repo URL
- I have a Github account
Using a Public Repo URL
If you don’t have an id, then type these
Public repo url in the text box.
I have a Github id.
Choose your Github repository.
Adding a Procfile. optional
The Procfile is a mechanism for declaring what commands are run by your application that is used to install additional build steps in the launched application that is deemed fit for your launched app.
Procfile is used to maintain a start your application.
For your convenience the sample public repos has baked in Procfile as needed.
The Procfile resides under the parent root directory.
Now that you have chosen the git repo, Go to step3 to launch.
For example,The Procfile is added in verticeapps/scala_fiddle.git
web: sh server/target/universal/stage/bin/server -Dplay.crypto.secret="QCY?tAnfk?aZ?iwrNwnxIlR6CTf:G3gf:90Latabg@5241ABR5W:1uDFN];Ik@n"
The Procfile is added in verticeapps/play-scala-fileupload-example.git
web: sh target/universal/stage/bin/fileupload -Dplay.crypto.secret="QCY?tAnfk?aZ?iwrNwnxIlR6CTf:G3gf:90Latabg@5241ABR5W:1uDFN];Ik@n"
In all the play application to be start correctly must give that *-Dplay.crypto.secret= * argument.
Working with Play App code optional
To make changes in the code verticeapps/scala_fiddle.git ensure that you have the build tools like git, sbt installed.
cd scala_fiddle
Push your changes to Github optional
Once you are done testing the changes, push the changes to Github.
cd scala_fiddle git push master Username for '': verticeuser Password for '[email protected]': To 1d26d24..5cabacb master -> master | https://docs.megam.io/customapps/play/ | 2019-03-18T15:44:41 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.megam.io |
To delete a cluster, navigate to
Manage cluster and choose the cluster that you would like to delete. On the top right is a button
delete cluster:
To confirm the deletion, type the name of the cluster into the text box:
The cluster will switch into deletion state afterwards, and will be removed from the list when the deletion succeeded. | https://docs.syseleven.de/metakube/de/tutorials/delete-a-cluster | 2019-03-18T16:52:53 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.syseleven.de |
Physics and Analysis Capabilities¶
This section of the documentation presents a brief overview of the physics modules and analysis tools implemented in Enzo, links to the relevant journal articles where those modules and/or capabilities are described in detail, guidance as to practical usage of the modules and tools, and some of the common Enzo parameters associated with them. | https://enzo.readthedocs.io/en/latest/physics/index.html | 2019-03-18T16:11:04 | CC-MAIN-2019-13 | 1552912201455.20 | [] | enzo.readthedocs.io |
Share the Power of Territory Forecasts
Forecast managers can now share their territory forecasts with any Salesforce user at their company. Sharing ensures that everyone who needs to can view, adjust, and report on forecasts. Previously, sharing was available for all forecast types except territory forecasts. If territory forecasts are set up, no steps are required to get this enhancement.
Where: This change applies to Lightning Experience in Performance and Developer editions and in Enterprise and Unlimited editions with the Sales Cloud.
Who: Forecast managers assigned to territories can share their territory forecasts.
Why: Here’s an example of how easy it is to share territory forecasts.
Jessica, a forecast manager, views her territory forecasts (1) on the forecasts page. She clicks the Share button (2) and selects Brendan to share with (3). Now Brendan can see (4), but not adjust, Jessica’s territory forecasts.
To view the forecasts that Jessica shared, Brendan selects them from the My Shared Forecasts list (5) on the forecasts page. | http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_sales_features_core_forecasting_share_territories.htm | 2019-03-18T15:52:41 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['release_notes/images/218_territory_forecast_sharing.png',
'Sharing territory forecasts'], dtype=object)
array(['release_notes/images/218_territory_forecasts_select1.png',
'Selecting the shared territory forecasts on the forecasts page'],
dtype=object) ] | releasenotes.docs.salesforce.com |
HTML5 Support
Amazon Silk provides broad support for the emerging HTML5 standard and for related media features. In this section of the Silk Developer Guide, you'll find information on feature support across Silk versions, and examples of supported features and APIs. Though the documentation aims to be comprehensive, the feature set described here is not all-inclusive. Look for HTML5 support to expand and improve as Silk evolves.
You can easily identify what HTML5 features are supported on your version of Silk. On your device, go to HTML5Test.
Note
Feature support for Silk generation 1 differs significantly from generation 2 and 3. In general, feature support for Silk generation 1 is the same as that for Android WebView 2.3. | https://docs.aws.amazon.com/silk/latest/developerguide/html.html | 2019-03-18T16:10:20 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['images/HTML5-logo.png', None], dtype=object)] | docs.aws.amazon.com |
Extending / Intermediate / Back-end Menu Items
Note: You are currently reading the documentation for Bolt 3.1. Looking for the documentation for Bolt 3.6 instead?
Bolt allows extensions to insert submenus under the
Extras = new MenuEntry('koala-menu', 'koala'); $menu->setLabel('Koala Catcher') ->setIcon('fa:leaf') ->setPermission('settings') ;
Note: Menu entries are mounted on extend/, because they show up under the 'Extras' menu option. When adding an accompanying route for a new menu item, make sure to catch it correctly. For the above example, it should match /extend/koala.
Couldn't find what you were looking for? We are happy to help you in the forum, on Slack or on IRC. | https://docs.bolt.cm/3.1/extensions/intermediate/admin-menus | 2019-03-18T15:40:03 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.bolt.cm |
Private Cloud Editions (OpenSource/Licensed) Features OpenSource Enterprise Virtual Machines Yes Yes 100’s of Bitnami apps in a second Yes Yes 1000’s of Docker registry using an intuitive search Yes Yes Apps, services(db, queue, nosql..) Yes Yes Custom apps with integration to Github Yes Yes Snapshots Yes Yes Block Storage for VM Yes Yes Real-time monitor/Logging Yes Yes White-labelling No Yes Multi region data center No Yes Integrated billing with WHMCS No Yes Cloud Storage like S3 No Yes Secure Containers coming-soon No Yes Monthly Cost Free (Community Support) Plans from $100/mo Public Cloud Editions (Licensed) Vertice Minified edition Vertice Complete edition Cloud Virtual Machines One-Click Applications Extendable Platform Simple Storage Service (Block/Object) - Elastic Virtual Machines - soon Cloud-Native Applications - Automated Application Scaling (Load Balancing) - soon Micro-Services Docker - Virtual Private Cloud - soon Customizable UI - Increased Publicity & Sales - Offer Cloud Bursting - Mentioned in Providers Section - Integrated with 3rd party tools Layers of High Availability Incremental Offshore Backups Load Balancing - VM Replication on Failure - Incremental Offshore Backups Software Delivery SaaS (or) On-Premise SaaS (or) On-Premise Migration from KVM (SolusVM/OnApp/ProxMox/etc.) soon soon Billing WHMCS WHMCS Monthly Cost $0.25 per GB of RAM - Deployed - (Up-to $16/month/node) $0.5 per GB of RAM - Deployed - (Up-to $32/month/node) | https://docs.megam.io/pricing | 2019-03-18T15:44:56 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.megam.io |
Introducing the Ellipse Modern Pendant Collection by Niche
Ever wonder how the Ellipse Grand and Petite Pendants are made? Take a look at our behind the scenes footage of the making of these showstoppers from start to finish! The Ellipse Grand and Ellipse Petite pendants embrace the harmony of a circle. Pleasing to the eye, the Ellipse Grand and Ellipse Petite make up Niche’s new modern lighting collection. The Ellipse modern pendant light. | https://docs.nichemodern.com/youtube-all-videos/niche-introducing-the-ellipse-modern-pendant-collection | 2019-03-18T16:45:26 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.nichemodern.com |
Report a bug¶
In the case you have issues, please report them to the project bug tracker:
It is important to provide all the required information with your report:
- Linux kernel version
- Python version
- Specific environment, if used – gevent, eventlet etc.
Sometimes it is needed to measure specific system parameters. There is a code to do that, e.g.:
$ sudo make test-platform
Please keep in mind, that this command will try to create and delete different interface types.
It is possible also to run the test in your code:
from pprint import pprint from pyroute2.config.test_platform import TestCapsRtnl pprint(TestCapsRtnl().collect()) | https://docs.pyroute2.org/report.html | 2018-10-15T12:33:54 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.pyroute2.org |
Schema Mapper
Schema Mapper lets you map incoming event types to tables in your destinations while providing you a bunch of functionality to choose what data you want to replicate.
- Introduction to Schema Mapper
- How to use Schema Mapper
- How to map a Source Event Type with a Destination Table using Schema Mapper
- How to map a Source Event Type Field with a Destination Table Column
- Mapping Statuses
- How to fix an Incomplete Event Type
- Auto-Mapping Event Types
- How to Resize String Columns in the Destination | https://docs.hevodata.com/hc/en-us/sections/360000196873-Schema-Mapper | 2018-10-15T14:14:39 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.hevodata.com |
SaaS Referral Program Guide - Outline
An overview of what the structure of your referral program will look like, an outline of the technical components that will be involved in setting up your SaaS business's referral program, and how each of these components tie together to create a full referral loop.
The focus of this SaaS found in the How Referral SaaSquatch Works article found in our Success Center.
A high level look at referral programs for SaaS business can be found back in the SaaS Referral program Guide Intro.
Further details, including code examples, on how to integrate this referral program into your product can be found in the companion Installation Guide in our technical documentation.
Engagement
Existing customers can enroll in the referral program in two different ways:
Referral Widget on the website behind a login [Desktop & Mobile]
Placing the Referral Widget behind a login will allow your existing customers to interact with the referral program. Use the Squatch JS library to load the Referral Widget.
Broadcast email to notify existing customers of the program. Provide Referral SaaSquatch with a list of users you would like to enroll in the Referral Program. Referral SaaSquatch registers these users and provides an CSV export file with each user’s unique sharelinks. This CSV file can be imported into an email platform, like Mailchimp, to inform your users about the referral program and provide them with their unique sharelink. Please contact support to find out the required format for the CSV user data you would like to import.
The above steps will allow the customers to refer their friends by sharing their unique share link.
Identification & Attribution
When a friend clicks on the unique share link we can redirect them to the landing page for your referral program. This can be a dedicated page or simply your website’s homepage. On iOS or Android we can redirect them, using our Branch integration, to the app store to install your app.
After the referred user fills out the signup form, identify this referred user to Referral SaaSquatch using one of these methods:
Load the Squatch JS init call on the page [Desktop]
This code snippet will identify the user to SaaSquatch. This call will also automatically check for a referral cookie with the referrer’s referral code. This automatically takes care of creating the referral connection. (Attribution)
Use the SaaSquatch REST API to Create the Account & Create the User [Desktop & Mobile]
When using the API, the referral connection (Attribution) has to happen by capturing the referral code from the referral cookie and include it when creating the Account. The Squatch JS autofill function can help facilitate this.
Load the Mobile Widget or use the Mobile SDK inside the mobile app [Mobile]
The Mobile Widget and Mobile SDK register new participants and show the referral widget to allow users to make referrals. The Branch integration stores the referral code of the person who referred them to easily build the referral connection.
Conversion
Once the referred user hits the goalpost, Referral SaaSquatch should be informed so we can mark this referral as complete in our system and generate a reward for the referrer.
This step can be done in three different ways:
- Manually through the Referral SaaSquatch Portal by looking up the user and changing their status
- Using the Squatch JS init call and adding the
'account_status': 'PAID'parameter
- Through the REST API by updating the Account and setting the status to
PAID.
Fulfillment
Since we are running a double sided referral program we need to fulfill the rewards for both the referrer and the referred user.
Whenever a reward is created by the Referral SaaSquatch system there is a webhook event generated. You can subscribe to these webhooks and listen for the
reward.created event to know when to go about rewarding your users.
The rewards for both parties will be as following:
Referred User
This reward is generated when a Referred User signs up through your platform and you identify them to us. The webhook event
reward.createdhas a
rewardSourceparameter that will says
“REFERRED”. Use this webhook event to kick off reward fulfillment inside your payment payment. | https://docs.referralsaasquatch.com/guides/saas-outline/ | 2018-10-15T12:23:14 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.referralsaasquatch.com |
Pricing and Payment Last updated Save as PDF Share Share Share Tweet PaymentNo image availableBuying via the AWS MarketplaceCan payment receipts be sent to our accounting department?Do you accept business credit cards?Free trial - Do you need my credit card details?PricingNo image availableHow does pricing and subscription work?What if I have over 500 AWS servers? | https://docs.druva.com/CloudRanger/Pricing_and_Payment | 2018-10-15T12:39:48 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.druva.com |
Bulk update custom fields and create project sites from a workflow in Project Online
To help customers get the most out of Project Online and improve our service extensibility and flexibility, we've added two methods to the client-side object model that you can use in Project Online apps and workflows..
Note
To learn more about calling REST APIs from SharePoint 2013 workflows, see Using SharePoint REST services from workflow with POST method and Calling the SharePoint 2013 Rest API from a SharePoint Designer Workflow.
Bulk update project custom fields from a workflow
Previously, workflows could only update one custom field at a time. Updating project custom fields one at a time can result in a poor end-user experience when users transition between Project Detail Pages. Each update required a separate server request using the Set Project Field action, and updating multiple custom fields on a high-latency, low-bandwidth network resulted in a non-trivial overhead. To resolve this issue, we added the UpdateCustomFields method to the REST API that lets you bulk update custom fields. To use UpdateCustomFields, you pass in a dictionary that contains the names and values of all the custom fields you want to update.
The REST method can be found at the following endpoint:
https://<site-url>/_api/ProjectServer/Projects('<guid>')/Draft/UpdateCustomFields()
Note
Replace the
<site-url> placeholder in the examples with the URL of your Project Web App (PWA) site and the
<guid> placeholder with your project UID.
This section describes how to create a workflow that bulk updates custom fields for a project. The workflow follows these high-level steps:
Wait for the project that you want to update to get checked in
Build a data set that defines all your custom field updates for the project
Call UpdateCustomFields to apply the custom field updates to the project
Log relevant information to the workflow history list (if required)
Publish the project
Check in the project
The final, end-to-end workflow looks like this:
To create a workflow that bulk updates custom fields
Optional. Store the full URL of your project in a variable that you can use throughout the workflow.
Add the Wait for Project Event action to the workflow and choose the When a project is checked in event.
Create a requestHeader dictionary using the Build dictionary action. You'll use the same request header for all the web service calls in this workflow.
Add the following two items to the dictionary.
Create a requestBody dictionary using the Build dictionary action. This dictionary stores all the field updates that you want to apply.
Each custom field update requires four rows: the field's (1) metadata type, (2) key, (3) value, and (4) value type.
__metadata/type The field's metadata type. This record is always the same and uses the following values:
Name: customFieldDictionary(i)/__metadata/type (where i is the index of each custom field in the dictionary, starting with 0)
Type: String
Value: SP.KeyValue
Key The internal name of the custom field, in the format: Custom_ce23fbf43fa0e411941000155d3c8201
You can find the internal name of a custom field by navigating to it's InternalName endpoint:
https://<site-url>/_api/ProjectServer/CustomFields('<guid>')/InternalName
If you created your custom fields manually, the values will differ from site to site. If you plan to reuse the workflow across multiple sites, make sure the custom field IDs are correct.
Value The value to assign to the custom field. For custom fields that are linked to lookup tables, you need to use the internal names of the lookup table entries instead of the actual lookup table values.
You can find the internal name of the lookup table entry at the following endpoint:
https://<site-url>/_api/ProjectServer/CustomFields('<guid>')/LookupEntries('<guid>')/InternalName
If you have a lookup table custom field set up to accept multiple values, use
;#to concatenate values (as shown in the example dictionary below).
ValueType The type of the custom field you are updating.
For Text, Duration, Flag, and LookupTable fields, use Edm.String
For Number fields, use Edm.Int32, Edm.Double, or any other OData-accepted number type
For Date fields, use Edm.DateTime
The example dictionary below defines updates for three custom fields. The first is for a multiple value lookup table custom field, the second is for a number field, and the third is for a date field. Note how the customFieldDictionary index increments.
Note
These values are for illustration purposes only. The key-value pairs you'll use depend on your PWA data.
Add a Call HTTP Web Service action to check the project out.
Edit the properties of the web service call to specify the request header. To open the Properties dialog box, right-click the action and choose Properties.
Add a Call HTTP Web Service action to call the UpdateCustomFields method.
Note the
/Draft/segment in the web service URL. The full URL should look like this:
https://<site-url>/_api/ProjectServer/Projects('<guid>')/Draft/UpdateCustomFields()
Edit the properties of the web service call to bind the RequestHeader and RequestContent parameters to the dictionaries you created. You can also create a new variable to store the ResponseContent.
Optional. Read from the response dictionary to check the state of the queue job and log the information in the workflow history list.
Add a web service call to the Publish endpoint to publish the project. Always use the same request header.
Add a final web service call to the Checkin endpoint to check the project in.
Create a Project site from a workflow
Every project can have its own dedicated SharePoint sites where team members can collaborate, share documents, raise issues, and so on. Previously, sites could only be created automatically on first publish or manually by the project manager in Project Professional or by the administrator in PWA settings, or they could be disabled.
We've added the CreateProjectSite method so you can choose when to create project sites. This is particularly useful for organizations who want to create their sites automatically when a project proposal reaches a specific stage in a pre-defined workflow, rather than on first publish. Postponing project site creation significantly improves the performance of creating a project.
Prerequisite: Before you can use CreateProjectSite, the Allow users to choose setting must be set for project site creation in PWA Settings > ** Connected SharePoint Sites ** > Settings.
To create a workflow that creates a Project site
Create or edit an existing workflow and select the step where you want to create your Project sites.
Create a requestHeader dictionary using the Build dictionary action.
Add the following two items to the dictionary.
Add the Call HTTP Web Service action. Change the request type to use POST, and set the URL using the following format:
https://<site-url>/_api/ProjectServer/Projects('<guid>')/CreateProjectSite('New web name')
Pass the name of the Project site to the CreateProjectSite method as a string. To use the project name as the site name, pass an empty string. Be sure to use unique names so the next project site you create will work.
Edit the properties of the web service call to bind the RequestHeader parameter to the dictionary you created. | https://docs.microsoft.com/en-us/office/client-developer/project/bulk-update-custom-fields-and-create-project-sites-from-workflow-in-project | 2018-10-15T13:57:13 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['media/8c0741f9-7f76-409d-8c00-e7a8c3ddb89f.png',
'End-to-end workflow End-to-end workflow'], dtype=object)
array(['media/6c6c8175-eb10-431d-8056-cea55718fdb4.png',
'Setting Allow users to choose in PWA settings Setting "Allow users to choose" in PWA settings'],
dtype=object) ] | docs.microsoft.com |
Using the AirWatch Console, you can edit a device profile that has already been installed to devices in your fleet. There are two types of changes you can make to any device profile.
- General – General profile settings serve to manage the profile distribution: how the profile is assigned, by which organization group it is managed, to/from which smart group it is assigned/excluded.
- Payload – Payload profile settings affect the device itself: passcode requirement, device restrictions such as camera use or screen capture, Wi-Fi configs, VPN among others.. | https://docs.vmware.com/en/VMware-AirWatch/9.2/aw-mdm-guide-92/GUID-014MDM-DeviceProfileEditing.html | 2018-10-15T13:22:38 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
Publish and subscribe
These functions control how Meteor servers publish sets of records and how clients can subscribe to those sets.
Meteor.publish(name, func)
Publish a record set.
Arguments
- name String or Object
If String, the name of the record set. If Object, a dictionary of publish functions keyed by name (see the sketch after this list). If `null`, the set has no name, and the record set is automatically sent to all connected clients.
- func Function
Function called on the server each time a client subscribes. Inside the function, `this` is the publish handler object, described below. If the client passed arguments to `subscribe`, the function is called with the same arguments.
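For example, the object form can register several publications in a single call. A minimal sketch, reusing the `Rooms` and `Messages` collections from the examples below (the publication names are arbitrary):

```js
// Register multiple publications at once: keys are publication names,
// values are the corresponding publish functions.
Meteor.publish({
  rooms() {
    return Rooms.find({}, { fields: { secretInfo: 0 } });
  },
  roomMessages(roomId) {
    check(roomId, String);
    return Messages.find({ roomId });
  }
});
```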
To publish records to clients, call
Meteor.publish on the server with
two parameters: the name of the record set, and a publish function
that Meteor will call each time a client subscribes to the name.
Publish functions can return a
Collection.Cursor, in which case Meteor
will publish that cursor’s documents to each subscribed client. You can
also return an array of
Collection.Cursors, in which case Meteor will
publish all of the cursors.
If you return multiple cursors in an array, they currently must all be from different collections. We hope to lift this restriction in a future release.
A client will see a document if the document is currently in the published
record set of any of its subscriptions. If multiple publications publish a
document with the same
_id to the same collection the documents will be
merged for the client. If the values of any of the top level fields
conflict, the resulting value will be one of the published values, chosen
arbitrarily.
```js
// Server: Publish the `Rooms` collection, minus secret info...
Meteor.publish('rooms', function () {
  return Rooms.find({}, { fields: { secretInfo: 0 } });
});

// ...and publish secret info for rooms where the logged-in user is an admin. If
// the client subscribes to both publications, the records are merged together
// into the same documents in the `Rooms` collection. Note that currently object
// values are not recursively merged, so the fields that differ must be top
// level fields.
Meteor.publish('adminSecretInfo', function () {
  return Rooms.find({ admin: this.userId }, { fields: { secretInfo: 1 } });
});

// Publish dependent documents and simulate joins.
Meteor.publish('roomAndMessages', function (roomId) {
  check(roomId, String);

  return [
    Rooms.find({ _id: roomId }, { fields: { secretInfo: 0 } }),
    Messages.find({ roomId })
  ];
});
```
Alternatively, a publish function can directly control its published record set
by calling the functions
added (to add a new document to the
published record set),
changed (to change or clear some
fields on a document already in the published record set), and
removed (to remove documents from the published record
set). These methods are provided by
this in your publish function.
If a publish function does not return a cursor or array of cursors, it is
assumed to be using the low-level
added/
changed/
removed interface, and it
must also call
ready once the initial record set is
complete.
Example (server):
```js
// Publish the current size of a collection.
Meteor.publish('countsByRoom', function (roomId) {
  check(roomId, String);

  let count = 0;
  let initializing = true;

  // `observeChanges` only returns after the initial `added` callbacks have run.
  // Until then, we don't want to send a lot of `changed` messages—hence
  // tracking the `initializing` state.
  const handle = Messages.find({ roomId }).observeChanges({
    added: (id) => {
      count += 1;

      if (!initializing) {
        this.changed('counts', roomId, { count });
      }
    },

    removed: (id) => {
      count -= 1;
      this.changed('counts', roomId, { count });
    }

    // We don't care about `changed` events.
  });

  // Instead, we'll send one `added` message right after `observeChanges` has
  // returned, and mark the subscription as ready.
  initializing = false;
  this.added('counts', roomId, { count });
  this.ready();

  // Stop observing the cursor when the client unsubscribes. Stopping a
  // subscription automatically takes care of sending the client any `removed`
  // messages.
  this.onStop(() => handle.stop());
});

// Sometimes publish a query, sometimes publish nothing.
Meteor.publish('secretData', function () {
  if (this.userId === 'superuser') {
    return SecretData.find();
  } else {
    // Declare that no data is being published. If you leave this line out,
    // Meteor will never consider the subscription ready because it thinks
    // you're using the `added/changed/removed` interface where you have to
    // explicitly call `this.ready`.
    return [];
  }
});
```
Example (client):
```js
// Declare a collection to hold the count object.
const Counts = new Mongo.Collection('counts');

// Subscribe to the count for the current room.
Tracker.autorun(() => {
  Meteor.subscribe('countsByRoom', Session.get('roomId'));
});

// Use the new collection.
const roomCount = Counts.findOne(Session.get('roomId')).count;
console.log(`Current room has ${roomCount} messages.`);
```
Meteor will emit a warning message if you call
Meteor.publish in a project that includes the
autopublish package. Your publish function will still work.
Read more about publications and how to use them in the Data Loading article in the Meteor Guide.
Access inside the publish function. The id of the logged-in user, or
null if no user is logged in.
This is constant. However, if the logged-in user changes, the publish function is rerun with the new value, assuming it didn’t throw an error at the previous run.
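For example, a publication can use this value to return only the current user's documents. This is a minimal sketch; the `Documents` collection and its `owner` field are assumed for illustration:

```js
// Publish only the documents owned by the logged-in user. If the user logs in
// or out, Meteor reruns this function with the new `this.userId`.
Meteor.publish('myDocuments', function () {
  if (!this.userId) {
    // No user is logged in: publish nothing, but still mark the
    // subscription as ready.
    return [];
  }

  return Documents.find({ owner: this.userId });
});
```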
Call inside the publish function. Informs the subscriber that a document has been added to the record set.
Arguments
- collection String
The name of the collection that contains the new document.
- id String
The new document's ID.
- fields Object
The fields in the new document. If
_id is present it is ignored.
Call inside the publish function. Informs the subscriber that a document in the record set has been modified.
Arguments
- collection String
The name of the collection that contains the changed document.
- id String
The changed document's ID.
- fields Object
The fields in the document that have changed, together with their new values. If a field is not present in
fields it was left unchanged; if it is present in
fields and has a value of
undefined it was removed from the document (see the sketch below). If
_id is present it is ignored.
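For example, inside a publish function that has already called `added` for a document, the following sketch updates one field and clears another (the collection name and field names are illustrative):

```js
// Update `count` and remove `note` on the previously published document.
this.changed('counts', roomId, { count: 5, note: undefined });
```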
Call inside the publish function. Informs the subscriber that a document has been removed from the record set.
Arguments
- collection String
The name of the collection that the document has been removed from.
- id String
The ID of the document that has been removed.
Call inside the publish function. Informs the subscriber that an initial, complete snapshot of the record set has been sent. This will trigger a call on the client to the
onReady callback passed to
Meteor.subscribe, if any.
Call inside the publish function. Registers a callback function to run when the subscription is stopped.
Arguments
- func Function
The callback function
If you call
observe or
observeChanges in your
publish handler, this is the place to stop the observes.
Call inside the publish function. Stops this client's subscription, triggering a call on the client to the
onStop callback passed to
Meteor.subscribe, if any. If
error is not a
Meteor.Error, it will be sanitized.
Arguments
- error Error
The error to pass to the client.
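For example, a publish function might reject subscribers that are not logged in. This is a sketch; the `Reports` collection is assumed for illustration:

```js
Meteor.publish('adminReports', function () {
  if (!this.userId) {
    // Terminates this subscription; the client's `onStop` callback
    // receives the error.
    this.error(new Meteor.Error('not-authorized', 'You must be logged in.'));
    return;
  }

  return Reports.find({});
});
```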
Call inside the publish function. Stops this client's subscription and invokes the client's
onStop callback with no error.
Access inside the publish function. The incoming connection for this subscription.
Meteor.subscribe(name, [arg1, arg2...], [callbacks])
Subscribe to a record set. Returns a handle that provides
stop() and
ready() methods.
Arguments
- name String
Name of the subscription. Matches the name of the server's
publish() call.
- arg1, arg2... EJSON-able Object
Optional arguments passed to publisher function on server.
- callbacks Function or Object
Optional. May include
onStop and
onReady callbacks. If there is an error, it is passed as an argument to
onStop. If a function is passed instead of an object, it is interpreted as an
onReady callback (see the sketch below).
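For example (client code; the `'allPlayers'` publication is assumed to exist on the server):

```js
// Pass an object to receive both callbacks...
Meteor.subscribe('allPlayers', {
  onReady() {
    console.log('Initial player records have arrived.');
  },
  onStop(error) {
    if (error) {
      console.error('Subscription stopped with an error:', error.reason);
    }
  }
});

// ...or pass a single function, which is treated as the `onReady` callback.
Meteor.subscribe('allPlayers', () => console.log('Players are ready.'));
```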
When you subscribe to a record set, it tells the server to send records to the
client. The client stores these records in local Minimongo
collections, with the same name as the
collection
argument used in the publish handler’s
added,
changed, and
removed
callbacks. Meteor will queue incoming records until you declare the
Mongo.Collection on the client with the matching
collection name.
```js
// It's okay to subscribe (and possibly receive data) before declaring the
// client collection that will hold it. Assume 'allPlayers' publishes data from
// the server's 'players' collection.
Meteor.subscribe('allPlayers');

// ...

// The client queues incoming 'players' records until the collection is created:
const Players = new Mongo.Collection('players');
```
The
onReady callback is called with no arguments when the server marks the
subscription as ready. The
onStop callback is called with
a
Meteor.Error if the subscription fails or is terminated by
the server. If the subscription is stopped by calling
stop on the subscription
handle or inside the publication,
onStop is called with no arguments.
Meteor.subscribe returns a subscription handle, which is an object with the
following properties:
- stop()
Cancel the subscription. This will typically result in the server directing the client to remove the subscription’s data from the client’s cache.
- ready()
True if the server has marked the subscription as ready. A reactive data source.
- subscriptionId
The id of the subscription this handle is for. When you run
Meteor.subscribe inside of
Tracker.autorun, the handles you get will always have the same
subscriptionId field. You can use this to deduplicate subscription handles if you are storing them in some data structure.
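For example, you can use the handle's `ready()` method to show a loading state until the initial data has arrived. This sketch reuses the `Players` collection and `'allPlayers'` publication from the example above:

```js
const playersHandle = Meteor.subscribe('allPlayers');

Tracker.autorun(() => {
  if (playersHandle.ready()) {
    console.log(`Loaded ${Players.find().count()} players.`);
  } else {
    console.log('Loading players…');
  }
});
```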
If you call
Meteor.subscribe within a reactive computation,
for example using
Tracker.autorun, the subscription will automatically be
cancelled when the computation is invalidated or stopped; it is not necessary
to call
stop on
subscriptions made from inside
autorun. However, if the next iteration
of your run function subscribes to the same record set (same name and
parameters), Meteor is smart enough to skip a wasteful
unsubscribe/resubscribe. For example:
```js
Tracker.autorun(() => {
  Meteor.subscribe('chat', { room: Session.get('currentRoom') });
  Meteor.subscribe('privateMessages');
});
```
This subscribes you to the chat messages in the current room and to your private
messages. When you change rooms by calling
Session.set('currentRoom',
'newRoom'), Meteor will subscribe to the new room’s chat messages,
unsubscribe from the original room’s chat messages, and continue to
stay subscribed to your private messages. | https://docs.meteor.com/api/pubsub.html | 2018-10-15T12:58:37 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.meteor.com |
Center the string within the variable
Member of Unicode Intrinsic Functions (PRIM_LIBI.IUnicodeIntrinsics)
Center allows a string to be centered within a given length, and will optionally pad the result with a supplied character. Leading and trailing spaces are significant and will be evaluated as part of the string to center. If the string is longer than the target variable, the first n bytes of the string are used. Where centering the string results in an uneven number of bytes, the extra byte will be allocated to the right-hand side. Typically, centering is used to center a value within a target variable. By using the Length parameter you can control how the string is centered and padded. Center can only be used to center a string within a target string. This does not guarantee that the result will be visually centered if the font being used is proportional.
In this example, if #String is a 40 byte variable that contains a value of 'Centered Text', the result is '***Centered Text****'. The remaining 20 bytes of #String will be null.
#Target := #String.Center( 20 '*' )
In this example, where #String is a 20 byte variable that contains a value of 'Centered Text', the result would be '   Centered Text    '. This is a typical centering scenario where the length of the target governs the centering of the text:
#Target := #string.Center( #Target.FieldLength)
February 18 V14SP2
January 8, 2018
Elements
Bullhorn: Added /placements and /companies resources
You can now use the Bullhorn element to view, create, update, and delete:
- Placements, which represent successfully filled job orders.
- Companies, which correspond to the ClientCorporation object in Bullhorn.
Intacct: Added /reporting-periods resources
Use the Intacct element to get a list of reporting periods.
GoodData: Updated requests to include required headers
Starting after March 31, 2018, the GoodData REST API will require every API request to contain the User-Agent header. We updated all of our requests to include the User-Agent header. See the GoodData docs for more information.
Freshservice: Improved filtering in the /users resource
To filter Freshservice users you can now pass a query in the where clause such as
email = '[email protected]'. You can filter on the following user fields:
- mobile
- phone
- state
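For example, a request that filters users by email might look like the following sketch. The host, resource path, and credential placeholders are illustrative — substitute the values for your own Freshservice element instance and check the element's API docs for the exact endpoint:

```js
// Node 18+ provides a global fetch. CEQL queries are passed in the `where`
// query parameter and must be URL-encoded.
const where = encodeURIComponent("email = '[email protected]'");

fetch(`https://api.cloud-elements.com/elements/api-v2/users?where=${where}`, {
  headers: {
    Authorization:
      'User <USER_SECRET>, Organization <ORG_SECRET>, Element <ELEMENT_INSTANCE_TOKEN>'
  }
})
  .then((response) => response.json())
  .then((users) => console.log(users));
```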
Box: New /revisions endpoints
We added the following endpoints to the /files resource:
- GET /files/revisions by path
- GET /files/revisions/:revisionId by path
- GET /files/:id/revisions
- GET /files/:id/revisions/:revisionId
Box: Updated PATCH /files/{id}/custom-fields/{templateKey}
We removed the
template field from the request body. Enter the template key in the path of
PATCH /files/{id}/custom-fields/{templateKey} requests.
OneDrive and Microsoft OneDrive for Business: New /revisions endpoints
We added the following endpoints to the /files resource:
- GET /files/:id/revisions to list all revisions by file Id.
- GET /files/:id/revisions/:revisionId to retrieve revisions by file Id.
- GET /files/revisions to list all revisions by file path.
- GET /files/revisions/:revisionId to retrieve revisions by file path.
Zendesk: Updated transformations for the /attachments resource
In earlier versions, some fields in the /attachments resource were not transformed. You can now transform the fields in the /attachments resource.
Infusionsoft: Elements renamed to match Infusionsoft naming conventions
The Infusionsoft element names have been changed as shown below:
QuickBooks Enterprise: Added /vendor-credits and /creditcard-credits resources
You can now use the QuickBooks Enterprise element to view, create, update, and delete /vendor-credits and /creditcard-credits resources.
Display issues in Office client applications
Symptoms
When you use Microsoft Office programs, you notice that visual features differ from one computer to another. For example, you see animations in Excel when you scroll through a worksheet on one computer, but you do not see the same animations on another computer. In addition, you may notice the following symptoms:
- Overall performance of the program (from startup through shutdown) is reduced.
- In Microsoft Lync, there may be video delays or slowness when you are on a video call.
Cause
You may experience these symptoms if you have a video configuration on your computer that is incompatible with the Office feature set that is responsible for displaying the application and for animations in the application.
Office 2013 and later versions use a more efficient and accelerated method to draw the Office UI and the content. This includes relying on hardware acceleration, which is managed through the operating system. The hardware acceleration function of the operating system relies on up-to-date and compatible display drivers.
Note Hardware acceleration that uses the video card is always disabled when Office is running in a Remote Desktop session, and also when the application is started in safe mode.
Resolution
The resolution varies depending on your version of Windows and the symptom you are experiencing.
For the symptom: Poorly Displayed Text in Office Documents
If your symptom is "Poorly Displayed Text in Office Documents," try the following solutions first. Otherwise, skip to the next section titled All Other Symptoms.
Step 1: Use the "ClearType Text Tuner" Setting
Windows 10, Windows 8.1, and Windows 8: On the Start Screen, search for ClearType.
Windows 7: Click Start, and then enter ClearType in the Search Programs and Files box.
Select Adjust ClearType Text.
In the ClearType Text Tuner, enable the Turn on ClearType option, and then click Next.
Tune your monitor by following the steps in the ClearType Text Tuner, and then click Finish.
If you are still experiencing a problem after you adjust the ClearType settings, go to Step 2.
Step 2: Disable the Sub-Pixel Positioning Feature
Word 2016 and Word 2013 use sub-pixel text rendering by default. While this provides optimal spacing, you may prefer the appearance of pixel-snapped text for a minor improvement in contrast. To disable the sub-pixel positioning feature in Word 2016 or Word 2013, follow these steps.
- On the File tab, click Options.
- Click Advanced.
- Under the Display group, clear the Use the subpixel positioning to smooth fonts on screen option.
- Click OK.
If you are still experiencing a problem after you turn off the sub-pixel text rendering setting, re-enable the Use the subpixel positioning to smooth fonts on screen setting, and then go to Step 3.
Step 3: On Windows 7 clients, install the Windows 8 Interoperability Pack
If you are using Windows10, Windows 8.1 or Windows 8, skip this section and go to the steps under the For All Other Symptoms section.
If you are using Windows 7, install the update for improving video-related components that is available in the following Knowledge Base article:
2670838 Platform update for Windows 7 SP1 and Windows Server 2008 R2 SP1
If the previous steps did not resolve the "Poorly Displayed Text in Office Documents" symptom, continue to troubleshoot your issue by using the steps in the next section.
For all other symptoms
Update your video driver
The best way to update your video driver is to run Windows Update to see whether a newer driver is available for your computer.
To run Windows Update based on your version of Windows, follow these steps:
Windows 10, Windows 8.1, and Windows 8: Open Windows Update (on Windows 10, Settings > Update & Security; on Windows 8.1 and Windows 8, Control Panel > Windows Update) and check for updates. Windows 7: Click Start > All Programs > Windows Update, and then check for updates.
If updating your video driver fixed the video-related problems in Office, you do not have to take any further steps. If the problems persist, you can also try disabling hardware graphics acceleration in the affected Office program (File > Options > Advanced > Display > Disable hardware graphics acceleration).
Note
Video card manufacturers frequently release updates to their drivers to improve performance or to fix compatibility issues with new programs. If you do not find an updated video driver for your computer through Windows Update and must have the latest driver for your video card, go to the support or download section of your video card manufacturer's website for information about how to download and install the newest driver.
More Information
Automatic disabling of hardware acceleration for some video cards
By default, hardware acceleration is automatically disabled in Office programs if certain video card and video card driver combinations are detected when you start an Office program. If hardware acceleration is automatically disabled by the program, nothing indicates that this change occurred. However, if you update your video card driver and it is more compatible with Office, hardware acceleration is automatically reenabled.
The list of video card/video driver combinations that trigger this automatic disabling of hardware graphics acceleration is not documented because the list is hard-coded in the Office programs and will be constantly changing as we discover additional video combinations that cause problems in Office programs. Therefore, if you do not see the same animation functionality on one computer that you see on another computer, we recommend that you update your video driver by using the instructions provided in the "Update your video driver" section. If you still do not see the expected animation on your computer, update your video driver again soon. Microsoft is working with the major video card manufacturers on this issue, and these video card manufacturers will be releasing new video drivers as such drivers are developed.
Note
If two computers have the same video card/video driver combinations, you may still see a difference in the Office animation features between the two computers if one computer is running Windows 7 and the other computer is running Windows 8. On a computer that is running Windows 7, animations in Office are disabled if the video card/video driver combination appears on the incompatibility list. However, the same video combination on Windows 8 does not have animations disabled because of the improved video capabilities in Windows 8.
Configuration Editor
The Configuration Editor is available as a component of the Citrix SD-WAN Center Web Interface, and in the Citrix SD-WAN Management Web Interface running on the Master Control Node (MCN) of the SD-WAN network.
Note
You cannot push configurations to the discovered appliances directly from Citrix SD-WAN Center. You can use the Configuration Editor to edit the configuration settings and to create a configuration package. When the configuration package has been created, you can export it to the MCN and install it. The changes are then reflected in the MCN.
You have to log on with administrative rights to the Citrix SD-WAN Center appliance and the MCN, to edit the configurations on Citrix SD-WAN center and to export and install the configurations on the MCN.
For detailed instructions on using the Configuration Editor to configure your Citrix SD-WAN, see Citrix SD-WAN 10.1 documentation.
The Configuration Editor enables you to do the following:
- Add and configure Citrix SD-WAN Appliance sites and connections.
- Provision the Citrix SD-WAN appliance.
- Create and define Citrix SD-WAN Configuration.
- Define and view Network Maps of your SD-WAN system.
To open the Configuration Editor:
In the Citrix SD-WAN Center web interface, navigate to the Configuration page. The Configuration Editor opens with the following interface elements:
- Sections: Each tab represents a top-level section. There are six sections: Basic, Global, Sites, Connections, Optimization, and Provisioning. Click a section tab to reveal the configuration tree for that section.
- View Region: For multi-region deployment, it lists all the regions configured. For single-region deployment, the default-region is displayed by default. To view the sites in a region, select a region from the drop-down list.
- View Sites: Lists the site nodes that have been added to the configuration and are currently opened in the Configuration Editor. To view the site configuration, select a site from the drop-down list.
- Network Map: Provides a schematic view of the SD-WAN network. Hover the mouse cursor over the sites or the path to view more details. Click the sites to view report options.
- Audit: Displays any errors and warnings detected in the current configuration.
January 23, 2018
Elements
Google Suite: New element
We added the Google Suite element that you can connect to Google and manage calendars, contacts, groups, threads, and messages. Read the docs and authenticate an instance of your own.
Evernote: Added
/files endpoints
We added the following DELETE endpoints:
DELETE /files by path
DELETE /files/{id}
QuickBooks Online: Updated CEQL for
GET /reports/{id}
We updated the QuickBooks Online element so you can filter
GET /reports/{id} requests by date ranges. For example,
createdDateRange='Last Week'. Supported date ranges include
Today,
Yesterday,
This Week-to-date,
Last Week,
Last Week-to-date, and more. See the API docs for details.
Facebook Lead Ads: Added
fields parameter to
GET /leads/{leadId} endpoint
Use
fields to specify the list of fields to include in the response.
Google Sheets: Fixed a bug in the Cloud Elements 2.0 API docs
When using the API docs in Cloud Elements 2.0 to make a
GET /spreadsheets/{id}/worksheets request, the UI would not accept a spreadsheet
Id. You can now use the API docs in Cloud Elements 2.0 to make the request.
Bullhorn:
fields parameter accepts comma separated list of fields including nested fields
When making GET requests, you can include a comma separated list in
fields to specify what Bullhorn returns in the response. You can include nested fields using dot notation. For example,
id,address.countryCode returns the id and
countryCode fields where
countryCode is nested within an address object.
Formulas
You can create temporary formula instances that exist for a specified amount of time. Specify the time in seconds, such as
86400 for a day or
3600 for an hour. Every hour, Cloud Elements checks for temporary formula instances that have expired and deletes them. You can create temporary formula instances only through the Cloud Elements APIs.
To create a temporary formula instance, add
"formula.instance.time.to.live": <seconds> to the
settings object in a
POST /formulas/{id}/instances request. Here's an example where the formula instance expires after one hour:
```json
{
  "active": true,
  "configuration": {
    "<key>": "string"
  },
  "settings": {
    "notification.email": "string",
    "notification.webhook.url": "string",
    "api": "string",
    "formula.instance.time.to.live": 3600
  },
  "createdDate": "2018-01-23T16:33:47.431Z",
  "formula": {
    "active": true,
    "id": 0
  },
  "name": "string",
  "updatedDate": "2018-01-23T16:33:47.431Z"
}
```