content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
A step-by-step guide to configuring WhatsApp as a marketing channel
While you will be using WebEngage to create and analyze all your WhatsApp campaigns, the message is delivered to your target audience through a WhatsApp Service Provider (WSP).
Please Ensure That Your Website & Apps Are Integrated with WebEngage Before Proceeding
(Doing so will enable real-time personalization of all WhatsApp messages sent to each user!)
Getting Started with WhatsApp Marketing
Strict guidelines laid down by Facebook control WhatsApp marketing. Hence, you must follow these steps to ensure that you can use WhatsApp uninterruptedly through WebEngage.
Step 1: Get WhatsApp Business Account Approved
The first step is to create a business account and submit it to Facebook for approval. Here's how you can go about it.
We recommend that you use a phone number exclusively for your WhatsApp Business Account.
Here are a few guidelines to help you out:
The phone number must be owned by you, preferably registered to your business address, and must include a country code. It cannot be used for another WhatsApp account.
Phone numbers can easily be migrated from one WSP to another. However, once your business account is approved, the attached phone number cannot be changed.
You should be able to receive a call/SMS on the number for channel activation. Phone numbers that have an IVR service attached cannot be used unless IVR can be disabled to receive the activation code.
Step 2: Get Message Templates Whitelisted
Message templates are pre-approved messages that can contain a maximum of 4096 characters including elements of personalization, links, emojis, and WhatsApp specific formatting. These messages can be used to initiate a conversation with opted-in users (through campaigns) and respond to user queries in the Customer Service Window.
To curb marketing spam, WhatsApp has mandated whitelisting for each message that you'd like to send to your users. Promotional marketing is restricted, and the primary focus of this new channel is to provide customer support and contextual lifecycle updates. This means that early adopters can stand out - if you do it right!
Thus, we highly recommend that you restrict WhatsApp campaigns to conveying account updates, appointment updates, payment updates, personal finance updates, reservation updates, shipping updates and providing issue resolution.
Messages that contain any of the following elements are at a high risk of being disapproved:
- Discounts, promotions, product recommendations or offers.
- Surveys, product/service reviews, and rating requests.
- Media files (videos & images).
You can submit message templates directly to Facebook or on your WSP dashboard for approval. Once they're approved, upload these templates to your WebEngage dashboard to create WhatsApp campaigns.
Whitelisting Personalized Message Templates
Depending on your use-case, there may be several aspects that you'd like to personalize in the message. It could be anything like the user's name, name of purchase product(s), order number, date of appointment, upcoming events, and so on.
This can easily be achieved by:
Step 1: Tracking these data points as Custom User Attributes, Custom Events, and Custom Event Attributes for all your app & web users.
Step 2: Creating a placeholder in your message template for links, emojis, and personalization. Let's take the example of an order confirmation message that contains the following details:
- User's First Name
- Order Number
- Order Tracking Link
- An Emoji
Thus, while creating the message template, we'll replace each element with a numbered placeholder in the format, {{x}}. Here's what the final template should look like:
Whitelisting Order Confirmation Message
Hey {{1}}, thanks for your order!
We'll keep you posted on when it's ready to be shipped! You can track order number {{2}}, here {{3}}. Stay awesome {{4}}
So, the message received by a user, Jess, will look like this👇:
WhatsApp Message Receive by User
Hey Jess, thanks for your order!
We'll keep you posted on when it's ready to be shipped! You can track order number 45360d, here. Stay awesome 🤙
Please Note
As per WhatsApp's guidelines, a message template must include at least one parameter, {{1}}. It can later be replaced with text, numbers, special characters or a link, as per your needs.
Step 3: Collect User Opt-ins
By default, all users that have a valid phone number listed in their WebEngage User Profile are considered unreachable via WhatsApp. This is because WhatsApp's Guidelines require users to provide explicit consent for receiving messages.
Hence, we recommend that you create a WhatsApp opt-in form on your app/website or add a call-to-subscription to your purchase/platform discovery flow. (Detailed Read)
Please ensure that you capture the user's country code along with the phone number. If a country code doesn't exist, then WhatsApp will try sending the message to a user by appending the country code of your business phone number.
Each time a user provides consent, you can track it as the System User Attribute,
we_whatsapp_opt_inand set the status to
true. Doing so will make the user reachable via WhatsApp in your dashboard. (Detailed Read)
Setting WhatsApp Opt-in Status for Users
Our platform integration SDKs enable you to set a user's opt-in status for WhatsApp, SMS, and Email. You can also choose to pass this data through a Rest API integration or manually upload it to your dashboard.
-
-
-
-
-
Please Note: It's extremely important that you opt-in only those users who have explicitly provided consent. Violating Facebook's user opt-in policies may put you at the risk of having your WhatsApp business account suspended.
- You can also promote WhatsApp subscription through contextual Push, In-app, SMS, On-site Notifications, Web Push, and Email campaigns that convey the value proposition of connecting with you over the channel.
WhatApp Service Provider (WSP)
You can think of WhatsApp Service Providers as middlemen that transmit the message from your WebEngage account to a user's WhatsApp account. Currently, you can leverage Infobip, Gupshup, Kaleyra, and ValueFirst to engage users via WhatsApp. We'll be adding more WSPs to the stack pretty soon!
Click to enlarge
WSP Integration
As highlighted below, access WhatsApp through the Data Platform> Integrations > Whatsapp Setup (Configure) menu on the left side of your dashboard. Click the Add WhatsApp Service Provider button to get started.
Click to enlarge
WSP Setup
Please select a WSP from the left navigation panel (Channel Configurations > WhatsApp) to continue configuration.
Uploading Whitelisted Templates
Click to enlarge
As shown above:
Step 1: Click the Add WhatsApp Template button to get started.
Step 2: Select the WSP through which you've whitelisted the template/ would like to send the campaign.
Please Note
Each template is mapped a WSP in your dashboard. Thus, while creating the campaign, the list of templates will include only those messages that have been mapped to the WSP selected at Step 3: Message.
Step 3: Add template details (may vary for each WSP):
Template Name (For your reference only).
Template Text (Paste the exact message that has been approved, do no edit).
Template Namespace (While copying from your Facebook/WSP dashboard, please ensure that you use the format,
<template_namespace>:<template_name>, for namespace)
Template Language (List of languages supported by WhatsApp).
Please get in touch with your WSP Account Manager or Facebook business support if you're unable to find these details.
Once you've added the templates, your Channels > WhatsApp section will look like this👇. You can choose to Edit and Delete a template anytime you through the Actions menu.
Click to enlarge
Congratulations!
You're now ready to engage users via WhatsApp!
Managing Configuration
Click to enlarge
As shown above, integrated WSPs are listed under the section, Your WhatsApp Service Provider List. You can choose to Edit or Delete the integration anytime you like by accessing the Actions menu placed on the extreme right.
The Actions menu also indicates the unique Webhook URL you can add to your WSP dashboard to ensure that delivery status notifications (sent, failed, queued, delivered) are tracked under campaign stats in your WebEngage dashboard.
Editing a WSP
You can choose to edit configuration details in cases where incorrect details were entered during configuration, or some details were updated at the WSP's end post-integration.
As shown below, select Edit from the Actions menu, make your changes, and click Save.
Adding more authentication headers to configuration & updating it's name (click to enlarge)
Deleting a WSP
Deleting an integration will:
Cease campaign delivery for all Running and Upcoming campaigns where the respective WSP was selected for sending it.
Cause the deletion of all WhatsApp Templates mapped to the specific WSP in your dashboard.
Thus, while doing so, please ensure that no existing campaigns are dependant on the WSP for sending WhatsApp messages to your users.
As shown below, select Delete from the Actions menu, and click the Delete button in the pop-up to confirm your decision.
Click to enlarge
Please feel free to drop in a few lines at [email protected] in case you have any queries. We're always just an email away!
Updated 11 months ago | https://docs.webengage.com/docs/whatsapp | 2022-06-25T04:48:10 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://files.readme.io/409f47d-WSP-brand-wall.jpg',
'WSP-brand-wall.jpg Click to enlarge'], dtype=object)
array(['https://files.readme.io/409f47d-WSP-brand-wall.jpg',
'Click to close... Click to enlarge'], dtype=object)
array(['https://files.readme.io/946cf12-whatsapp-configuration.gif',
'whatsapp-configuration.gif Click to enlarge'], dtype=object)
array(['https://files.readme.io/946cf12-whatsapp-configuration.gif',
'Click to close... Click to enlarge'], dtype=object)
array(['https://files.readme.io/f48fb69-whitelisting-template.gif',
'whitelisting-template.gif Click to enlarge'], dtype=object)
array(['https://files.readme.io/f48fb69-whitelisting-template.gif',
'Click to close... Click to enlarge'], dtype=object)
array(['https://files.readme.io/5df806e-whatsapp-channel-dashboard.png',
'whatsapp-channel-dashboard.png Click to enlarge'], dtype=object)
array(['https://files.readme.io/5df806e-whatsapp-channel-dashboard.png',
'Click to close... Click to enlarge'], dtype=object)
array(['https://files.readme.io/30f3a0a-modify-wsp.png',
'modify-wsp.png Click to enlarge'], dtype=object)
array(['https://files.readme.io/30f3a0a-modify-wsp.png',
'Click to close... Click to enlarge'], dtype=object)
array(['https://files.readme.io/802fd2e-edit-wsp.gif',
"edit-wsp.gif Adding more authentication headers to configuration & updating it's name (click to enlarge)"],
dtype=object)
array(['https://files.readme.io/802fd2e-edit-wsp.gif',
"Click to close... Adding more authentication headers to configuration & updating it's name (click to enlarge)"],
dtype=object)
array(['https://files.readme.io/0e68422-delete-whatsapp-wsp.gif',
'delete-whatsapp-wsp.gif Click to enlarge'], dtype=object)
array(['https://files.readme.io/0e68422-delete-whatsapp-wsp.gif',
'Click to close... Click to enlarge'], dtype=object) ] | docs.webengage.com |
MoM Change Advisory Board Template
What is Change Advisory Board in ITIL?
The ITIL methodology aims at performing any and all changes according to a set of processes and methodologies, and the ITIL CAB (Change Advisory Board) board’s goal is to facilitate the changes by assessing the change requests and prioritizing the approved ones.
ITIL MoM Template for Change Advisory Board
Change Advisory Board Members
The CAB should be made up of professionals from both the IT and the business side of the organization, to ensure that the IT services are aligned with the business needs. This also aims at ensuring that the proposed change doesn’t overlook any requirements of other groups in the organization (accounting, HR, etc.). In order to achieve this, the change advisory board members should be made up of employees from different sections/divisions of the organization.
Change Advisory Board Template
The MoM (Minutes of Meeting) change advisory board template of a CAB meeting captures the following information –
- At the top of the page is the company logo and its name
- The first section includes the basic details of the meeting –
- When the meeting was held (date and time)
- Where it was held (physical location and any other means of video/audio conferencing)
- When the MoM was sent out (by the recorder)
- What the goal of the meeting was
- The next section is the list of participants, where each participant is recognized by their name, role, and any other comment. For example the chair of the meeting, the fact that they participated via video conferencing, etc.
- The agenda of the meeting appears next, where each topic has a specified timeframe and the presenter
- The last and most important section of the MoM is the action items table. In this table, all of the action items are recorded and assigned to various employees (not only the ones who participated in the meeting). The table includes the following information –
A number of the action item (In an ABC format). Usually a simple running number list
The action item itself. This is the main column of the table and should explain in detail exactly what needs to be done in order to solve the problem (or a part of it)
Due Date: Presents when the owner of the action item needs to complete it
Owner: Lists who is responsible for performing the action item. This may require more than one person, but shouldn’t list more than one. In this case, the one owner is accountable for the action item being completed. The owner may appear in the name, or by role.
Comments: This column may be filled in during the meeting, or before the next one in which it will be reviewed
Change Advisory Board Best Practices
The MoM should be displayed to all the participants during the meeting so that the action items are visible to all. Any presentations or other relevant material should be distributed to the participants in advance of the meeting.
Download All ITIL Templates | https://www.itil-docs.com/change-advisory-board-template/ | 2021-04-10T18:41:02 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['https://secureservercdn.net/104.238.68.196/365.a75.myftpupload.com/wp-content/uploads/2018/02/ITIL-MoM-Template-for-Change-Advisory-Board.jpg',
'ITIL MoM Template for Change Advisory Board'], dtype=object) ] | www.itil-docs.com |
A newer version of this page is available. Switch to the current version.
ASPxVerticalGridRowDataEventHandler Delegate
A method that will handle the ASPxVerticalGrid.CustomUnboundRowData event.
Namespace: DevExpress.Web
Assembly: DevExpress.Web.v19.2.dll
Declaration
public delegate void ASPxVerticalGridRowDataEventHandler( object sender, ASPxVerticalGridRowDataEventArgs e );
Public Delegate Sub ASPxVerticalGridRowDataEventHandler( sender As Object, e As ASPxVerticalGridRowDataEventArgs )
Parameters
Remarks
When creating a ASPxVerticalGridRowD | https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxVerticalGridRowDataEventHandler?v=19.2 | 2021-04-10T20:38:21 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.devexpress.com |
Time to complete: 10 minutes
In this tutorial, we'll go over how to set up and move any robot using a gamepad or other controller. A mobile base can be commanded with simple velocity commands whereas for complex robots and robot arms we will need fine-grained control. Velocity commands to a mobile base will be translated from gamepad joystick axes into a Twist or TwistStamped message, which is then sent to the topic of your choice, whereas the more flexible, custom control is enabled through Joy messages.
Connect your gamepad
In SETTINGS > PILOT > CONTROLS, scroll down and enable 'use external game controller'. Hook up your gamepad to your computer thorugh bluetooth or USB and hit any button. The field below should light up green as below after a few seconds:
Using the gamepad to send Twist messages
This works out of the box with simple gamepads like the Xbox controller. In SETTINGS > PILOT > CONTROLS, set the topic type to either
Twist or
TwistStamped and choose the joystick topic name you want these velocity messages to get published to, typically
/cmd_vel. In this simple setup, the left vertical axis is translated to a linear forward velocity (linear.x) while the right horizontal axis is converted to a left-right rotation (angular.z). This setup works best if you have a mobile base with a topic set up accepting Twist messages.
Using the gamepad to send Joy messages
Joy messages essentially contain a list of all buttons and axes that are pressed or moved at any given time and so contain more information than a twist message. If you need more advanced joystick control or want to save state, or if your robot already has a topic set up accepting Joy messages, this is for you.
The axes in a Joy message are typically values between -1 and 1 which represent triggers and joysticks on your controller.
The default mapping for standard controllers is as below. We recommend using the joystick remapper node to change these values to your desired configuration on the robot side.
In the Pilot Settings page, change the topic type to sensor_msgs/Joy and set the joystick topic name to the topic you want to receive messages on, typically
/joy or
/joy_orig.
In SETTINGS > PILOT > CONTROLS, set the joystick topic type to the sensor_msgs/Joy message and choose what topic you want those messages to be received on.
Supported gamepads
Most common gamepads are supported but not all functionality might be available. We've tested on the PS4 controller and the Xbox controllers. Make sure your gamepad works as expected by checking the received messages on the robot.
Translating Joy messages to actions
If your robot doesn't have a topic already accepting
Joy messages, you will need to write something up yourself. Luckily many great examples already exist. For a ROS + c++ implementation, I recommend Fetch Robotics Joy converter node as a starting point. If you're looking for a simple skeleton for Python + ROS, copy paste and modify the code below, which publishes the joy messages to an end-effector Twist command topic:
#!/usr/bin/env python import rospy from sensor_msgs.msg import Joy, JointState, Twist from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint class JoyToCommands(object): """Control a robot arm using Joy messages""" def __init__(self): self.twist_pub = rospy.Publisher('/twist', Twist, queue_size=1) rospy.Subscriber("/joy", Joy, callback=self.joy_cb, queue_size=1) self.linear_vel_multiplier = 0.3 self.angular_vel_multiplier = 0.2 def joy_cb(self, joy_msg): rospy.logdebug_throttle(0.5, "received joy message: {}".format(joy_msg)) twist_cmd = Twist() twist_cmd.linear.x = self.linear_vel_multiplier * joy_msg.axes[3] twist_cmd.linear.y = self.linear_vel_multiplier * joy_msg.axes[2] twist_cmd.linear.z = self.linear_vel_multiplier * joy_msg.axes[1] twist_cmd.angular.z = self.angular_vel_multiplier * joy_msg.axes[0] self.twist_pub.publish(twist_cmd) if __name__ == '__main__': rospy.init_node("joy2commands", log_level=rospy.INFO) JoyToCommands() rospy.spin()
Updated 2 months ago
What's Next
Now that you have advanced gamepad control working, let's see what we can do with your mouse! | https://docs.freedomrobotics.ai/docs/pilot-control-a-robot-arm | 2021-04-10T19:35:25 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['https://files.readme.io/3c53cf6-pilot_settings_game_pad.png',
'In SETTINGS > PILOT > CONTROLS, set the joystick topic type to the sensor_msgs/Joy message and choose what topic you want those messages to be received on.'],
dtype=object) ] | docs.freedomrobotics.ai |
Administering FileVault on Computers
You can activate FileVault disk encryption using a profile.
Administering FileVault on computers involves the following steps:
Creating a recovery key.
(Optional) Export an institutional recovery key.
Create and deploy a profile with the recovery key certificate and FileVault settings.
Creating a Recovery Key
You can create a personal or institutional recovery key to unlock encrypted volumes on computers. For more information about how to create recovery keys, see the Set a FileVault recovery key for computers in your institution article from Apple's support website.
Note: After you create an institutional recovery key, download the recovery key certificate file (.cer) to upload to Jamf School.
Administering FileVault on Computers
Requirements
To administer FileVault on computers, you need:
Computers with macOS 10.7 or later
A device group of computers you want to administer FileVault on (For more information, see Device Groups.) the computer device group to the profile scope.
(Institutional recovery key only) Using the Certificates payload, click Choose file and upload the institutional recovery key certificate file (.cer).
Use the FileVault payload to configure the settings, including the following:
Ensure the Enable FileVault checkbox is selected.
Choose the recovery key type.
(Institutional recovery key only) Choose the certificate to use from the Certificate pop-up menu.
(Personal recovery key only) To ensure the personal recovery key is stored in Jamf School, select the Enable Personal Recovery Key Escrow checkbox. This allows you to view the personal recovery key in the device details.
Important: Personal recovery key escrow should be used as a last resort. It is recommended that you always set an institutional recovery key along with enabling the personal recovery key to reduce the risk of losing all recovery keys.
Configure the other payloads as needed.
Click Save.
The new profile appears in the Profiles overview.
Unlocking a FileVault-Encrypted Volume
If you want to unlock the user's encrypted startup disk, you can use the recovery key. For more information, see the Set a FileVault recovery key for computers in your institution article from Apple's support website.
Related Information
For related information about FileVault, see the Use FileVault to encrypt the startup disk on your Mac article from Apple's support website. | https://docs.jamf.com/jamf-school/deploy-guide-docs/Administering_FileVault_on_Computers.html | 2021-04-10T18:32:20 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.jamf.com |
Get.
You can download and install Docker on multiple platforms. Refer to the following section and choose the best installation path for you.. | https://docs.master.dockerproject.org/get-docker/ | 2021-04-10T19:28:57 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.master.dockerproject.org |
Have you ever wondered why some companies manage to thrive despite stiff competition, while some others lag behind? Is it because they have huge budgets to dedicate to marketing? Is it because they’re an established brand? Or is there some other reason behind their success? In a world where things can change in the blink of an eye, those who are able to assess current events and accurately predict future trends will prosper. As the saying goes, “the world doesn’t stop for anyone.” Change is to be welcomed, not feared. After all, humanity has always thrived on challenges. Are you ready to embrace change and take your company to another level? If you can answer this question with an emphatic YES, then you’re on the right path.
Back in the day, companies didn’t have a lot of choices when it came to marketing. They could either choose to buy ad space on newspapers and TVs, or open a store and hope customers would knock on their doors. The digital revolution changed everything: on one hand, customers had more choices. Companies, on the other hand, could take advantage of the emerging marketing technologies. It was a win-win situation for both parties involves, customers and businesses. But thing can and will change. The more companies enter the market, the more competitive it becomes. The more competitive it becomes, the more challenging it becomes to find your niche and build your way up from there. Fortunately, there are a lot of tools which can help you become a leader in your industry.
Before you launch your marketing campaigns, you have to gather insight about a lot of things. Who is your target audience? What do they need from you? Who are your competitors? Where is your industry heading? Answering these and other questions is what marketing intelligence is all about. Gaining insight into customers, trends and the competition is crucial. Knowing your customers is key, and so is knowing your competition. But how do you do that? How can you gain marketing intelligence and use it to your advantage? Enter retargeting.
You might be wondering: “how can retargeting help me gain marketing intelligence?” Here’s how. Let’s say you’re running an online marketing campaign for your e-commerce store. A visitor comes to your site, sees the product, the price and then goes to fill the shopping cart. He abandons the cart. What do you do? Do you simply let the visitor go and find the product he’s looking for elsewhere? No, you use retargeting in order to make the visitor come back to your page. Re-targeting is a cookie-based technology that allows you to track each visitor that comes to your page. But wait, there’s more! The visitor will be shown your ad as he browses through the internet- no matter the pages he might be visiting. This is the power of retargeting. Ads will be shown on Google through the AdSense technology, and on Facebook through the Facebook Ads. The two technologies are similar; only the implementation is different.
Before you do retargeting, make sure to segment your audience. Each customer is unique, that’s why you have to make sure he is shown the right ad. For example, you can make a list of those visitors who viewed sunglasses on your website. This way you can show them appropriate ads. A useful tip to make your retargeting more effective is to use branding. You can include your brand’s logo in the ad, for example. The more people see your logo, the more they remember it. This would increase brand awareness among potential customers. Also, you may want to include a Call-to-Action button to your ads. You want people to do something once they see the ad on Google or Facebook. It may an invitation to sign up for newsletters, subscribe for free one month, download a free e-book, etc.
Gathering marketing intelligence through retargeting has never been easier. In Flexie CRM, you can do retargeting from within the system. As soon as a lead enters the CRM, we inject a tracking pixel into that lead- regardless of the source they came from. This way you can track the lead and show your ad as he browses through Google, or on Facebook.
When doing retargeting, pay attention to timing. When is the best time to release your product? Timing is everything. Why? You want to show ads to the right person, at the right time. Once you segment your audience, you can proceed to launching your marketing campaign. Through retargeting, you will gather insight that will help you make better decisions. Marketing intelligence should be an integral part of your overall marketing strategy, but so is retargeting. The two go hand in hand. If you get the retargeting part right, then you will have an easier time gathering marketing intelligence. Applied knowledge is the real power.
To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy and subscribe to our YouTube channel Flexie CRM. | https://docs.flexie.io/docs/marketing-trends/marketing-intelligence-using-retargeting/ | 2020-02-16T23:41:46 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.flexie.io |
Silverlight and Opera
Although Opera is not an officially supported browser (see my previous post for more details), we do want a good Silverlight experience for Opera users (and a good Opera experience for Silverlight developers). As such, we do some level of Opera testing and look at customer reported Silverlight/Opera issues. In general, the Silverlight experience in Opera works OK, with a few exceptions and one that's easy to correct: when hosting Silverlight in Opera via the MS AJAX Silverlight control. When doing this, the Silverlight control will correctly instantiate in Opera but the Silverlight content won’t load and you’ll see a blank page. The good news is this is reasonably easy to work-around and we are investigating the root cause of this issue (a bug has been logged to Opera on this issue and the Silverlight product team will look at it as well).
The bug is manifested in Opera when loading dynamic source when there is a dynamically set onload handler:
<head>
<script type="text/javascript">
function pageLoaded() {
var obj = document.getElementById('obj1');
obj.OnLoad = xamlLoaded; // Remove and this works in Opera
obj.Source = "source.xaml";
}
function xamlLoaded() {}
</script>
</head>
<body onload="pageLoaded()">
<object id="obj1" type="application/x-silverlight-2"/>
</body>
This is the way the MS AJAX control needs to load Silverlight in order to integrate into the MS AJAX framework. The best way to work-around this issue is to replace the MS AJAX hosting with plain object tag hosting:
<body>
<!-- Runtime errors from Silverlight will be displayed here.
This will contain debugging information and should be removed or hidden when debugging is completed -->
<div id="silverlightControlHost">
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
<param name="source" value="PATH_TO_YOUR_XAP.xap"/>
<param name="background" value="white" />
<param name="minRuntimeVersion" value="2.0.31005.0" />
<param name="autoUpgrade" value="true" />
<a href="" style="text-decoration: none;">
<img src="" alt="Get Microsoft Silverlight" style="border-style: none"/>
</a>
</object>
<iframe style='visibility:hidden;height:0;width:0;border:0px'></iframe>
</div>
</body> | https://docs.microsoft.com/en-us/archive/blogs/jstegman/silverlight-and-opera | 2020-02-16T23:49:11 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Network tracing (packet sniffing) built-in to Windows and Windows Server.
Applies to:
Windows 10, version 1803
Windows Server 1709
Windows 10, version 1709
Windows 10, version 1703
Windows Server 2016
Windows 10, version 1607
Windows 10, version 1511
Windows 10, version 1507
Windows Server 2012 R2
Windows 8.1
Windows Server 2012
Windows 8
Windows Server 2008 R2
Windows 7
Does not apply:
Windows Server 2008
Windows Vista
Windows Server 2003
Windows XP
Originally published Dec 2012. Updated June 2015, Nov. 2016, May 2018..
Step 1. WARNING: In Windows 7 and Windows Server 2008 R2, you could run into:
2582260 "0x0000000A" Stop error when you perform ETW tracing on the Afd.sys driver in Windows 7 or in Windows Server 2008 R2
Please make sure to install the hotfix above before you proceed.
Step 2. Before you capture any network trace, here are questions you should have ready when you are capturing it:
Network tracing (packet sniffing) data to provide when troubleshooting.
Step 3. Minimize the noise.
Close all the applications that are unnecessary for the issue that you are investigating.
Step Administrator” 5. Start, CMD (Run as admin)
Type “Netsh trace start scenario=NetConnection capture=yes report=yes persistent=no maxsize=1024 correlation=no traceFile=C:\Temp\NetTrace.etl” without the quotation marks and then press Enter.
Note: Details of all the options are available in the links to more information.
Note 2: You always want to take network traces from both sides (sending and receiving).
Step 6. Reproduce the issue.
Open a second CMD (Run as admin)
When you have the repro, to make the network trace with a ‘marker’ that you are done.
Type “ping 127.0.0.1” without the quotation marks and then press Enter.
Step 7. To stop the network capture
Type “netsh trace stop” without the quotation marks and then press Enter.
Once you have the nettrace.etl file, you could copy it off the server or client to your Windows client.
In your Windows client, you would use Microsoft Network Monitor 3.4 to analyze the network packets.
In your Windows machine, you could use Microsoft Message Analyzer to analyze the network packets.
More information:
Troubleshoot -related issues *Windows 10 1607 and newer only
Network Tracing in Windows 7
Network Tracing in Windows 7 (Windows)
Netsh Commands for Trace
Netsh Commands for Network Trace in Windows Server 2008 R2 and Windows 7
Event Tracing for Windows and Network Monitor
Tool: Installing the Microsoft Message Analyzer version 1.3
How to setup a local network trace using “Start Local Trace” in Message Analyzer v1.3?
How to setup a local network trace on the LAN using Message Analyzer v1.3 UI?
P.S. Getting network trace during a boot.
Type “Netsh trace start scenario=AddressAcquisition,FileSharing,LAN,Layer2,NDIS,NetConnection,WLAN capture=yes report=yes persistent=yes maxsize=1024 correlation=no traceFile=C:\temp\NetTrace.etl” without the quotation marks and then press Enter.
To stop the network capture
Type “netsh trace stop” without the quotation marks and then press Enter. | https://docs.microsoft.com/en-us/archive/blogs/yongrhee/network-tracing-packet-sniffing-built-in-to-windows-server-2008-r2-and-windows-server-2012-2 | 2020-02-16T23:46:27 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Visual Studio 2013 System Requirements
System requirements for the Visual Studio 2013 family of products are listed in the table below. For more information on compatibility, please see Visual Studio 2013 Platform Targeting and Compatibility.
To view system requirements for specific products, click on a bookmark below:
Download
You can download Visual Studio 2013 from My.VisualStudio.com. My.VisualStudio.com requires a free Dev Essentials subscription, or a Visual Studio Subscription. | https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-sysrequirements-vs | 2020-02-16T22:59:19 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Outbound DKIM signing in Office 365
Every week I work with multiple customers that have experienced phishing attacks where their own domain has been spoofed by the attacker. The conversation always revolves around implementing SPF (Sender Policy Framework) and DMARC (Domain-based Message Authentication, Reporting, and Conformance) to secure their domain. For organizations that cannot use SPF because of its limits (the record is restricted to 10 DNS lookups), we usually discuss DKIM (DomainKeys Identified Mail).
You (yes you!) can currently enable DKIM on your Office 365 tenant through a manual process. My colleague, Terry Zink, has a great blog post on the steps to take to enable DKIM signing for outbound mail from your Office 365 tenant: Manually hooking up DKIM signing in Office 365. Please check out his article (hyperlink in the previous sentence) for the steps on enabling DKIM.
If you have out grown the limitations imposed by SPF, then it’s time to investigate implementing DKIM.
Why DKIM and not just SPF?
The biggest reason companies turn to DKIM is because of a limitation on SPF. SPF records can only contain ten DNS lookups. If you are an organization that uses a lot of third party companies to send mail on your behalf (where they spoof your domain) then your SPF record may contain more than ten DNS lookups, which essentially renders it useless for protecting your domain against spoofing. Most receiving servers will stop evaluating SPF records once they have hit the DNS lookup limit of ten and at that point will stamp “permerror” for the SPF result.
This is where DKIM comes in. First let’s look how it works, and then we’ll look at how it overcomes the DNS lookup limit imposed by SPF. From an EXTREMELY high level view, here is how DKIM works.
Prep Work
- You obtain a certificate for DKIM signing and publish the public key in your public DNS.
Outbound Mail
- Your mail server calculates a hash for all outbound messages.
- This hash is then encrypted with the private key of your DKIM certificate, and this value is placed in the header of the message
Receiving Server
- The Receiving server obtains the sending domains public DKIM key from DNS and uses this to decrypt the hash value that the sender placed in the header. We’ll call this value X.
- The receiving server then re-calculates the hash for the inbound message. We’ll call this value Y.
- If X = Y, the receiving server knows that the message was not tampered with in transit.
Even with DKIM, you will still want to publish a DMARC record to prevent attackers that spoof using different MailFrom and From headers. See my previous article Using DMARC to Prevent Spoofing.
For DMARC to pass, you need either SPF or DKIM to pass. This means that you can leave an existing SPF record in place while you implement DKIM. Third parties that don’t support DKIM can remain in your SPF record, while third parties that do support DKIM only need to be giving your DKIM private key.
One quick note, EOP does not allow exporting of your private DKIM keys. You would need to set up DKIM with a 3rd party gateway separately. Or, if routing through Office 365 from the gateway, set up a connector using TLS-based authentication (preferable) or IP/domain-based authentication. In that case, Office 365 would attribute the message to the organization and assign the DKIM signature.
Wrap up
The above explanation of DKIM is from 60,000 feet. It is very high level and there are many configuration options and decisions that you’ll need to consider before implementing DKIM. For technical specs on DKIM, DMARC, and SPF, see the links in the Resources section of the article.
Resources
Manually hooking up dkim signing in Office 365 How Office 365 does automatic DKIM key rotation
Using DMARC to prevent spoofing
DKIM
SPF
DMARC | https://docs.microsoft.com/en-us/archive/blogs/eopfieldnotes/outbound-dkim-signing-in-office-365 | 2020-02-16T23:43:50 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Catch the Wave—Using the Net for patient care
Technorati Tags: eHealth,telemedicne,internet,health reform,Obama,Wall Street Journal,Microsoft,American Well,HealthVault,HMSA,information therapy,healthcare costs,care quality,care accessibility | https://docs.microsoft.com/en-us/archive/blogs/healthblog/catch-the-waveusing-the-net-for-patient-care | 2020-02-16T23:26:07 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/healthblog/WindowsLiveWriter/CatchtheWaveUsingtheNetforpatientcare_B2CE/image_thumb_2.png',
'image image'], dtype=object) ] | docs.microsoft.com |
What's new in the Visual Studio 2010 Web performance and load test documentaion (Part Two - Screenshots with callout art)
In What's new in the Visual Studio 2010 Web performance and load test documentation (Part One) , I talked about how we took a new approach to how the documentation is presented by identifying key customer scenarios to create content around. This strategy was developed by soliciting feedback from numerous customers in various venues. As part of this strategy we used a new model for our high-level introduction topics. The new topics provide much more detail and tables with information about the content contained in the section. This blog post, part two is regarding how we incorporated the use of screenshots with call-outs in procedural topics (How to and Walkthrough topics). We used screenshots with callouts for two approaches:
- For situations where the user can quickly learn the steps to accomplish the task
- For situations where the steps are complex or not intuitive
Quickly Learn the Steps
As an example for the first approach, we used screenshots with call-outs extensively for the documentation coverage of the new manual test runner. A customer scenario we identified is a company or organization hiring temporary workers to run manual tests. Having these simple quick reference type of screenshots with call-outs should help in bringing workers up to speed and provide them with a quick reference should they forget any of the steps. To evaluate these illustrations, see the various How To topics under Running Manual Tests Using Test Runner.
Here is perhaps one of the simplest examples I created for the topic How to: Insert Additional Scenarios to an Existing Load Test. In this case, the reader can immediately identify the steps required to perform the steps to add a new scenario to their load test without having to take the time to read the full procedures.
Note We still include the full detailed list of procedural steps below the illustrations for readers who are visually impaired and to include additional information.
%5B1%5D.png)
Complex Steps
For the second approach, we added screenshots with call-outs to steps or procedures that would benefit from having illustrations. For example, here is the screenshot with call-outs I created for the topic How to: Run a Load Test Containing Web Performance Tests that Collects ASP.NET Profiler Data:
%5B1%5D.png)
Please let us know if you like these, dislike these or have other suggestions for the documentation by clicking the Add Content button
on the MSDN library toolbar and providing us feedback.
Thanks,
Howie Hilliker
Senior Programmer Writer - Visual Studio Team Test | https://docs.microsoft.com/en-us/archive/blogs/howie_hillikers_blog/whats-new-in-the-visual-studio-2010-web-performance-and-load-test-documentaion-part-two-screenshots-with-callout-art | 2020-02-16T21:58:20 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/01/38/28/4035.ms184803.LTest_Scenarios(en-us,VS.100',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/01/38/28/0435.Ff805009.LoadTestASPNETProfilerTestSettings(en-us,VS.100',
None], dtype=object) ] | docs.microsoft.com |
In order to take full advantage of Unity’s humanoid animation system and retargeting, you need to have a rigged and skinned humanoid type mesh.
A character Model and skin your own character from scratch using a 3D modeling application.
This is the process of creating your own humanoid Mesh-pose applications use different methods. For example, you can assign individual vertices and paint the weighting of influence per bone onto the Mesh.
The initial setup is typically automated: for example, by finding the nearest influence or using heatmaps..
Unity imports a number of different generic and native 3D file formats. FBX is the recommended format for exporting and verifying your Model since you can use it to: | https://docs.unity3d.com/es/2019.2/Manual/UsingHumanoidChars.html | 2020-02-16T23:29:14 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.unity3d.com |
Why are my State Miles Showing on the Wrong Date?
Problem:
State Mileage information is showing up on the wrong date
Solution:
State Crossing report shows events that occurred in the timezone based on the company group that the truck is assigned
To check for this:
- Verify the timezone of the Terminal Group that the driver is assigned to
- This will determine the time and date on the driver's logs
- Verify the timezone of the Company Group that the driver is assigned to
- This will determine the time and date of the State Crossing Report
- Create a breadcrumb trail for the truck using the mapping tool with the timezone of the company group the truck is assigned to
- If the newly created breadcrumb trail shows a State Crossing that does NOT show up on the Report, please call in to Technical Support
Related articles | http://docs.drivertech.com/pages/viewpage.action?pageId=26378599 | 2020-02-16T22:19:56 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.drivertech.com |
All content with label batching+client_server+demo+docs+getting+gridfs+infinispan+mvcc+notification+read_committed+repeatable_read+setup+userguide+wcm+websocket.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, future, lock_striping, nexus, guide, schema, editor, listener, cache,
amazon, s3, memcached, grid, test, jcache, api, xsd, ehcache, maven, documentation, page, write_behind, ec2, 缓存, s, hibernate, aws, templates, custom_interceptor, clustering, eviction, template, out_of_memory, concurrency, jboss_cache, examples, tags, import, index, events, hash_function, batch, configuration, buddy_replication, loader, write_through, cloud, remoting, tutorial, jbosscache3x, distribution, composition, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, transaction, async, interactive, xaresource, build, gatein, categories, searchable, scala, installation, client, migration, non-blocking, filesystem, jpa, design, tx, user_guide, gui_demo, eventing, post, content, testng, infinispan_user_guide, standalone, webdav, hotrod, snapshot, tasks, consistent_hash, store, jta, faq, 2lcache, as5, downloads, jsr-107, jgroups, lucene, locking, rest, uploads, hot_rod
more »
( - batching, - client_server, - demo, - docs, - getting, - gridfs, - infinispan, - mvcc, - notification, - read_committed, - repeatable_read, - setup, - userguide, - wcm, - websocket )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/batching+client_server+demo+docs+getting+gridfs+infinispan+mvcc+notification+read_committed+repeatable_read+setup+userguide+wcm+websocket | 2020-02-16T22:10:18 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.jboss.org |
You can add custom CSS rules to WordPress to style your directories and location maps without having to wait for a custom Store Locator Plus® style or a theme that supports full CSS rules. WordPress has a built-in CSS customization tool that applies to all pages on the site.
You can use this custom CSS tool to do things like format the MySLP SaaS, or WPSLP plugin, directory listings.
How To Add Custom Rules
Login to your site as a site administrator.
Click on the Customize menu entry in the admin toolbar.
Click on Additional CSS in the sidebar.
Enter a fully qualified CSS rule.
This example is used to style the MySLP SaaS directory listings on the site. MySLP is a pure JavaScript embed implementation and requires the site hosting the embed to add extra CSS rules. The output changes the MySLP Directory state selector from a vertical list to a horizontal list as shown on the MySLP Directory page. | https://docs.storelocatorplus.com/blog/tag/formatting/ | 2020-02-16T22:56:23 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://docs.storelocatorplus.com/wp-content/uploads/2020/01/Customize-CSS-in-WordPress.png',
None], dtype=object)
array(['https://docs.storelocatorplus.com/wp-content/uploads/2020/01/Additional-CSS-2020-01-17_10-01-20.png',
None], dtype=object)
array(['https://docs.storelocatorplus.com/wp-content/uploads/2020/01/Adding-A-CSS-Rule-2020-01-17_10-02-10.png',
None], dtype=object) ] | docs.storelocatorplus.com |
You are looking at documentation for an older release. Not what you want? See the current release documentation.:
Necessary information about planning your backup.
Instructions on how to back up and restore eXo Platform.
Backup MongoDB database for eXo chat
Instructions on how to back up eXo chat Database. | https://docs-old.exoplatform.org/public/topic/PLF50/PLFAdminGuide.Backup.html | 2020-02-16T23:25:39 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs-old.exoplatform.org |
Cable cells¶
Warning
The interface for building and modifying cable cell objects has changed significantly; some of the documentation below is out of date.
The C++ cable cell documentation should have the same structure as the Python cable cell documentation.
Cable cells, which use the
cell_kind cable,
represent morphologically-detailed neurons as 1-d trees, with
electrical and biophysical properties mapped onto those trees.
A single cell is represented by an object of type
cable_cell.
Properties shared by all cable cells, as returned by the recipe
get_global_properties method, are described by an
object of type
cable_cell_global_properties.
The
cable_cell object¶
Cable cells are constructed from a
morphology; an optional
label_dict that associates names with particular points
(
locset objects) or subsets (
region objects) of the
morphology; and an optional decor.
Morphologies are constructed from a
segment_tree, but can also
be generated via the
stitch_builder, which offers a slightly
higher level interface. Details are described in The stitch-builder interface.
Each cell has particular values for its electrical and ionic properties. These
are determined first by the set of global defaults, then the defaults
associated with the cell, and finally by any values specified explicitly for a
given subsection of the morphology via the
paint interface of the decor
(see Electrical properties and ion values and Overriding properties locally).
Ion channels and other distributed dynamical processes are also specified
on the cell via the
paint method; while synapses, current clamps,
gap junction mechanisms, and the site for testing the threshold potential
are specified via the
place method. See Cell dynamics, below.
Cell dynamics¶
Each segment in a cell may have attached to it one or more density mechanisms, which describe biophysical processes. These are processes that are distributed in space, but whose behaviour is defined purely by the state of the cell and the process at any given point.
Cells may also have point mechanisms, describing the dynamics at post-synaptic sites. And junction mechanisms, describing the dynamics at each site of the two sites of a gap-junction connection.
A fourth type of mechanism, which describes ionic reversal potential behaviour, can be specified for cells or the whole model via cell parameter settings, described below.
Mechanisms are described by a
mechanism_desc object. These specify the
name of the mechanism (used to find the mechanism in the mechanism catalogue)
and parameter values for the mechanism that apply within a segment.
A
mechanism_desc is effectively a wrapper around a name and
a dictionary of parameter/value settings.
Mechanism descriptions can be constructed implicitly from the
mechanism name, and mechanism parameter values then set with the
set method. Relevant
mechanism_desc methods:
- mechanism_desc::mechanism_desc(std::string name)¶
Construct a mechanism description for the mechanism named name.
- mechanism_desc &mechanism_desc::set(const std::string &key, double value)¶
Sets the parameter associated with key in the description. Returns a reference to the mechanism description, so that calls to set can be chained in a single expression.
density,
synapse and
junction objects are thin wrappers
around a
mechanism_desc, needed for painting and placing mechanisms on a
decor.
Relevant methods:
- density::density(mechanism_desc mech, const std::unordered_map<std::string, double> ¶ms)¶
For each
{key, value}pair in params, set the parameter associated with
keyto
valueon mechanism
mech, then construct a density wrapper from the mechanism mech.
- synapse::synapse(mechanism_desc mech, const std::unordered_map<std::string, double> ¶ms)¶
For each
{key, value}pair in params, set the parameter associated with
keyto
valueon mechanism
mech, then construct a synapse wrapper from the mechanism mech.
- junction::junction(mechanism_desc mech, const std::unordered_map<std::string, double> ¶ms)¶
For each
{key, value}pair in params, set the parameter associated with
keyto
valueon mechanism
mech, then construct a junction wrapper from the mechanism mech.
Density mechanisms are associated with a cable cell object with:
Point mechanisms, which are associated with connection end points on a cable cell, are placed on a set of locations given by a locset. The group of generated items are given a label which can be used to create connections in the recipe. Point mechanisms are attached to a cell with:
Gap-junction mechanisms, which are associated with gap-junction connection end points on a cable cell, are placed on a single location given by a locset (locsets with multiple locations will raise an exception). The generated item is given a label which can be used to create gap-junction connections in the recipe. Gap-junction mechanisms are attached to a cell with:
Todo
TODO: describe other
place-able things: current clamps, threshold potential measurement point.
Electrical properties and ion values¶
On each cell segment, electrical and ion properties can be specified by the
parameters field, of type
cable_cell_local_parameter_set.
The
cable_cell_local_parameter_set has the following members,
where an empty optional value or missing map key indicates that the corresponding
value should be taken from the cell or global parameter set.
- class cable_cell_local_parameter_set¶
The keys of this map are names of ions, whose parameters will be locally overridden. The struct
cable_cell_ion_datahas three fields:
init_int_concentration,
init_ext_concentration, and
init_reversal_potential.
Internal and external concentrations are given in millimolars, i.e. mol/m³. Reversal potential is given in millivolts.
Initial membrane potential in millivolts.
Local temperature in Kelvin.
Local resistivity of the intracellular medium, in ohm-centimetres.
Local areal capacitance of the cell membrane, in Farads per square metre.
Method by which CV boundaries are determined when the cell is discretised. See Discretisation and CV policies.
Default parameters for a cell are given by the default_parameters
field in the
cable_cell object. This is a value of type
cable_cell_parameter_set,
which extends
cable_cell_local_parameter_set by adding an additional
field describing reversal potential computation:
- class cable_cell_parameter_set : public cable_cell_local_parameter_set¶
Maps the name of an ion to a ‘reversal potential’ mechanism that describes how it should be computed. When no mechanism is provided for an ionic reversal potential, the reversal potential will be kept at its initial value.
Default parameters for all cells are supplied in the
cable_cell_global_properties
struct.
Global properties¶
- class cable_cell_global_properties¶
all mechanism names refer to mechanism instances in this mechanism catalogue. by default, this is set to point to global_default_catalogue(), the catalogue that contains all mechanisms bundled with arbor.
if set, check to see if the membrane voltage ever exceeds this value in magnitude during the course of a simulation. if so, throw an exception and abort the simulation.
when synapse dynamics are sufficiently simple, the states of synapses within the same discretised element can be combined for better performance. this is true by default.
every ion species used by cable cells in the simulation must have an entry in this map, which takes an ion name to its charge, expressed as a multiple of the elementary charge. by default, it is set to include sodium “na” with charge 1, calcium “ca” with charge 2, and potassium “k” with charge 1.
- cable_cell_parameter_set default_parameters¶
the default electrical and physical properties associated with each cable cell, unless overridden locally. in the global properties, every optional field must be given a value, and every ion must have its default values set in default_parameters.ion_data.
- add_ion(const std::string &ion_name, int charge, double init_iconc, double init_econc, double init_revpot)¶
convenience function for adding a new ion to the global ion_species table, and setting up its default values in the ion_data table.
- add_ion(const std::string &ion_name, int charge, double init_iconc, double init_econc, mechanism_desc revpot_mechanism)¶
As above, but set the initial reversal potential to zero, and use the given mechanism for reversal potential calculation.
For convenience, neuron_parameter_defaults is a predefined
cable_cell_local_parameter_set
value that holds values that correspond to NEURON defaults. To use these values,
assign them to the default_parameters field of the global properties
object returned in the recipe.
Reversal potential dynamics¶
If no reversal potential mechanism is specified for an ion species, the initial reversal potential values are maintained for the course of a simulation. Otherwise, a provided mechanism does the work, but it subject to some strict restrictions. A reversal potential mechanism described in NMODL:
May not maintain any STATE variables.
Can only write to the “eX” value associated with an ion.
Can not be given as a POINT mechanism.
Essentially, reversal potential mechanisms must be pure functions of cellular and ionic state.
If a reversal potential mechanism writes to multiple ions, then if the mechanism is given for one of the ions in the global or per-cell parameters, it must be given for all of them.
Arbor’s default catalogue includes a “nernst” reversal potential, which is parameterized over a single ion, and so can be assigned to e.g. calcium in the global parameters via
cable_cell_global_properties gprop; // ... gprop.default_parameters.reversal_potential_method["ca"] = "nernst/ca";
This mechanism has global scalar parameters for the gas constant R and Faraday constant F, corresponding to the exact values given by the 2019 redefinition of the SI base units. These values can be changed in a derived mechanism in order to use, for example, older values of these physical constants.
mechanism_catalogue mycat(global_default_catalogue()); mycat.derive("nernst1998", "nernst", {{"R", 8.314472}, {"F", 96485.3415}}); gprop.catalogue = &mycat; gprop.default_parameters.reversal_potential_method["ca"] = "nernst1998/ca";
Overriding properties locally¶
Todo
TODO: using
paint to specify electrical properties on subsections of
the morphology. | https://docs.arbor-sim.org/en/latest/cpp/cable_cell.html | 2022-09-24T22:28:36 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.arbor-sim.org |
On October 22, 2021, Splunk Light will reach its end of life. After this date, Splunk will no longer maintain or develop this product.
Generate.
Last modified on 21 February, 2017
This documentation applies to the following versions of Splunk® Light (Legacy): 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/SplunkLight/7.3.6/GettingStarted/GeneratingdashboardPDFsandprinting | 2022-09-24T23:39:09 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
On the Home tab of the AdminTool, you can see all the needed information about your TSplus server: tunneling.
The session manager is located right below the RDP port:
You can display your server's task manager, and you have the possibilities to active a remote control, disconnect, logoff or send a message to your users.
You can activate the remote control via a remote session with an admin account on the following Operating Systems:
See this documentation for more information.
Windows Server 2016 introduced a new "Per user service", which makes services start all processes per users, which slows the users logons time.
Since TSplus 11.70 release, you can disable per user services in order to speed up users logons.
- You can also launch the "Server Properties" tab to have an overview of the control panel.
- You can see all the services on your server and their status on the Services tile.
The session opening preference allows you to choose your shell session preference, your logon preferences, the background color of your sessions, add your own logo and rename it to your liking.
By default, on these logon preferences are enabled:
You can also set a full Desktop for all your users and get a display the last connected users by ticking the corresponding boxes. You can customize your users sessions by adding a new Background Color, another logo or none and use the session name of your choice.
You can backup or restore your server parameters by clicking on the tile of the same name, on the Advanced tab:
Click on the Backup button to make a backup, which will be dated and added to the list of your restore points:
The backup file can be found on the C:\Backupparam folder:
Reboot your server
The "Reboot the server tab" allows you to reboot your server. | https://docs.terminalserviceplus.com/tsplus-lts-12/server-management | 2022-09-24T23:39:21 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.terminalserviceplus.com |
Details¶
support for simple lists as mapping keys by transforming these to tuples
!!omapgenerates ordereddict (C) on Python 2, collections.OrderedDict on Python 3, and
!!omapis generated for these types.
Tests whether the C yaml library is installed as well as the header files. That library doesn’t generate CommentTokens, so it cannot be used to do round trip editing on comments. It can be used to speed up normal processing (so you don’t need to install
ruyamland
PyYaml). See the section Optional requirements.
Basic support for multiline strings with preserved newlines and chomping ( ‘
|’, ‘
|+’, ‘
|-’ ). As this subclasses the string type the information is lost on reassignment. (This might be changed in the future so that the preservation/folding/chomping is part of the parent container, like comments).
anchors names that are hand-crafted (not of the form``idNNN``) are preserved
merges in dictionaries are preserved
adding/replacing comments on block-style sequences and mappings with smart column positioning
collection objects (when read in via RoundTripParser) have an
lcproperty that contains line and column info
lc.lineand
lc.col. Individual positions for mappings and sequences can also be retrieved (
lc.key('a'),
lc.value('a')resp.
lc.item(3))
preservation of whitelines after block scalars. Contributed by Sam Thursfield.
In the following examples it is assumed you have done something like::
from ruyaml import YAML yaml = YAML()
if not explicitly specified.
Indentation of block sequences¶
Although ruyaml doesn’t preserve individual indentations of block sequence items, it does properly dump:
x: - b: 1 - 2
back to:
x: - b: 1 - 2
if you specify
yaml.indent(sequence=4) (indentation is counted to the
beginning of the sequence element).
PyYAML (and older versions of ruyaml) gives you non-indented scalars (when specifying default_flow_style=False):
x: - b: 1 - 2
You can use
mapping=4 to also have the mappings values indented.
The dump also observes an additional
offset=2 setting that
can be used to push the dash inwards, within the space defined by
sequence.
The above example with the often seen
yaml.indent(mapping=2, sequence=4, offset=2)
indentation:
x: y: - b: 1 - 2
The defaults are as if you specified
yaml.indent(mapping=2, sequence=2, offset=0).
If the
offset equals
sequence, there is not enough
room for the dash and the space that has to follow it. In that case the
element itself would normally be pushed to the next line (and older versions
of ruyaml did so). But this is
prevented from happening. However the
indent level is what is used
for calculating the cumulative indent for deeper levels and specifying
sequence=3 resp.
offset=2, might give correct, but counter
intuitive results.
It is best to always have
sequence >= offset + 2
but this is not enforced. Depending on your structure, not following
this advice might lead to invalid output.
Inconsistently indented YAML¶
If your input is inconsistently indented, such indentation cannot be preserved. The first round-trip will make it consistent/normalize it. Here are some inconsistently indented YAML examples.
b indented 3,
c indented 4 positions:
a: b: c: 1
Top level sequence is indented 2 without offset, the other sequence 4 (with offset 2):
- key: - foo - bar
Positioning ‘:’ in top level mappings, prefixing ‘:’¶
If you want your toplevel mappings to look like:
library version: 1 comment : | this is just a first try
then set
yaml.top_level_colon_align = True
(and
yaml.indent = 4).
True causes calculation based on the longest key,
but you can also explicitly set a number.
If you want an extra space between a mapping key and the colon specify
yaml.prefix_colon = ' ':
- : 23445 # ^ extra space here - : 944
If you combine
prefix_colon with
top_level_colon_align, the
top level mapping doesn’t get the extra prefix. If you want that
anyway, specify
yaml.top_level_colon_align = 12 where
12 has to be an
integer that is one more than length of the widest key.
Document version support¶
In YAML a document version can be explicitly set by using:
%YAML 1.x
before the document start (at the top or before a
---). For
ruyaml x has to be 1 or 2. If no explicit
version is set version 1.2
is assumed (which has been released in 2009).
The 1.2 version does not support:
sexagesimals like
12:34:56
octals that start with 0 only: like
012for number 10 (
0o12is supported by YAML 1.2)
Unquoted Yes and On as alternatives for True and No and Off for False.
If you cannot change your YAML files and you need them to load as 1.1
you can load with
yaml.version = (1, 1),
or the equivalent (version can be a tuple, list or string)
yaml.version = "1.1"
If you cannot change your code, stick with ruyaml==0.10.23 and let me know if it would help to be able to set an environment variable.
This does not affect dump as ruyaml never emitted sexagesimals, nor octal numbers, and emitted booleans always as true resp. false
Round trip including comments¶
The major motivation for this fork is the round-trip capability for comments. The integration of the sources was just an initial step to make this easier.
adding/replacing comments¶
Starting with version 0.8, you can add/replace comments on block style collections (mappings/sequences resuting in Python dict/list). The basic for for this is:
from __future__ import print_function import sys import ruyaml yaml = ruyaml.YAML() # defaults to round-trip inp = """\ abc: - a # comment 1 xyz: a: 1 # comment 2 b: 2 c: 3 d: 4 e: 5 f: 6 # comment 3 """ data = yaml.load(inp) data['abc'].append('b') data['abc'].yaml_add_eol_comment('comment 4', 1) # takes column of comment 1 data['xyz'].yaml_add_eol_comment('comment 5', 'c') # takes column of comment 2 data['xyz'].yaml_add_eol_comment('comment 6', 'e') # takes column of comment 3 data['xyz'].yaml_add_eol_comment('comment 7', 'd', column=20) yaml.dump(data, sys.stdout)
Resulting in:
abc: - a # comment 1 - b # comment 4 xyz: a: 1 # comment 2 b: 2 c: 3 # comment 5 d: 4 # comment 7 e: 5 # comment 6 f: 6 # comment 3
If the comment doesn’t start with ‘#’, this will be added. The key is the element index for list, the actual key for dictionaries. As can be seen from the example, the column to choose for a comment is derived from the previous, next or preceding comment column (picking the first one found).
Config file formats¶
There are only a few configuration file formats that are easily readable and editable: JSON, INI/ConfigParser, YAML (XML is to cluttered to be called easily readable).
Unfortunately JSON doesn’t support comments, and although there are some solutions with pre-processed filtering of comments, there are no libraries that support round trip updating of such commented files.
INI files support comments, and the excellent ConfigObj library by Foord and Larosa even supports round trip editing with comment preservation, nesting of sections and limited lists (within a value). Retrieval of particular value format is explicit (and extensible).
YAML has basic mapping and sequence structures as well as support for ordered mappings and sets. It supports scalars various types including dates and datetimes (missing in JSON). YAML has comments, but these are normally thrown away.
Block structured YAML is a clean and very human readable format. By extending the Python YAML parser to support round trip preservation of comments, it makes YAML a very good choice for configuration files that are human readable and editable while at the same time interpretable and modifiable by a program.
Extending¶
There are normally six files involved when extending the roundtrip capabilities: the reader, parser, composer and constructor to go from YAML to Python and the resolver, representer, serializer and emitter to go the other way.
Extending involves keeping extra data around for the next process step, eventuallly resulting in a different Python object (subclass or alternative), that should behave like the original, but on the way from Python to YAML generates the original (or at least something much closer).
Smartening¶
When you use round-tripping, then the complex data you get are already subclasses of the built-in types. So you can patch in extra methods or override existing ones. Some methods are already included and you can do:
yaml_str = """\ a: - b: c: 42 - d: f: 196 e: g: 3.14 """ data = yaml.load(yaml_str) assert data.mlget(['a', 1, 'd', 'f'], list_ok=True) == 196 | https://ruyaml.readthedocs.io/en/latest/detail.html | 2022-09-24T22:10:37 | CC-MAIN-2022-40 | 1664030333541.98 | [] | ruyaml.readthedocs.io |
survive.Breslow¶
- class
survive.
Breslow(*, conf_type='log', conf_level=0.95, var_type='aalen', tie_break='discrete')[source]¶
Breslow nonparametric survival function estimator.
See also
survive.NelsonAalen
- Nelson-Aalen cumulative hazard function estimator.
Notes
The Breslow estimator is a nonparametric estimator of the survival function of a time-to-event distribution defined as the exponential of the negative of the Nelson-Aalen cumulative hazard function estimator \(\widehat{A}(t)\):\[\widehat{S}(t) = \exp(-\widehat{A}(t)).\]
This estimator was introduced in a discussion [1] following [2]. It was later studied by Fleming and Harrington in [3], and it is sometimes called the Fleming-Harrington estimator.
The parameters of this class are identical to the parameters of
survive.NelsonAalen. The Breslow survival function estimates and confidence interval bounds are transformations of the Nelson-Aalen cumulative hazard estimates and confidence interval bounds, respectively. The variance estimate for the Breslow estimator is computed using the variance estimate for the Nelson-Aalen estimator using the Nelson-Aalen estimator’s asymptotic normality and the delta method:\[\widehat{\mathrm{Var}}(\widehat{S}(t)) = \widehat{S}(t)^2 \widehat{\mathrm{Var}}(\widehat{A}(t))\]
Comparisons of the Breslow estimator and the more popular Kaplan-Meier estimator (cf.
survive.KaplanMeier) can be found in [3] and [4]. One takeaway is that the Breslow estimator was found to be more biased than the Kaplan-Meier estimator, but the Breslow estimator had a lower mean squared error.
References
Methods
fit(time, **kwargs)[source]¶
Fit the Breslow estimator to survival data.
See also
survive.SurvivalData
- Structure used to store survival data.
survive.NelsonAalen
- Nelson-Aalen cumulative hazard estimator.. | https://survive-python.readthedocs.io/generated/survive.Breslow.html | 2022-09-24T22:41:13 | CC-MAIN-2022-40 | 1664030333541.98 | [] | survive-python.readthedocs.io |
get-cors CORS policy
The following
get-cors-policy example displays the cross-origin resource sharing (CORS) policy that is assigned to the specified container.
aws mediastore get-cors-policy \ --container-name ExampleContainer \ --region us-west-2
Output:
{ "CorsPolicy": [ { "AllowedMethods": [ "GET", "HEAD" ], "MaxAgeSeconds": 3000, "AllowedOrigins": [ "" ], "AllowedHeaders": [ "" ] } ] }
For more information, see Viewing a CORS Policy in the AWS Elemental MediaStore User Guide.
CorsPolicy -> (list)
The CORS policy assignedobject).
Each CORS rule must have at least one
AllowedOriginselement.sand one
AllowedOriginselement.
(string)
AllowedHeaders -> (list)
Specifies which headers are allowed in a preflight
OPTIONSrequest through the
Access-Control-Request-Headersheader. Each header name that is specified in
Access-Control-Request-Headersmustondselement.
ExposeHeaders -> (list)
One or more headers in the response that you want users to be able to access from their applications (for example, from a JavaScript
XMLHttpRequestobject).
This element is optional for each rule.
(string) | https://docs.aws.amazon.com/zh_cn/cli/latest/reference/mediastore/get-cors-policy.html | 2022-09-24T22:29:27 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.aws.amazon.com |
Azure Pipelines
Covered in this doc
Integrating Percy with your Azure Pipeline Build environment
Configuring environment variables
Step 1: Add PERCY_TOKEN to your pipeline as a secret:
Start by configuring
PERCY_TOKEN, our project-specific, write-only API token. It can be found in your Percy project settings and set in your pipeline's variables as a secret. Remember to click the padlock to encrypt the variable.
In your Azure pipeline:
- First
edityour pipeline.
- Click on the kebab menu and choose variables.
- Add a variable named
PERCY_TOKEN, with the value set to the write-only token from your Percy project. This token can be found in each Percy project's settings. Remember to click the padlock to encrypt it.
- Save your changes.
Step 2: Add PERCY_TOKEN to your build configuration:
Azure's docs show how the PERCY_TOKEN secret can be added to your azure-pipelines.yml file in the env section. i.e.:
- script: | npm install npm run test env: PERCY_TOKEN: $(PERCY_TOKEN)
Keep your Percy token secret
Anyone with access to your token can add builds to your project, though they cannot read data.
More information
If you're working on a public open source project, you may want Percy to run on builds from forks of your repository. The Azure Pipelines documentation titled Validate contributions from forks will be of interest to you.
Updated about 1 year ago
If you haven't installed and configured an SDK or source code integration, those are your next steps to getting started with visual testing. | https://docs.percy.io/docs/azure-pipelines | 2022-09-24T23:13:22 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.percy.io |
.
-. For more information about host customizations, see the vSphere Host Profiles documentation.
Host customization was called answer file in earlier releases of vSphere Auto Deploy. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.upgrade.doc/GUID-9A827220-177E-40DE-99A0-E1EB62A49408.html | 2022-09-24T22:58:03 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['images/GUID-763D0435-1E33-4B34-A78D-2960323F6BF2-high.png',
'VIBs and image profiles, the rule engine, and the Auto Deploy Server are the main components of Auto Deploy'],
dtype=object) ] | docs.vmware.com |
Information for "LDAP/id" Basic information Display titleLDAP Default sort keyLDAP/id Page length (in bytes)452 Page ID34377 Page content languageid - Indonesianw1Rianto (talk | contribs) Date of page creation09:23, 24 May 2014 Latest editorFuzzyBot (talk | contribs) Date of latest edit09:38, 6 April 2020 Total number of edits14 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (2)Templates used on this page: Template:JVer (view source) (semi-protected)Chunk:LDAP/id (view source) Retrieved from "" | https://docs.joomla.org/index.php?title=LDAP/id&action=info | 2022-09-24T23:05:39 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.joomla.org |
jsearch.values( index as cts.reference ) as ValuesSearch
Creates a ValuesSearch object to define and execute a search for a list of indexed values.
Use the methods of
ValuesSearch to further refine
your query, including scoping the values to those occurring in
documents that match a documents search, defining a sort order,
or computing an aggregate over the values.
// Query the values of the "title" JSON property in all documents // in the directory "/books". const jsearch = require('/MarkLogic/jsearch.sjs'); jsearch.values('title') .where(cts.directoryQuery('/books/')) .result() /* Result: A list of values from the "title" element range index that satisfy the query, similar to the following: ["Adventures of Huckleberry Finn", "Adventures of Tom Sawyer", "Collected Works", "East of Eden", "Of Mice and Men", "The Grapes of Wrath"] */
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/jsearch.values | 2022-09-24T22:31:14 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.marklogic.com |
Improvements
Cassandra
This release adds slow query tracking to Cassandra queries made via the Datastax driver. You’ll now see slow CQL queries on the Databases page and within Transaction Traces. You must add the following to your newrelic.yml configuration to enable this feature in High security mode:transaction_tracer:slow_query_whitelist:'com.newrelic.instrumentation.cassandra-datastax-2.1.2'
MongoDB
The Java agent now reports synchronous calls made via MongoDB Java driver 2.14. You will see MongoDB represented on the Overview page, on the Databases page, and in Transaction traces. Note: The asynchronous driver is not yet supported.
Apache Tomcat
This release adds support for Tomcat 8.5
Akka
- This release adds support for Akka forwarding and Akka broadcasting. The agent will now trace messages broadcast or forwarded to actors.
- When the system sends a message to an Actor, the agent now reports the name of the actor system that sent the message. Previously, the agent reported “deadletters” under these circumstances.
Async performance
This release adds performance enhancements for asynchronous frameworks, especially Hystrix. Performance in Hystrix will noticeably improve for most applications, and will be up to 3x faster when tracing low-latency requests (response time <3ms). Performance for low-latency Play applications will be up to 30% faster.
JDBC
This release adds support for the following JDBC drivers:
- MySQL 6.0.2 and higher
- i-Net Oranxo 3.06
- i-Net MERLIA 8.04.03 and 8.06
Fixes
- Fixed a bug that could cause Akka Http instrumentation to throw a NullPointerException into customer code.
- Fixed a bug in the Spymemcached instrumentation that would report operations with the name “None” instead of the correct operation name.
- Fixed a bug that could cause highly asynchronous applications to experience a memory leak in the NewRelic TransactionService. | https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/java-release-notes/java-agent-3280 | 2022-09-24T21:49:10 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.newrelic.com |
Considerations
The integrated backup and restore capability uses Velero as the engine. Please review the considerations below to understand constraints and limitations to help with your design and planning.
Backups¶
Objects that already exist in the Kubernetes cluster will not be overwritten.
Only a single set of credentials per provider are supported.
Volume snapshots are limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is located.
It is not yet possible to send a single backup to multiple backup storage locations at the same time, or a single volume snapshot to multiple locations simultaneously. However, users can set up multiple backups manually or schedules that differ only in the storage locations.
Cross-provider snapshots are not supported. For example, if you have a cluster with more than one type of volume (e.g. NFS and Ceph), but you only have a volume snapshot location configured for NFS, then snapshots will be taken only for the NFS volumes.
Restic data is stored under a prefix/subdirectory of the main Velero bucket and will go into the bucket corresponding backup storage location selected by the user at backup creation time.
Recovery¶
For recovery, ensure that the Kubernetes version is exactly the same as the original cluster.
Migration¶
When performing cluster migration, the new cluster number of nodes should be equal or greater than the original cluster. | https://docs.rafay.co/backup/considerations/ | 2022-09-24T22:45:01 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.rafay.co |
# [Ride v5] ByteVector
⚠️ This is the documentation for the Standard library version 5. We recommend to use version 6. Go to version 6
ByteVector is a data type for byte array.
To assign a value to a
ByteVector variable, you can use a string in Base16, Base58, or Base64 with the appropriate prefix:
let a = base16'52696465' let b = base58'8t38fWQhrYJsqxXtPpiRCEk1g5RJdq9bG5Rkr2N7mDFC' let c = base64'UmlkZQ=='
This method, unlike the fromBase16String, fromBase58String, and fromBase64String functions, does not increase the complexity of the script, since decoding is performed by the compiler.
To convert integer, boolean and string values to a byte array use toBytes function:
let a = 42.toBytes() let b = true.toBytes() let c = "Ride".toBytes()
For more byte array functions, see the Built-in Functions.
# Limitations
The maximum size of a
ByteVector variable is 32,767 bytes.
Exception: the
bodyBytes field of transaction structure. You can pass this value as an argument to the
rsaVerify and
sigVerify verification functions (but cannot concatenate with other byte arrays in case the limit is exceeded). | https://docs.waves.tech/en/ride/v5/data-types/byte-vector | 2022-09-24T23:20:54 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.waves.tech |
The Builder Mediator can be used to build the actual SOAP message from a message coming into the ESB profile of WSO2 Enterprise Integrator (WSO2 EI) through the Binary Relay. One usage is to use this before trying to log the actual message in case of an error. Also with the Builder Mediator in the ESB can be configured to build some of the messages while passing the others along.
In order to use the Builder mediator,
BinaryRealyBuilder should be specified as the message builder in the
<EI_HOME> the message builder that should be used to build the binary stream using the Builder mediator.
By default, Builder Mediator uses the
axis2 default Message builders for the content types. Users can override those by using the optional
messageBuilder configuration. For more information, see Working with Message Builders and Formatters.
Like in
axis2.xml, a user has to specify the content type and the implementation class of the
messageBuilder. Also, users can specify the message
formatter for this content type. This is used by the
ExpandingMessageFormatter to format the message before sending to the destination.
Syntax
<builder> <messageBuilder contentType="" class="" [formatterClass=""]/> </builder> | https://docs.wso2.com/display/EI650/Builder+Mediator | 2022-09-24T23:44:13 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.wso2.com |
Cluster operations is a facility to perform maintenance operations on various items in operations center, such as client controllers and update centers. Different operations are applicable to various items such as performing backups or restarts on client controllers, or upgrading or installing plugins in update centers. waits for anything currently running to finish, and. | https://docs.cloudbees.com/docs/cloudbees-ci/2.332.3.2/traditional-admin-guide/cluster-operations | 2022-09-24T22:56:44 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['../../../cloudbees-common/latest/_images/cluster-operations/all-view.c412a91.png',
'Figure 3. Ad-hoc cluster operation Figure 3. Ad-hoc cluster operation'],
dtype=object)
array(['../../../cloudbees-common/latest/_images/cluster-operations/select-run-adhoc.3bb4c8b.png',
'Figure 4. Ad-hoc cluster operation select run adhoc'],
dtype=object)
array(['../../../cloudbees-common/latest/_images/cluster-operations/ad-hoc-run-now.cafafc3.png',
'Figure 5. Ad-hoc cluster operation ad hoc run now'], dtype=object)
array(['../../../cloudbees-common/latest/_images/cluster-operations/ad-hoc-from-client-master.gif',
'Figure 6. Ad-hoc cluster operation ad hoc from client master'],
dtype=object) ] | docs.cloudbees.com |
Syntax
How to read and write chords, scales and lengths (durations).
Note Syntax
Notes begin with an uppercase letter
A, B, C, D, E, F, G. These can
be followed by one or more accidentals indicating a raise (sharp) or lowering (flat)
by one halftone:
D# = D sharp Eb = E flat Gb = G flat F# = F sharp F## = F double sharp
The C chromatic scale is written this way:
C, C#, D, D#, E, F, F#, G, G#, A, A#, B
Synfire assumes your MIDI equipment is tuned with well-tempered
12TET tuning (Twelve-tone Equal Temperament), which is the default
for all current hardware and software. Thus
D# and
Eb are assumed to send the same MIDI note. Still, Synfire presents and accepts the different enharmonic spellings
correctly, depending on context.
Hence, the Db Major scale is written this way:
Db, Eb, F, Gb, Ab, Bb, C
His rendered for
Band
Bfor
A#( ). Actually there is no reason to do so, unless you are doing a formal music education in Germany. This weird notation stems from a historical confusion of b with h. It is not used anywhere outside German speaking countries.
Interval Syntax
The chromatic distance between two notes is called an Interval. Examples are minor third, augmented fifth, diminished seventh, etc. For the labeling of chords and scales, Synfire uses a shorter form, though, as listed in the column Interval below.
Chord Syntax
For the designation of chords, Synfire uses the standard North American notation commonly used for Jazz. The chord name always begins with the name of the root note, whose spelling depends on the key in which the chord is used.
The root is followed by the designation of various triad forms (or nothing at all, if
it is a major triad). For example, an
m for a minor triad,
dim for a diminished triad,
aug for an
augmented triad, and so forth:
Am, Cdim, F#aug, G, Esus4
An optional numeral
6, 7, 9, 11, or 13 stands for sixth, seventh,
ninth, eleventh, and thirteenth chords:
Am9, C7, Gm9, F#13, Bmaj7, Ebmaj9
Extensions may be appended. These additional notes are numerals, optionally prefixed
with
# or
b, listed in parentheses and separated
by comma:
A7(9,#11), Cm7(b9), Emaj7(9,11), Am(7,9,13)
If only a single extension is added, an alternative notation uses the keyword
add:
A(add9), Cmaj7(add4)
Many chords allow for multiple equivalent notations, although only certain notations are commonly used. For example, these chords on each line are identical:
Am9 = Am(7,9) Am11 = Am(7,9,11) = Am7(9,11) C13 = C(7,9,13) Faug = F(#5) Fmaj7 = F(#7) Fmaj9 = Fmaj7(9) = F(#7,9)
In practice, the exact notation you choose for text input is not relevant in Synfire, as chords will be renamed automatically.
The standard chords included with the Catalog are shown below. You can add more chords to the Catalog as you need.
Slash Chords
Slash Chords are written with a bass note appended after a slash. The bass note need not necessarily be a member of the chord.
Am/F# C/A
Power Chords
Power Chords omit the third interval, playing only the root note and the
fifth. The power chord is an interpretation of the major or minor triad. It cannot
be added to a progression directly, because it has no name in the
Catalog. Writing
F(no3) doesn't help either,
because Synfire needs to consider the full triad to ensure harmonic consistency for
all instruments.
If you want a particular instrument to play power chords, use the Chord symbols of the Figure parameter to draw a chord with only two symbols for the prime and fifths.
C5. Using this in a progression would force all instruments to use only the two notes of the power chord, which is certainly not what you want.
Scale Syntax
Like chords, scales begin with the name of their root note. A period follows that separates the root note from the scale's name, which is arbitrary (i.e. not parsed like chord symbols):
Eb.hungarian-minor C.major F#.aeolian
The name may be followed by a hyphen and references to features, such as added,
altered or omitted notes. Accidentals and alterations use
#, - or
+, - respectively:
F.altered-dominant-bb7 E.locrian+2 C#.lydian-augmented B.natural-minor-b2
The character
@ followed by a digit says the object in question is
the nth inversion (or rotation) of the scale. The example below denotes G
natural minor, starting from the fourth degree, or
Mode 4 of
natural minor:
G.natural-minor@4
A dot followed by
h at the end (not unlike a file extension) denotes
a Horizontal Scale that was automatically generated by Synfire from a Vertical Scale:
[email protected] blues1.h
In the course of your work with Synfire, you will actually never be confronted with having to input scales. The program makes these decisions for you automatically.
Scale Set Syntax
Scale Sets always start with a capital letter. Otherwise, they are written like Scales. If you create your own, you are free to assign them any name.
Syntax of Durations And Times
Several inspectors in Synfire allow for text input of time offsets and lengths (durations). These are notated as fractions, denoting a note length in a format that is easily understood by every musician. The shortest supported duration is 1/128. Durations shorter than that are denoted as MIDI ticks (see below).
1/1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128
With dotted lengths, each dot (asterisk) adds ½ of the previous length. Up to three dots are supported.
1/4*, 1/8**, 3/4***
Triplets
1/3, 1/6, 1/12, 1/24, 1/48, 1/96
Quintuplets
1/5, 1/10, 1/20, 1/40, 1/80
Of course triplets and quintuplets may be dotted, too. Odd tuplets like 1/7 and 1/9 are not currently supported, because the internal resolution can't represent them as integral numbers. This may change in a future version of Synfire.
The length
1/1 is equivalent to
4/4. It always
refers to four quarter notes regardless of time signature, e.g. in 3:4 time, a whole
note exceeds one measure. This is where you would use
1m or
2m to denote a number of measures. The actual length depends on
time signature currently in effect:
1m, 2m, 4m, 12m
Multiple expressions can be combined with
+ to break an odd length
down into smaller units:
2m+2/4 4m+1/4*
8m+1/2+240 | https://docs.cognitone.com/synfire/EN/references/Syntax.html | 2022-09-24T23:39:41 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.cognitone.com |
Rocky Series Release Notes¶
11.0.3¶
Bug Fixes¶
Erroneously, availability_zone for host aggregate resource types was considered mandatory in heat templates.
Behaviour has been adjusted to bring this in line with the CLI and GUI, in which it is optional.
11.0.2¶
Upgrade Notes¶
The distribution name has been changed from “heat” to “openstack-heat” so that we can publish packages to pypi.org. This may have an effect on downstream package builds if they rely on asking setuptools to determine the package name.
11.0.0
The ceilometer client plugin is no longer provided, due to the Ceilometer API no longer being available from Queens and the python-ceilometerclient library being unmaintained.
The database upgrade for Heat Queens release drops ‘watch_rule’ and ‘watch_data’ tables from the heat database..
Security Issues¶
Passwords generated by the OS::Heat::RandomString resource may have had less entropy than expected, depending on what is specified in the
character_classand
character_sequenceproperties. This has been corrected so that each character present in any of the specified classes or sequences now has an equal probability of appearing at each point in the generated random string.
Bug Fixes¶
Previously, when deleting a convergence stack, the API call would return immediately, so that it was possible for a client immediately querying the status of the stack to see the state of the previous operation in progress or having failed, and confuse that with a current status. (This included Heat itself when acting as a client for a nested stack.) Convergence stacks are now guaranteed to have moved to the
DELETE_IN_PROGRESSstate before the delete API call returns, so any subsequent polling will reflect up-to-date information.
Previously, the suspend, resume, and check API calls for all stacks, and the update, restore, and delete API calls for non-convergence stacks, returned immediately after starting the stack operation. This meant that for a client reading the state immediately when performing the same operation twice in a row, it could have misinterpreted a previous state as the latest unless careful reference were made to the updated_at timestamp. Stacks are now guaranteed to have moved to the
IN_PROGRESSstate before any of these APIs return (except in the case of deleting a non-convergence stack where another operation was already in progress).
Other Notes¶
Introduce a Blazar client plugin module that will be used by Blazar resources. | https://docs.openstack.org/releasenotes/heat/rocky.html | 2022-09-24T23:29:29 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.openstack.org |
Table of Contents
Linux Cloud Tools
last updated on Oct. 18, 2021
See also: Linux Cloud Tools Release Notes.
1 Downloading
Distributions can be downloaded here for CentOS, Debian/Ubuntu, Fedora and OpenSUSE.
Please ensure you download the correct package for your distribution.
The packages come with source so if you are using 64 bit Linux the Apps will be compiled for 64 bit.
Once downloaded you should be able to install with your relevant linux package manager by double clicking on the download.
2 Installing and Uninstalling
How to install on Ubuntu, Debian, Mint and other Linuxes based on Debian
1) Download DEB package
2) Open Terminal and go to the folder with the DEB package using cd command
3) Try to install the DEB package using the command (put your version number in this command):
sudo dpkg -i storagemadeeasy_<your_version_number>.deb
Note that this command may show errors if there is some missing dependencies. It will be fixed on next step.
4) Run next command to correct a system with broken dependencies:
sudo apt install -f
If there was some errors on previous step then this command will fix all that errors. Note that this command may work couple minutes.
How to uninstall on Ubuntu, Debian, Mint and other Linuxes based on Debian
1) Open Terminal
2) Run the command:
sudo dpkg --remove storagemadeeasy
How to install on CentOS or Fedora
1) Download RPM package for CentOS or Fedora
2) If the RPM package is archived then extract it from the archive
3) Open Terminal and go to the folder with the RPM package using cd command
4) If you are using CentOS then you need to enable EPEL repository if it's not enabled. Run next command:
sudo yum install epel-release
5) Install the RPM package using the command (put your version number in this command):
sudo yum install --nogpgcheck storagemadeeasy-<your_version_number>.noarch.rpm
Note that this command may run for a couple of minutes.
How to install on OpenSUSE
1) Download RPM package for OpenSUSE
2) Open Terminal and go to a folder with the RPM package using cd command
3) Install the RPM package using the command (put tools version number in this command):
sudo zypper install storagemadeeasy-<your_version_number>.noarch.rpm
Note that this command may run for a couple of minutes.
How to uninstall on CentOS, Fedora or OpenSUSE
1) Open Terminal
2) Run command:
sudo rpm -e storagemadeeasy
3 Post Install
If upgrading from a version prior to 5.1.3, then after install you will have new icons for the SME EFF Drive and SME EFF Sync Center.
These applications can also be run from the terminal:
- smeclient
- smeexplorer
- smesynccenter
- smemount (from the terminal only)
4 Mount folder with EFF Drive
Launching the EFF Drive when user is not signed in, you will need to follow the steps below:
- Specify proxy settings if needed
- Click on 'Sign In' button
- Sign in page will open in the browser, enter 'Sign in' credentials
- If 2FA is set up for the account, then enter 2FA code
- Once auth been completed, then you can close the browser login page and go back to the EFF Drive
- In the 'Current folder' field, specify a mount directory that must actually exist on your linux file system (The Linux convention is to use /mnt as the mount directory.)
- Click the 'Mount' button
- If the mount folder is not need anymore then it can be unmount by clicking the 'Unmount' button.
Note:
- If you are copying a large amount of files using the drive the access to the directory you mounted may slow down
- The folder is only mounted until Linux restart/shutdown
5 After Mount
6 Linux Cloud Drive
The linux cloud Drive will now show on your desktop and you should be able to double click to access it.
7 Accessing Linux Cloud Drive
You can navigate your files and folders from different clouds either by a GUI explorer tool or directly from the command line. Any files or folders that you drop into “My <cloud storage provider name>” will be uploaded / download to that storage provider. This makes changing storage providers as easy as changing directories.
Note that encrypted files are not shown from the Cloud Drive view. This is deliberate as there is no way at a drive level to ask the user for a password for the file.
8 Linux Sync Centre
The other Cloud Tool that is installed is the Linux Sync centre. Click the icon to launch the sync centre.
9 Linux Sync Centre Options
The Sync Centre is a sophisticated Desktop/Cloud synchronisation tool. It enables you to keep local files/folders in sync with files/folders stored on the Cloud. On first launch you will be asked if you want to sync with your underlying cloud. This is necessary to ensure the meta-data for your Cloud in the Storage Made Easy platform is up-to-date before initiating a sync (it is only not applicable if you are using the Storage Made Easy Cloud). If you are sure it is then you can click 'no' otherwise you should click 'yes'.
You can add files and folders to be Sync'd from your desktop to the cloud by clicking the “+” icon and mapping folders between the desktop and the cloud.
The.
10 Selecting files
After clicking to select files from the main sync centre you are able to choose which folders to select to sync.
11 First Sync
On first sync you can see whether files will be synchronised up to the cloud or down to the desktop from the arrows direction.
12 Sync in Progress
13 Sync Completed
Once the sync has completed you can see visually that there is files to sync as the arrows which define whether files are to be sync'd up or down turns to a square to represent an “in sync ” status
14 Additional Points to Note
File Sizes: For certain providers we are unable to get the file size after the first cloud sync. This is because the Cloud Storage Providers either do not provide file sizes or they store files in internal formats so the file size is unknown. The Linux File System file is then accessible as normal from the Linux OS.
Icon Warning: If you are in icon view rather than list view the file organiser pulls down data to build the icons. This can take longer to view the file lists until this caching is done.
You could choose to use a file manager such as Midnight Commander to get around this. Please see this blog article for further details.
Headless Mode: If you require the use of the Apps from the command line ie. in headless mode then please read this blog article. | https://docs.storagemadeeasy.com/linuxcloudtools | 2022-09-24T22:05:40 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.storagemadeeasy.com |
Known Issues
Below you find a non-exhaustive list of known issues, which you should be aware of while using the current version of the software. Most of these issues are not Raiden specific, but rather apply to all blockchain applications.
Compromised user system: If the system of the user is compromised and accessed by an attacker or if a malicious application is running, then the write-ahead logging (WAL) could be accessed and valuable information leaked through it, since the WAL is not yet encrypted as such: raiden-network/raiden#579
Disk Full: The client does not properly handle the cases where the user’s disk may be full. This could lead to a loss of data due to the Raiden node crashing. In the future, we want to handle the detection of a full disk and gracefully quit the app: raiden-network/raiden#675
Blockchain Congestion: If the blockchain is congested and there is no space for the Raiden node to submit transactions on-chain, the client could end up being unable to settle the channel on-chain. The development of a gas slot based settlement timeout definition has been suggested in order to address blockchain congestion: raiden-network/raiden#383
Chain reorganizations: The client used to have an issue with edge cases of chain reorganizations. These issues have been hot fixed by only polling events that are confirmed for 5 blocks. Same applies to processing transactions, which are assumed to be valid only after a confirmation period of 5 blocks. This results in 15 blocks wait time for opening a channel (three on-chain transactions).
Database Upgrades
The database layout can change between versions. For patch and minor change releases, Raiden will do automatic upgrades in the background, as soon as you start the client with the new version. Your old database will always be kept as a backup.
However, for major change releases, this may not always be possible. If the client tells you, that the database migration was not possible, you are left with two options:
to recover all allocated funds, you should run the previous version of Raiden and close and settle all channels. Afterwards you can move your old database directory out of the way and join the network again with the same Ethereum keystore file.
if there are no funds worth recovering, you can also simply start over with a new Ethereum keystore file and start over. This should be true for all usage on testnet and in cases where your channel balances are very low on mainnet. Don’t hesitate to ask for help! | https://raiden-network.readthedocs.io/en/latest/other/known-issues.html | 2022-09-24T23:26:57 | CC-MAIN-2022-40 | 1664030333541.98 | [] | raiden-network.readthedocs.io |
Calculate DAU and MAU
One way to evaluate your app’s “stickiness” is to compare its daily active users (DAU) and monthly active users (MAU).
The ratio of DAU/MAU represents your average daily active user count / monthly active user count. The closer your ratio is to 1, the more often users are using your app, rather than logging in only sporadically.
This article demonstrates the following:
- How to build custom actor properties for daily active users and monthly active users.
- How to analyze the actor properties in Explore.
This article uses the Interana loopback usage data set.
Build the custom actor properties
To analyze DAU/MAU, create two custom actor properties:
Define daily activity and monthly activity using the same criteria except for the trailing time window.
You can create an actor property in one of two ways:
- To create an actor property that you want to save, reuse, and potentially share with other users, follow all of the instructions below.
- To create an actor property that you can use only with a single top-level query, define the property in the query palette (see Create a property in Explore with the query palette) when you define the top-level query. You will still follow steps 4-9 below.
To build a Daily Activity actor property, do the following:
- In the left menu bar, click Data, then click the Actor Properties tab.
- In the top right corner of the window, click +New Actor Property.
- At the top of the page, enter a name for the property. We named our property queries_per_user_t1d (instead of something like Daily Activity) to help someone looking at this query later quickly see what criteria we are using to measure daily activity (see step 7).
- In the left definition pane, select a user actor (or use the default). In our example, we select username.
- Accept the default method, Show, and the default aggregation, count of events.
- At Filtered to, click all events. From the dropdown, select the event property or combination of event properties that define a user action. In our example we select the action event_name. The surrounding text auto updates to say "Filtered to events with event_name that matches..."
- In the new dropdown, select the actions you want to filter to, adding additional actions with the plus sign. Select the filtered to actions for the property. Click the plus sign to choose multiple actions. For our example, we chose only one action, run_query.
For our usage loopback table data, we consider a user to be daily active if they run at least one query. This is why we filter our actor property to event_name matches run_query. Choose activities appropriate for your dataset.
- Click + time options and accept the default of Trailing window: trailing 1 day. See Specify time in a query for more information about trailing windows.
- Click Go to see your actor property results.
- Click Save.
To build a Monthly Activity actor property, do the following:
- At the top of the daily activity actor property data model page, click the duplicate icon.
This creates a copy of the daily activity actor property definition and is especially helpful if you have defined activity using complicated criteria.
- At the top of the page, enter a name for the property. We named our property queries_per_user_t28d (instead of something like Monthly Activity) to help someone looking at this query later quickly see the criteria (queries per user) and time range (28 days) we're using to define monthly activity.
- If you duplicated the daily activity actor property, you can accept the settings at Method, Show, and Filtered to. These correspond to steps 4-7 above.
- Click + time options.
- Next to Trailing, click 1 day to open the dropdown. Type 28 and select 28 days. See Specify time in a query for more information about trailing windows. In our example we use 28 days instead of 1 month, because 28 days has a consistent number of weekend and weekday days, regardless of starting date.
- Click Go to see your monthly activity actor property results.
- Click Save.
Construct the top-level query
To compare DAU and MAU, build a query in Explore. For example, your end goal might be something like this:
To create a query with DAU and MAU on the same chart:
- Navigate to the daily activity actor property (called queries_per_user_t1d in our example).
- From the daily activity actor property, click Explore at the top left.
- Click measure 1 and edit the name to DAU.
- Under Filtered to, click all username actors (or whatever your actor field is named) and select your daily activity actor property from the dropdown. For our example (where our daily activity property is named queries_per_user_t1d), the text automatically updates to read Filtered to username actors with queries_per_user_t1d that is greater than 0. Change the minimum criteria to suit your analysis needs.
- Click +measure to start creating the MAU definition.
- Click measure 2 and edit the name to MAU.
- Repeat the selections from the DAU measure above, substituting your monthly activity actor property in place of your daily activity actor property. In our example, our monthly activity actor property is called queries_per_user_t28d.
- Next to Split by, click the trash icon. The text now reads Split by none.
- Optional: To calculate DAU/MAU:
- Click +measure to add a measure 3. Name it DAU/MAU.
- Click count and type an equals sign.
- Click a left bracket [ to see a list of the measures defined above.
- Enter DAU and MAU separated by a slash. See Calculate measures and filters for information about building expressions.
- Click Go to run the query.
- Click Pin at the top right to pin the query to a board for future use. | https://docs.interana.com/User's_Guide/Enrich_your_data_with_properties/Calculate_DAU_and_MAU | 2020-09-18T23:06:07 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['https://docs.interana.com/@api/deki/files/3721/Screen_Shot_2020-03-17_at_2.31.06_PM.png?revision=1',
'Screen Shot 2020-03-17 at 2.31.06 PM.png'], dtype=object) ] | docs.interana.com |
4.5 Release notes
Interana version 4.5 contains the following:
- New features
- Additional improvements
- Resolved issues
- Additional changes available in Maintenance releases below.
New features
Interana 4.5 includes the following new features:
- Clickable text in the UI is now marked with a pill treatment.The pill color indicates whether the value is default (clear pill) or whether a user has selected the value (shaded pill).
- In the Explore query builder only (not in the new property workflow), the Show, Calculate, and Use saved measure selector has been removed from the first position in the sentence.
- Show is now implicit when used with the dropdown list of aggregations.
- For Calculate, type an equals sign in the aggregations dropdown to access the expression builder.
- While building an expression, type a left bracket [ to access saved properties or measures defined locally
- You can now save board filters. See Save variants of a board with board filters in the User's Guide.
- Panels in boards can be resized.
- Improvements to privacy purge processes (for example, for GDPR compliance). In addition to the legacy ad hoc method of starting a privacy purge, you can now use an automated pipeline.
Additional improvements
Interana 4.5 includes the following additional improvements:
- The retention app now lets you choose between first and first ever. First refers to the first time that an event occurred within the time window of the retention query. First ever refers to the first time the event occurred, even if the event is outside the retention window.
- New flows have a default global timeout.
- Flow timeouts have more realistic options in the dropdown. You can still enter any value by typing. See Understand flow definition conditions in the User's Guide.
- You can now export data as a CSV file from time view, in addition to already being able to export from table view, the distribution app, and the sample data section. Measure and group values are now formatted the same way as in the charts and honor chart options formatting.
A/B view results are now bolded to correspond to significant measures (p<0.05), and the table includes a text legend.
UI improvements:
You can now right-click on "Explore/View" text in a panel or on the object name in the list view to open it in a new tab. Note that right-clicking requires you to click exactly on the text, whereas left-clicking works with less precise placement.
Fields where you can input text throughout the UI have been updated. Input boxes more clearly indicate that you can update the placeholder text. In panel, chart, and board name editing, a new component indicates when changes are ready to be saved.
Improvements to internal logging:
- Structured log events in the usage loopback table that record user interactions with boards now include the board name, as boards.title.
- Structured logs in the usage loopback table now report API calls (run query sync response id), including those made with BAQL (run query sync bql).
- Structured logs in the usage loopback table now correctly record the process_id from the query API server.
- Note that structured log improvements are only for events that occur after the upgrade, not events that happened before.
Resolved issues
Resolve issues in apps:
- New messaging in distribution view alerts you to a scope conflict when you distribute an event property and select an actor property. See Understand scope in the User's Guide for more information.
- Fixed a bug in which editing an existing A/B view analysis caused the first measure to not display.
- Fixed a bug in which the retention view time unit 0 did not start at 100% when a user's first event was filtered and split by actor property.
- Retention view Start date now displays in the time zone that the rest of the UI is set to use.
- Fixed a bug in retention view in which Chart options > Format > URL was not persisting.
Resolved UI fixes:
Fixed a bug in which some UI elements did not appear on MS Edge browser.
Fixed an issue with chart hover cards not displaying in some versions of MS Edge
Clicking a link when you set a split-by field as URL in chart controls now opens a new tab.
Fixed broken alignment of buttons in admin role management page when Announcement Banner is displayed
Fixed issues where Sample data section and Download CSV button in Explore were clickable over a wider area than intended.
Fixed an issue preventing you from using multiple lookup properties with the same name joined to separate event properties.
In the admin view:
- Fixed slowness when adding a dataset with large number of columns to the available datasets list in Admin > User management.
- Fixed an error when searching for panels in admin view.
In Explore:
- Fixed an issue that was causing the table visualization to crash when displaying a measure that returns a timestamp, with one or more split bys, transposing the table.
- In Explore, in the results pane, the chart name now correctly persists after you click Go.
Flows:
- Fixed an issue in a flow definition preventing you from using a split by in an order by.
- Fixed an issue in which splitting by "events leading up to this state" failed.
- Fixed an issue in which flow preview legend key said "count unique values of <flow>" even when counting actors in a flow.
- Added validation in Query Palette to complain when there's an invalid flow step as start or destination step.
- Fixed an issue in which creating a flow, splitting by an event path, and displaying in a bar chart displayed inaccurate information. Splitting by flow path was unaffected.
Boards and panels:
- Fixed issue preventing the editing of a newly duplicated panel.
- Fixed issue preventing the editing of chart controls from a pinned panel.
Maintenance releases
Interana version 4.5 has the following maintenance releases:
- 4.5.1, available March 11, 2020
- 4.5.2, available March 13, 2020
- 4.5.3, available March 17, 2020.
- 4.5.4, available March 20, 2020.
- 4.5.5, available March 20, 2020.
Version 4.5.1 maintenance release
Interana version 4.5.1 contains the following changes:
- A new object permission, query, in addition to the existing read and write permissions. This lets you share an object and all its associated dependent objects without populating other users' typeahead lists. See Manage users and roles with RBAC in the Admin Guide and Share an object with other users in the User's Guide.
- The expression builder now supports a limited IF function. This function does not currently support strings or validate that the "then" and "else" values have the same type, or recognize "enum" values like "January", "Monday", flow step names, or termination reasons. See Calculate measures and filters with the expression builder in the User's Guide.
- Resolved an issue introduced in 4.5.0 with searching for boards in the board list.
- Resolved an issue with CSV export of data. Now:
- Exporting timestamp data shows human readable format.
- Exporting DAY_OF_WEEK data shows human readable days (e.g., sunday, monday) rather than epoch time.
- Exporting a distribution query split by ts or DAY_OR_WEEK now shows correct headers.
- Exporting a multi measure query with split bys, and stacked by splits, now shows correctly formatted headers.
Version 4.5.2 maintenance release
Interana version 4.5.2 contains the following changes:
- Resolved an issue preventing non-numeric aggregators from being used outside Table view, including on existing panels with number view.
- Resolved an issue where flow step names were not displaying in panels.
- Resolved an issue preventing a property using the IF function from being saved.
- Resolved an issue with URL formatting in CSV export.
- Resolved an issue with typing (not selecting) "auto" in the Ending time dropdown.
- Resolved an issue navigating from the new flow builder to Data Model.
Version 4.5.3 maintenance release
Improvements
Interana version 4.5.3 includes the following improvements:
- The IF command is now fully featured, including:
- Validating that the "then" and "else" types match.
- Supporting static value literals.
- Improved property suggestions.
- The query server timeout is now configurable. An ia_admin can configure it using the query_api query_timeout_ms setting.
Resolved issues
Interana version 4.5.3 includes the following resolved issues:
- Resolved an issue using a custom event property that refers to a lookup property in a pre-filter.
- Resolved an issue in Distribution View where the x-axis was being cut off if you zoom in.
- Resolved an issue where simply clicking the description of a board caused the board to display that you were the last to update the board.
- Added splitby headers to CSV export from Table view.
- Fixed an issue in AB View where deleting an experiment group then adding a new one introduced red shading to the new experiment.
Version 4.5.4 maintenance release
Improvements
Interana maintenance release 4.5.4 contains the following improvements:
- Messaging in the UI indicates when a data whale was filtered out.
- The HTML Title of an Interana browser window now identifies location path in the Interana UI. This helps you distinguish between:
- Several open tabs in a browser.
- Several entries in the browser history.
- The number of query-api-server processes is now configurable through an application setting num_processes.
Resolved issues
Interana maintenance release 4.5.4 contains the following resolved issues:
Version 4.5.5 maintenance release
Interana version 4.5.5 contains one resolved issue:
Resolved an issue causing clusters with a large data whale to overwhelm and potentially block data ingest. | https://docs.interana.com/3.x%2F%2F4.x_Release_Notes/072_4.5_Release_notes | 2020-09-19T00:18:15 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.interana.com |
The
y read-only property of the
LinearAccelerationSensor interface returns a double precision integer containing the acceleration of the device along the device's y axis.
If a feature policy blocks use of a feature it is because your code is inconsistent with the policies set on your server. This is not something that would ever be shown to a user. See
Feature-Policy for implementation instructions.
var yAcceleration = accelerometer.y();
© 2005–2018 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later. | https://docs.w3cub.com/dom/linearaccelerationsensor/y/ | 2020-09-18T23:17:29 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.w3cub.com |
Transport¶
- class
oslo_messaging.
Transport(driver)¶
A messaging transport.
This is a mostly opaque handle for an underlying messaging transport driver.
RPCs and Notifications may use separate messaging systems that utilize different drivers, access permissions, message delivery, etc. To ensure the correct messaging functionality, the corresponding method should be used to construct a Transport object from transport configuration gleaned from the user’s configuration and, optionally, a transport URL.
The factory method for RPC Transport objects:
def get_rpc_transport(conf, url=None, allowed_remote_exmods=None)
If a transport URL is supplied as a parameter, any transport configuration contained in it takes precedence. If no transport URL is supplied, but there is a transport URL supplied in the user’s configuration then that URL will take the place of the URL parameter.
The factory method for Notification Transport objects:
def get_notification_transport(conf, url=None, allowed_remote_exmods=None)
If no transport URL is provided, the URL in the notifications section of the config file will be used. If that URL is also absent, the same transport as specified in the user’s default section will be used.
The Transport has a single ‘conf’ property which is the cfg.ConfigOpts instance used to construct the transport object.
- class
oslo_messaging.
TransportURL(conf, transport=None, virtual_host=None, hosts=None, query=None)¶
A parsed transport URL.
Transport URLs take the form:
driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
where:
- driver
Specifies the transport driver to use. Typically this is rabbit for the RabbitMQ broker. See the documentation for other available transport drivers.
- [user:pass@]host:port
Specifies the network location of the broker. user and pass are the optional username and password used for authentication with the broker.
- user and pass may contain any of the following ASCII characters:
Alphabetic (a-z and A-Z)
Numeric (0-9)
Special characters: & = $ - _ . + ! * ( )
user may include at most one @ character for compatibility with some implementations of SASL.
All other characters in user and pass must be encoded via ‘%nn’
You may include multiple different network locations separated by commas. The client will connect to any of the available locations and will automatically fail over to another should the connection fail.
- virtual_host
Specifies the “virtual host” within the broker. Support for virtual hosts is specific to the message bus used.
- query
Permits passing driver-specific options which override the corresponding values from the configuration file.
- Parameters
conf (oslo.config.cfg.ConfigOpts) – a ConfigOpts instance
transport (str) – a transport name for example ‘rabbit’
virtual_host (str) – a virtual host path for example ‘/’
hosts (list) – a list of TransportHost objects
query (dict) – a dictionary of URL query parameters
- classmethod
parse(conf, url=None)¶
Parse a URL as defined by
TransportURLand return a TransportURL object.
Assuming a URL takes the form of:
transport://user:pass@host:port[,userN:passN@hostN:portN]/virtual_host?query
then parse the URL and return a TransportURL object.
Netloc is parsed following the sequence bellow:
It is first split by ‘,’ in order to support multiple hosts
All hosts should be specified with username/password or not at the same time. In case of lack of specification, username and password will be omitted:
user:pass@host1:port1,host2:port2 [ {"username": "user", "password": "pass", "host": "host1:port1"}, {"host": "host2:port2"} ]
If the url is not provided conf.transport_url is parsed instead.
- Parameters
conf (oslo.config.cfg.ConfigOpts) – a ConfigOpts instance
url (str) – The URL to parse
- Returns
A TransportURL
- class
oslo_messaging.
TransportHost(hostname=None, port=None, username=None, password=None)¶
A host element of a parsed transport URL.
oslo_messaging.
set_transport_defaults(control_exchange)¶
Set defaults for messaging transport configuration options.
- Parameters
control_exchange (str) – the default exchange under which topics are scoped
Forking Processes and oslo.messaging Transport objects¶
oslo.messaging can’t ensure that forking a process that shares the same transport object is safe for the library consumer, because it relies on different 3rd party libraries that don’t ensure that. In certain cases, with some drivers, it does work:
rabbit: works only if no connection have already been established.
amqp1: works | https://docs.openstack.org/oslo.messaging/train/reference/transport.html | 2020-09-19T00:33:09 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.openstack.org |
public interface SchedulingTaskExecutor extends AsyncTaskExecutor
TaskExecutorextension exposing scheduling characteristics that are relevant to potential task submitters.
Scheduling clients are encouraged to submit
Runnables that match the exposed preferences
of the
TaskExecutor implementation in use.
Note: | https://docs.spring.io/spring-framework/docs/5.0.3.RELEASE/javadoc-api/org/springframework/scheduling/SchedulingTaskExecutor.html | 2020-09-19T00:45:56 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.spring.io |
Networking and Security Requirements
CDH and Cloudera Manager Supported Transport Layer Security Versions
The following components are supported by the indicated versions of Transport Layer Security (TLS): Configure in Data at Rest Encryption Requirements.
- Cluster hosts must have a working network name resolution system and correctly formatted
/etc/hostsfile. All cluster hosts must have properly configured forward and reverse host resolution through DNS. The
/etc/hostsfiles must:
- Contain consistent information about hostnames and IP addresses across all hosts
- Not contain uppercase hostnames
- Not contain duplicate IP addresses
Cluster hosts must not use aliases, either in
/etc/hostsor in configuring DNS. A properly formatted
/etc/hostsfile
sudopermission. For authentication during the installation and upgrade procedures, you must either enter the password or upload a public and private key pair for the
root.
- The Cloudera Manager Agent runs as
rootso that it can make sure that the required directories are created and that processes and files are owned by the appropriate user (for example, the
hdfsand
mapredusers).
- Security-Enhanced Linux (SELinux) must not block Cloudera Manager or Runtime operations.
- Firewalls (such as
iptablesand
firewalld) must be disabled or configured to allow access to ports used by Cloudera Manager, Runtime, and related services.
- For RHEL and CentOS, the
/etc/sysconfig/networkfile on each host must contain the correct hostname.
- Cloudera Manager and Runtime, Runtime, and managed services create and use the following accounts and groups: | https://docs.cloudera.com/cdp-private-cloud/latest/release-guide/topics/cdpdc-networking-security-requirements.html | 2020-09-19T00:44:21 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.cloudera.com |
Notification Listener¶
A notification listener is used to process notification messages sent by a
notifier that uses the
messaging driver.
A notification listener subscribes to the topic - and optionally exchange - in the supplied target. Notification messages sent by notifier clients to the target’s topic/exchange are received by the listener.
A notification listener exposes a number of endpoints, each of which contain a
set of methods. Each method’s name corresponds to a notification’s priority.
When a notification is received it is dispatched to the method named like the
notification’s priority - e.g.
info notifications are dispatched to the
info() method, etc.
Optionally a notification endpoint can define a NotificationFilter. Notification messages that do not match the filter’s rules will not be passed to the endpoint’s methods.
Parameters to endpoint methods are: the request context supplied by the client, the publisher_id of the notification message, the event_type, the payload and metadata. The metadata parameter is a mapping containing a unique message_id and a timestamp.
An endpoint method can explicitly return oslo_messaging.NotificationResult.HANDLED to acknowledge a message or oslo_messaging.NotificationResult.REQUEUE to requeue the message. Note that not all transport drivers implement support for requeueing. In order to use this feature, applications should assert that the feature is available by passing allow_requeue=True to get_notification_listener(). If the driver does not support requeueing, it will raise NotImplementedError at this point.
The message is acknowledged only if all endpoints either return oslo_messaging.NotificationResult.HANDLED or None.
NOTE: If multiple listeners subscribe to the same target, the notification will be received by only one of the listeners. The receiving listener is selected from the group using a best-effort round-robin algorithm.
This delivery pattern can be altered somewhat by specifying a pool name for the listener. Listeners with the same pool name behave like a subgroup within the group of listeners subscribed to the same topic/exchange. Each subgroup of listeners will receive a copy of the notification to be consumed by one member of the subgroup. Therefore, multiple copies of the notification will be delivered - one to the group of listeners that have no pool name (if they exist), and one to each subgroup of listeners that share the same pool name.
NOTE WELL: This means that the Notifier always publishes notifications to a non-pooled Listener as well as the pooled Listeners. Therefore any application that uses listener pools must have at least one listener that consumes from the non-pooled queue (i.e. one or more listeners that do not set the pool parameter.
Note that not all transport drivers have implemented support for listener pools. Those drivers that do not support pools will raise a NotImplementedError if a pool name is specified to get_notification_listener().
Each notification listener is associated with an executor which controls how incoming notification messages will be received and dispatched. Refer to the Executor documentation for descriptions of the other types of executors.
Note: If the “eventlet” executor is used, the threading and time library need to be monkeypatched.
Notification listener have start(), stop() and wait() messages to begin handling requests, stop handling requests, and wait for all in-process requests to complete after the listener has been stopped.
To create a notification listener, you supply a transport, list of targets and a list of endpoints.
A transport can be obtained simply by calling the get_notification_transport() method:
transport = messaging.get_notification_transport(conf)
which will load the appropriate transport driver according to the user’s messaging configuration. See get_notification_transport() for more details.
A simple example of a notification listener with multiple endpoints might be:
from oslo_config import cfg import oslo_messaging class NotificationEndpoint(object): filter_rule = oslo_messaging.NotificationFilter( publisher_id='^compute.*') def warn(self, ctxt, publisher_id, event_type, payload, metadata): do_something(payload) class ErrorEndpoint(object): filter_rule = oslo_messaging.NotificationFilter( event_type='^instance\..*\.start$', context={'ctxt_key': 'regexp'}) def error(self, ctxt, publisher_id, event_type, payload, metadata): do_something(payload) transport = oslo_messaging.get_notification_transport(cfg.CONF) targets = [ oslo_messaging.Target(topic='notifications'), oslo_messaging.Target(topic='notifications_bis') ] endpoints = [ NotificationEndpoint(), ErrorEndpoint(), ] pool = "listener-workers" server = oslo_messaging.get_notification_listener(transport, targets, endpoints, pool=pool) server.start() server.wait()
By supplying a serializer object, a listener can deserialize a request context and arguments from primitive types.
oslo_messaging.
get_notification_listener(transport, targets, endpoints, executor='blocking', serializer=None, allow_requeue=False, pool=None)¶
Construct a
- Raises
NotImplementedError
oslo_messaging.
get_batch_notification_listener(transport, targets, endpoints, executor='blocking', serializer=None, allow_requeue=False, pool=None, batch_size=None, batch_timeout=None)¶
Construct a batch
batch_size (int) – number of messages to wait before calling endpoints callacks
batch_timeout (int) – number of seconds to wait before calling endpoints callacks
- Raises
NotImplementedError | https://docs.openstack.org/oslo.messaging/train/reference/notification_listener.html | 2020-09-19T00:36:54 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.openstack.org |
For optimum performance, you must match the minimum recommendations for the deployment.
Recommendations for the Platform Deployment
Note:
- The reservation for the CPU speed and RAM for each node must be 100% of the value specified above.
- To match your setup to all the specifications, you might have to add the resources (RAM, Disk, CPU). See and Increase the Brick Size of your Setup.
Note:
- The count of VMs includes the templates on the vCenter as well.
- Total Flows is the maximum count of flows the system can store for the retention period.
- Flow Planning is the total flows for which the system can perform security planning.
Note:
- The count of VMs includes the templates on the vCenter as well.
- Cluster size is the total number of nodes in the cluster.
- Total Flows is the count of flows in the system for the retention period.
- The query to determine the Total Flows is
count of flows in last 31 days, assuming the retention period as 31 days.
- Flow Planning is the total flows for which the system can perform security planning.
Recommendation for the Collector Deployment
Note: The reservation for the CPU speed and RAM for each node must be 100% of the value specified above.
Note:
- The count of VMs includes the templates on the vCenter as well.
- For a single deployment with more than one collector, the limitation on the total flows across collectors is based on the capacity of the platform. | https://docs.vmware.com/en/VMware-vRealize-Network-Insight/5.2/com.vmware.vrni.install.doc/GUID-F4F34425-C40D-457A-BA65-BDA12B3ABE45.html | 2020-09-19T00:48:43 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.vmware.com |
AddSourceIdentifierToSubscription
Adds a source identifier to an existing event notification subscription.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- SourceIdentifier
The identifier of the event source to be added.
Constraints:.
Type: String
Required: Yes
- SubscriptionName
The name of the event notification subscription you want to add a source identifier to..
- SourceNotFound
The source could not be found.
HTTP Status Code: 404
- SubscriptionNotFound
The designated subscription could not be found.
HTTP Status Code: 404
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/neptune/latest/apiref/API_AddSourceIdentifierToSubscription.html | 2020-09-18T23:16:22 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.aws.amazon.com |
- General troubleshooting
- What does
coordinatormean?
- Where are logs stored when run as a service?
- Run in
--debugmode
- Enable debug mode logging in
config.toml
- Enable debug mode logging for the Helm Chart
- I’m seeing
x509: certificate signed by unknown authority
- I get
Permission Deniedwhen accessing the
/var/run/docker.sock
- The Docker executor gets timeout when building Java project
- I get 411 when uploading artifacts
warning: You appear to have cloned an empty repository.
zoneinfo.zip: no such file or directoryerror when using
Timezoneor
OffPeakTimezone
- Why can’t I run more than one instance of Runner?
Job failed (system failure): preparing environment:
-
- Docker executor:
unsupported Windows Version
- I’m using a mapped network drive and my build cannot find the correct path
- macOS troubleshooting
Troubleshoot GitLab Runner
This section can assist when troubleshooting GitLab Runner.
General troubleshooting
The following relate to general Runner troubleshooting./macOS the daemon logs to syslog.
- If the GitLab Runner is run as a service on Windows, the logs are system event logs. To view them, open the Event Viewer (from the Run menu, type
eventvwr.mscor search for “Event Viewer”). Then go to Windows Logs > Application. The Source for Runner logs is
gitlab-runner. GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic ## ref: ## logLevel: debug.
zoneinfo.zip: no such file or directory error when using configuration configuration file can cause
unexpected and hard-to-debug behavior. In
GitLab Runner 12.2,
only a single instance of 1909 (OS Build 18363.720)
Docker 17.06.2 on Windows Server 1909 returns the following in the output
of
docker info.
Operating System: Windows Server Datacenter
The fix in this case is to upgrade the Docker version. Read more about supported Docker versions..
fatal: unable to access '': Failed to connect to path port 3000: Operation timed out error in the job
If one of the jobs fails with this error, make sure the Runner can connect to your GitLab instance. The connection could be blocked by things like:
- firewalls
- proxies
- permissions
- routing configurations | https://docs.gitlab.com/runner/faq/README.html | 2020-09-19T00:42:41 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.gitlab.com |
In the setup node, there are two ways you can add an image, right-click in the setup window, hover over UI and select Image or press the image button from the setup tab in the The Ribbon after simulating the lesson.
That will create an image place holder in both the editor window and the player window. Select the Image in the setup node and look at the properties tab. You will notice a set of property menus, for now we are only interested in the Image property menu. Scroll in the properties till you find it.
To get the imported image to populate in the image place holder, drag the imported image from the Assets Tab to the image property menu under the Image asset slot in the image above. Toggle the use actual size button to use the actual size of the image.
After hitting Simulate, you will see the image in the player window, to see the image in the editor window hit the UI button in the editor window. Now, lets look at the other property menus.
Layout Menu: controls the 2D properties of your image.
To parent the image to a 3D model. For this we need a 3D model. Import a Model and Populate it in the Editor Tab. After simulating.
Appearance menu, Use the appearance menu to edit the appearance of the image. | https://docs.modest3d.com/Editor_Documentation/Nodes/Setup_Node/Image | 2020-09-19T00:02:49 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.modest3d.com |
CNMFE¶
Perform CNMFE using the implementation provided by the CaImAn library. CNMF-E demo notebook, the implementation in Mesmerize is basically from the demo
-
Parameters
Ain: Seed spatial components from another CNMFE item by entering its UUID here.
Please see the CaImAn demo notebook mentioned above to understand the rest of.
Usage¶
This module creates two types of batch items, one where you can inspect the Correlation & PNR images and another that performs CNMFE and extracts components. Here is an outline of typical usage:
Enter a gSig parameter value and a name for “Inspect Correlation and PNR”, the text entry for “Stop here”. Click “Add to batch”. Run the batch item.
Double-click the batch item, you will be presented with a GUI to help optimize min_corr and min_pnr. For the correlation image use the vmin slider to optimize the seperation of cells and set the min_corr parameter to this value. Likewise, optimize the value for the PNR until the PNR image mostly contains regions that show real signal and no or few regions that are likely to be just noise and set this vmin value as the min_pnr parameter. You may need to try slightly different variations to optimize the parameters.
Enter the rest of the parameters and give a name under “Perform CNMF-E”, click “Add to batch” and run the item.
Double-click the batch item and you will be presented with 3 options. The first option will display the correlation-pnr images and the second option is currently non-functional (matplotlib Qt issue). The last option will import the components extracted by CNMFE into an open Viewer. The components are managed by the ROI Manager.
See also
See also
This modules uses the Batch Manager.
Note
The parameters used for CNMFE are stored in the work environment of the viewer and this log is carried over and saved in Project Samples as well. To see the parameters that were used for CNMFE in the viewer, execute
get_workEnv().history_trace in the viewer console and look for the ‘cnmfe’ entry.
Script Usage¶
A script can be used to add CNMFE batch items. This is much faster than using the GUI.
See also
Add Corr PNR items¶
Add Corr PNR batch items from a batch that contains motion corrected items. This example add 2 variants of parameters (just gSig) for each motion corrected item.
See also
This example uses the Caiman CNMFE module API and Batch Manager API
See also
Caiman Motion Correction script usage examples for how to load images if you want to add Corr PNR items from images that are not in a batch. | http://docs.mesmerizelab.org/en/v0.2.3/user_guides/viewer/modules/cnmfe.html | 2020-09-18T22:36:35 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['../../../_images/cnmfe1.png', '../../../_images/cnmfe1.png'],
dtype=object)
array(['../../../_images/corr_pnr_img.png',
'../../../_images/corr_pnr_img.png'], dtype=object)] | docs.mesmerizelab.org |
Stimulus Mapping¶
Video Tutorial¶
This tutorial shows how to create a New Project, open images in the Viewer, use the Stimulus Mapping module and perform Caiman motion correction
Map temporal information such as stimulus or behavioral periods.
Stimulus Mapping Module
Stimulus periods illustrated on the viewer timeline
The tabs that are available in the stimulus mapping module corresponds to the stimulus types in your Project Configuration.
You can add stimulus periods either manually or through a script.
Manual Annotation¶
To add a stimulus manually click the “Add Row” button. This will add an empty row to the current tab page.
Enter a name for the stimulus, start time, end time, and pick a color for illustrating the stimulus periods on the Viewer timeline.
To remove a stimulus click the “Remove stim” button. Stimulus periods do not have to be added in chronological order.
Click “Set all maps” to set the mappings for all stimulus types. You can then choose to illustrate a stimulus on the viewer timeline by selecting it from “Show on timeline”
Import and Export are not implemented yet.
Warning
At the moment, only “frames” are properly supported for the time units.
Note
It is generally advisable to keep your stimulus names short with lowercase letters. When sharing your project you can provide a mapping for all your keys. This helps maintain consistency throughout your project and makes the data more readable.
Script¶
See also
You can also use the Stimulus Mapping module’s API to set the stimulus mappings from a pandas DataFrame.
This example creates a pandas DataFrame from a csv file to set the stimulus mappings. It uses the csv file from the pvc-7 dataset availble on CRCNS:
You can also download the csv here:
stimulus_pvc7.csv | http://docs.mesmerizelab.org/en/v0.2.3/user_guides/viewer/modules/stimulus_mapping.html | 2020-09-18T23:26:42 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['../../../_images/stim_maps_module.png',
'../../../_images/stim_maps_module.png'], dtype=object)
array(['../../../_images/stim_maps_viewer.png',
'../../../_images/stim_maps_viewer.png'], dtype=object)] | docs.mesmerizelab.org |
New Tethys App Project¶
Last Updated: November earth_engine-earth_engine.
earthengine-api and
oauthclient packages to allow it to use Google Earth Engine services. Both packages are available on
conda-forge, which is the preferred Conda channel for Tethys. Open
tethysapp-earth_engine/install.yml and add these dependencies to the
requirements.conda section of the file:
# This file should be committed to your app code. version: 1.0 # This should match the app - package name in your setup.py name: earth_engine requirements: # Putting in a skip true param will skip the entire section. Ignoring the option will assume it be set to False skip: false conda: channels: - conda-forge packages: - earthengine-api - oauth2client pip: post:
3. Development Installation¶
Install the app and it's dependencies into your development Tethys Portal. In a terminal, change into the
tethysapp-earth_engine directory and execute the tethys install -d command.
cd ~/tethysdev/tethysapp-earth_engine tethys install -d
4. Customize App Icon and Theme Color¶
Download this
Google Earth Engine App Icon or find one that you like and save it to the
public/images directory. Modify the
icon property of your app class to reference the new image. Also change the
color property to the
#524745 color:
class EarthEngine(TethysAppBase): """ Tethys app class for Google Earth Engine Tutorial. """ name = 'Google Earth Engine Tutorial' index = 'earth_engine:home' icon = 'earth_engine/images/earth-engine-logo.png' package = 'earth_engine' root_url = 'earth-engine' color = '#524745' ... grey (see screenshot at the beginning of the tutorial).
6. Solution¶
This concludes the New App Project portion of the GEE Tutorial. You can view the solution on GitHub at or clone it as follows:
git clone cd tethysapp-earth_engine git checkout -b new-app-project-solution new-app-project-solution-3.2 | http://docs.tethysplatform.org/en/latest/tutorials/google_earth_engine/part_1/new_app_project.html | 2020-09-18T22:34:56 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.tethysplatform.org |
NAT
Policy Overview
You configure a NAT rule to match a packet’s source zone and destination zone, at a minimum. In addition to zones, you can configure matching criteria based on the packet’s destination interface, source and destination address, and service. You can configure multiple NAT rules. The firewall evaluates the rules in order from the top down. Once a packet matches the criteria of a single NAT rule, the packet is not subjected to additional NAT rules. Therefore, your list of NAT rules should be in order from most specific to least specific so that packets are subjected to the most specific rule you created for them.
Static NAT rules do not have precedence over other forms of NAT. Therefore, for static NAT to work, the static NAT rules must be above all other NAT rules in the list on the firewall.
NAT rules provide address translation, and are different from security policy rules, which allow or deny packets. It is important to understand the firewall’s flow logic when it applies NAT rules and security policy rules so that you can determine what rules you need, based on the zones you have defined. You must configure security policy rules to allow the NAT traffic.. Finally, upon egress, for a matching NAT rule, the firewall translates the source and/or destination address and port numbers.
Keep in mind that the translation of the IP address and port do not occur until the packet leaves the firewall. The NAT rules and security policies apply to the original IP address (the pre-NAT address). A NAT rule is configured based on the zone associated with a pre-NAT IP address.
Security policies differ from NAT rules because security policies examine post-NAT zones to determine whether the packet is allowed or not. Because the very nature of NAT is to modify source or destination IP addresses, which can result in modifying the packet’s outgoing interface and zone, security policies are enforced on the post-NAT zone.
A SIP call sometimes experiences one-way audio when going through the firewall because the call manager sends a SIP message on behalf of the phone to set up the connection. When the message from the call manager reaches the firewall, the SIP ALG must put the IP address of the phone through NAT. If the call manager and the phones are not in the same security zone, the NAT lookup of the IP address of the phone is done using the call manager zone. The NAT policy should take this into consideration.
No-NAT rules are configured to allow exclusion of IP addresses defined within the range of NAT rules defined later in the NAT policy. To define a no-NAT policy, specify all of the match criteria and select No Source Translation in the source translation column.
You can verify the NAT rules processed by using the CLI
test nat-policy-matchcommand in operational mode. For example:
user@device1>test nat-policy-match ?+ destination Destination IP address + destination-port Destination port + from From zone + ha-device-id HA Active/Active device ID + protocol IP protocol value + source Source IP address + source-port Source port + to To Zone + to-interface Egress interface to use | Pipe through a command <Enter> Finish input user@device1>test nat-policy-match from l3-untrust source 10.1.1.1 destination 66.151.149.20 destination-port 443 protocol 6Destination-NAT: Rule matched: CA2-DEMO 66.151.149.20:443 => 192.168.100.15:443
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/networking/nat/nat-policy-rules/nat-policy-overview.html | 2020-09-19T00:34:12 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.paloaltonetworks.com |
Contents:
Contents:
NOTE: Transformations by example are developed using the current sample. When the finished transformation is applied across the entire dataset, some source values may not be matched by the patterns you specified using the current sample.
Tip: Transformation by example works best for text-based inputs. For non-text inputs, you can convert the column data type to String.
To transform by example, you can use either of the following:
- Grid View: Displays Source and Preview column values as they appear in the sample in the data grid.
- Pattern View: Displays the Source and Preview values in groups of similar values based on pattern matching by Trifacta® Wrangler.
For more information on these types of transformations, see Overview of TBE.
Transform Builder
In the Transform Builder panel on the right, you can review and change the Source and Preview columns to transform.
Tip: To transform from a different source column, select a new column from the Example column drop-down. This step clears all examples from the transformation you are building. Some columns may not be available for selection. To use such a column, change its type to String first.
Grid View
In grid view, the Source values for each row in the sample are listed next to an empty Preview column. You can create mappings by selecting a cell in the Preview column and manually entering a value.
Figure: Transformation by Example - Grid View
After you enter a value, Trifacta Wrangler attempts to match other values from the Source using the same pattern to generate additional values in the Preview column.
- Values that you manually enter are listed in dark text.
- Values that are inferred by the product are in lighter text.
- Null values indicate that no pattern has been identified to match the value.
Tip: Values that have been inferred can be replaced by manual entries for further refinement.
For more information on how to use, see Create Column by Example.
Keyboard shortcuts
- Use the arrow keys to navigate up and down the rows in the Preview column.
CTRL+ up arrow or
CTRL+ down arrow to jump to the first or last row of the sample
- In Pattern View, the above shortcuts navigate between groups of values.
ESCcancels your current edit.
RETURNsubmits your current entry as a new example.
Tip: You can copy and paste values from the clipboard into the Preview column.
Pattern View
In Pattern view, Trifacta Wrangler performs some preliminary pattern detection to group Source values together. Transformations are processed using Trifacta patterns.
Figure: Transformation by Example - Pattern View
- This view displays a maximum of five example values per pattern group.
- Other group: Values that do not map to any identifiable pattern are placed in a value group labeled,
Other.
- For more information on patterns, see Text Matching.
Toolbar
Figure: Transformation by Example toolbar
- Grid View: See previous.
- Pattern View: See previous.
- Undo: Undo the last change you made to the Preview values.
- Redo: Redo the most recently undo change you made to the Preview values.
- Trash: Remove all example values from the Preview column and start over.
This page has no comments. | https://docs.trifacta.com/display/SS/Transformation+by+Example+Page | 2020-09-18T23:17:13 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.trifacta.com |
Version 19 provides a user interface to configure Business Rules Filtering (the recommended method, as described in previous sections). For previous versions, users had to edit business rule configuration files through the command line instead; we have provided the information on how to do this in case you need it to customize your installation.The following table provides an explanation for the general filter structure. See the examples below to learn how they are being used.Each filter supports only a single match option with the following fields being configurable within a ‘match’ node.When specifying a " (quote) or a \ (backslash) character in any of the Business Rules match or pre-requisite 'filterValue' fields, these two characters must be escaped by an additional backslash character. For example, if I have the pre-requisite to filter only if the parent directory is named: My "Folder01 \ Folder02" Dir.
The Business Rule would be constructed as follows:* Keyword values apply to files only (not folders) - keywordInt, keywordStr, keywordFloat, keywordDate, and keywordBool.In order to limit validity, pre-requisites can be applied to any of the filters (as listed above). For each filter, multiple pre-requisites can be added. | https://docs.xinet.com/docs/Xinet/19.2.1/AllGuides/BusinessRules-config.53.1.html | 2020-09-18T22:33:08 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.xinet.com |
Postmasters
Who is the Postmaster?
The Postmaster is responsible for the individual domain.
Who can appoint a Postmaster?
The Postmaster is automatically created with each new domain and your account has a fixed space of 50MB.
What does the Postmaster do?
The Postmaster has access to the management of his own domains and the accounts belonging to them, within the limits assigned to it.
List of postmasters
The list of postmasters contains all the postmasters present in the Qboxmail systems belonging to a Admin with the related detailed information. To view the list of postmasters click on the name on the entry Postmaster in the control panel sidebar.
Information in the postmaster list:
- Postmaster status.
- Name of the domain to which the postmaster belongs.
- Postmaster email address
- Space occupied by the postmaster account.
Postmaster settings
To change the setting of a postmaster click on the name of the postmaster in the list of postmasters you want to modify.
- Postmaster > Postmaster name
Master data
Section dedicated to information relating to the user who uses the postmaster.
User details
Enter the information of the user using the postmaster*.
*The available fields may vary depending on the plan chosen for the domain.
Localization
Set the language (Italian or English) and the default time zone for the postmaster.
General
Section dedicated to postmaster related settings and limits.
Send limits
Change the maximum number of emails that can be sent daily by the postmaster.
The increase in the maximum number of messages that can be sent by an email account involves an increase in cost.
Services
Modify the services * available for the postmaster, set during creation:
- POP access
- IMAP access
- SMTP access
- Webmail access
- Exchange ActiveSync
- DAV Access
- XMPP access
*The services available may vary according to the general domain settings.
Forward
Enter up to a maximum of 20 alternative addresses to which to send the messages received by the postmaster.
Other
Change the delivery of spam messages to Inbox * and choose whether to reject all messages received from the postmaster by setting up an automatic reply message.
*Service availability may vary based on general domain settings.
Security
Section dedicated to the postmaster security settings.
Set policies related to the use of access passwords for the postmaster.
- Set a new password for the postmaster.
- Set the mandatory password change every 3, 6 or 12 months.
- Blocks reuse old passwords.
- Disable password change.
- Disable password recovery.
- Force password change on next account login.
OTP authentication
Sets the mandatory use of two-factor authentication (OTP) for the postmaster.
IP enabled
Restrict access to postmaster POP, IMAP, SMTP, Webmail and API services only to IP addresses that you feel are safe and reliable. You can specify up to a maximum of 5 IP addresses / classes. All connections coming from IP not present in the list will not be authorized.
API access
Change the postmaster's ability to access via API.
Autoresponder
Set up an automatic reply to all messages received by the postmaster for the time interval you prefer. | https://docs.qboxmail.com/en/panel/roles/postmasters | 2020-09-18T22:45:04 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.qboxmail.com |
Store, logical flow
As of v3.0, as long as you’ve followed our conventions in your module configuration, you probably don’t need to write your own store() method. Calling store() on any object that extends
w2p_Core_BaseObject will automatically trigger this flow;
- When an object is ready to be stored, it simply has the store() method called.
- If the object doesn’t have its own store(), it defaults to the w2p_Core_BaseObject method:
- First, all previous errors are cleared via clearErrors().
- Next, a
preStoreEventnotification is published which triggers any hook_preStore() methods for that object are called. This is where you can set default values, apply business rules, or send other notifications. Any return values from hook_preStore() are ignored.
- Next, the isValid() method is called. This is where you can perform data validation specific to the object. For example, it will make sure all Projects have Owners, that Link Urls actually have value, and new Users have unique Usernames.
- Under no circumstances should isValid() modify data in or outside the object.
- If isValid() returns true, processing continues.
- If isValid() returns false, the store() immediately returns false. To inspect the reason(s) for failure, calling getError() on the object returns an array listing each error.
- Next, a
preCreateEventor
preUpdateEventnotification is published depending on whether the object is being created or updated in the database. These events trigger hook_preCreate() or hook_preUpdate() respectively. These are allowed to modify the object to set fields such as date created, date updated, etc.
- Next, the processing checks if the object already has an id - like link_id on a CLink object - and the user has canEdit permission on that object. The default canEdit() method (on w2p_Core_BaseObject) will perform the standard permissions check. If necessary, you can create a custom canEdit() method in your module. This allows circumstances such as allowing a User to always edit themselves even if they don’t have permission to manage users.
- If the object has both an id and the canEdit() method returns true, the system attempts to update the object.
- Next, the processing checks if the id is 0 and the user has canCreate permission for that module. The default canCreate() method (on w2p_Core_BaseObject) will perform the standard permissions check. If necessary, you can create a custom canCreate method in your module.
- Next, if the update or create was successful, a
postCreateEventor
postUpdateEventnotification is published depending on which occurred. These events trigger hook_postCreate() or hook_postUpdate() respectively. This is where you should trigger email notifications, cache updates, or eventually workflow rules. Then a
postStoreEventnotification is published regardless of which one occurred.
- If you have the History module enabled for audit logging each of these post hooks will generate a log entry.
- Finally a true is returned if an update or create was successful or a false if neither.
Note: If there are any errors anywhere in this process, store() returns a boolean FALSE and you can inspect the errors by calling getError() on the object.
While this seems incredibly complicated, it is all handled behind the scenes for you automagically by simply calling store() on your object.
- In the vast majority of cases, you will only have to write an isValid() method for your module.
- In some cases, you will need your own hooks into preStore, preCreate, preUpdate, postCreate, postUpdate, or postStore.
- In rare circumstances when you have module-specific permission requirements, you may also need custom canCreate(), canEdit() methods.
- Only in very exceptional cases will you still have to write your own store() method. Within core web2project, this is required in exactly one place in the CSystem_SysVal class (./modules/system/sysvals.class.php). | http://docs.web2project.net/docs/store-logical-flow.html | 2018-01-16T11:24:49 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.web2project.net |
If you have problems accessing data from an ODBC‑enabled application, follow the instructions in this topic. If you’re caching the system catalog, also see Troubleshooting system catalog caching.
For information on troubleshooting problems encountered when generating a system catalog, see Troubleshooting system catalog generation.
Review the repository definitions
If you have problems accessing data, or if data does not display as expected, the problem might be with the way the data is defined in the repository.
See Setting up a repository for more information.
Verify environment and environment variables
Because environment variables are crucial in enabling xfODBC to locate files, it is important to verify them.
Verify file locations
Verify the system catalog
Use DBA to verify the system catalog
You can use the xfODBC Database Administrator (DBA) program to verify that the data definitions were converted correctly.
Verify encryption
If you have verified environment variables and file locations and you are still unable to connect, check the encryption settings on the client (in net.ini) and on the server (in the vtxnetd or vtxnet2 command line). Make sure these settings match, or for testing, remove the encryption settings in both locations and see if you can connect. Additionally, make sure the net.ini file is in the directory specified by VORTEX_HOME. Mismatched encryption settings, along with the inability to access encryptions settings in net.ini, can cause a variety of errors when you try to connect, including “invalid connect syntax,” “invalid user ID and/or password,” and “invalid DSN” errors. For more information, see Setting SQL OpenNet client options in net.ini.
Verify the log‑in
Open your database with a third‑party application
Did you generate a system catalog from the Synergy sample database, and were you able to access this data successfully with an ODBC‑enabled application?
Note that Synergex does not provide support for other ODBC‑enabled applications, and resolving problems particular to a specific third‑party application is beyond the scope of this user’s guide.
If you are still encountering problems…
If you have followed all the above troubleshooting steps and are still unable to access your database, turn on the ODBC logging options and use the tracing feature. See Error logging for xfODBC for instructions. You may need to call Synergy/DE Developer Support for an interpretation of the log files you generate.
Other sources of information
The Synergy/DE KnowledgeBase is available in the Resource Center on the Synergex website for customers who have purchased a support agreement. The KnowledgeBase includes a wealth of troubleshooting information and answers to frequently asked questions that may help you solve a problem you have encountered. To purchase a support agreement, contact your Synergy/DE account manager.
The REL_CONN.TXT release notes distributed with Connectivity Series include the latest information about new features and restrictions in xfODBC. | http://docs.synergyde.com/odbc/odbcChap9Troubleshootingdataaccess.htm | 2018-01-16T11:08:27 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.synergyde.com |
Circle (Center Radius)
Toolbox Icon:
Draws a circle of a specified radius with a point set for the center. Two more points can be used to orient the circle in 3D space.
Point 1: Center of the circle
Point 2: Orientation of the circle (optional). This makes it possible to scale the circle or treat it as a line entity with other commands.
Enter the Radius of the circle in the window. To use the same radius as another circle, click the Same As icon. A rubber-band circle shows how the circle will be drawn. Set a point for the center of the circle. Press Enter or set a second point to orient the circle. Press Enter or set a third point to tilt the circle using the first two points as a hinge.
N Segments: When saved as a plane or line, this is the number of segments, into which the object will be divided. | http://docs.imsidesign.com/projects/DesignCad-2020-User-Guide/DesignCad-2020-User-Guide/2D-Drawing-Tools/Circles/Circle-%28Center-Radius%29/ | 2021-10-16T00:36:01 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../../Storage/designcad-2020-user-manual-publication/circle-center-radius-img00012.png',
'img'], dtype=object)
array(['../../Storage/designcad-2020-user-manual-publication/circle-center-radius-img00022.png',
'img'], dtype=object) ] | docs.imsidesign.com |
Convert Plugin¶
The
convert plugin lets you convert parts of your collection to a
directory of your choice, transcoding audio and embedding album art along the
way. It can transcode to and from any format using a configurable command
line.
Installation¶
To use the
convert plugin, first enable it in your configuration (see
Using Plugins). By default, the plugin depends on FFmpeg to
transcode the audio, so you might want to install it.
Usage¶
To convert a part of your collection, run
beet convert QUERY. The
command will transcode all the files matching the query to the
destination directory given by the
-d (
--dest) option or the
dest configuration. The path layout mirrors that of your library,
but it may be customized through the
paths configuration.
The plugin uses a command-line program to transcode the audio. With the
-f (
--format) option you can choose the transcoding command
and customize the available commands
through the configuration.
Unless the
-y (
--yes) flag is set, the command will list all
the items to be converted and ask for your confirmation.
The
-a (or
--album) option causes the command
to match albums instead of tracks.
By default, the command places converted files into the destination directory
and leaves your library pristine. To instead back up your original files into
the destination directory and keep converted files in your library, use the
-k (or
--keep-new) option.
To test your configuration without taking any actions, use the
--pretend
flag. The plugin will print out the commands it will run instead of executing
them.
Configuration¶
To configure the plugin, make a
convert: section in your configuration
file. The available options are:
- auto: Import transcoded versions of your files automatically during imports. With this option enabled, the importer will transcode all (in the default configuration) non-MP3 files over the maximum bitrate before adding them to your library. Default:
no.
- tmpdir: The directory where temporary files will be stored during import. Default: none (system default),
- copy_album_art: Copy album art when copying or transcoding albums matched using the
-aoption. Default:
no.
- album_art_maxwidth: Downscale album art if it’s too big. The resize operation reduces image width to at most
maxwidthpixels while preserving the aspect ratio.
- dest: The directory where the files will be converted (or copied) to. Default: none.
- embed: Embed album art in converted items. Default:
yes.
- max_bitrate: All lossy files with a higher bitrate will be transcoded and those with a lower bitrate will simply be copied. Note that this does not guarantee that all converted files will have a lower bitrate—that depends on the encoder and its configuration. Default: none.
- never_convert_lossy_files: Cross-conversions between lossy codecs—such as mp3, ogg vorbis, etc.—makes little sense as they will decrease quality even further. If set to
yes, lossy files are always copied. Default:
no.
- paths: The directory structure and naming scheme for the converted files. Uses the same format as the top-level
pathssection (see Path Format Configuration). Default: Reuse your top-level path format settings.
- quiet: Prevent the plugin from announcing every file it processes. Default:
false.
- threads: The number of threads to use for parallel encoding. By default, the plugin will detect the number of processors available and use them all.
You can also configure the format to use for transcoding (see the next section):
- format: The name of the format to transcode to when none is specified on the command line. Default:
mp3.
- formats: A set of formats and associated command lines for transcoding each.
Configuring the transcoding command¶
You can customize the transcoding command through the
formats map
and select a command with the
--format command-line option or the
format configuration.
convert: format: speex formats: speex: command: ffmpeg -i $source -y -acodec speex $dest extension: spx wav: ffmpeg -i $source -y -acodec pcm_s16le $dest
In this example
beet convert will use the speex command by
default. To convert the audio to wav, run
beet convert -f wav.
This will also use the format key (wav) as the file extension.
Each entry in the
formats map consists of a key (the name of the
format) as well as the command and optionally the file extension.
extension is the filename extension to be used for newly transcoded
files. If only the command is given as a string or the extension is not
provided, the file extension defaults to the format’s name.
command is the
command to use to transcode audio. The tokens
$source and
$dest in the
command are replaced with the paths to the existing and new file.
The plugin in comes with default commands for the most common audio
formats: mp3, alac, flac, aac, opus, ogg, wmv. For
details have a look at the output of
beet config -d.
For a one-command-fits-all solution use the
convert.command and
convert.extension options. If these are set, the formats are ignored
and the given command is used for all conversions.
convert: command: ffmpeg -i $source -y -vn -aq 2 $dest extension: mp3
Gapless MP3 encoding¶
While FFmpeg cannot produce “gapless” MP3s by itself, you can create them by using LAME directly. Use a shell script like this to pipe the output of FFmpeg into the LAME tool:
#!/bin/sh ffmpeg -i "$1" -f wav - | lame -V 2 --noreplaygain - "$2"
Then configure the
convert plugin to use the script:
convert: command: /path/to/script.sh $source $dest extension: mp3
This strategy configures FFmpeg to produce a WAV file with an accurate length
header for LAME to use. Using
--noreplaygain disables gain analysis; you
can use the ReplayGain Plugin to do this analysis. See the LAME
documentation and the HydrogenAudio wiki for other LAME configuration
options and a thorough discussion of MP3 encoding. | https://beets.readthedocs.io/en/v1.4.5/plugins/convert.html | 2021-10-16T00:06:50 | CC-MAIN-2021-43 | 1634323583087.95 | [] | beets.readthedocs.io |
Function:
a!imageField()
Displays an image from document management or the web.
See also: Document Image, User Image, Web Image
The maximum display dimensions for each Size are listed below:
"ICON": 20x20 pixels
"TINY": 60x120 pixels
"GALLERY": 240x80 pixels
"SMALL": 100x200 pixels
"MEDIUM": 200x400 pixels
"LARGE": 400x600 pixels
"FIT": natural dimensions.
"FIT", images display at either their natural width or the width of the container, whichever is smaller.:
The following patterns include usage of the Image Component.
Call to Action Pattern (Formatting): Use the call to action pattern as a landing page when your users have a single action to take.
Display Processes by Process Model with Status Icons (Grids, Images, Reports): Use an interface to display information about instances of a specific process model.
Save a User's Report Filters to a Data Store Entity (Grids, Smart Services, Filtering, Reports): Allow a user to save their preferred filter on a report and automatically load it when they revisit the report later.
Track Adds and Deletes in Inline Editable Grid (Grids): In an inline editable grid, track the employees that are added for further processing in the next process steps.
User List Pattern (Looping): The user list pattern retrieves all the users in a specified group and displays them in a single column.
On This Page | https://docs.appian.com/suite/help/20.2/Image_Component.html | 2021-10-15T22:41:20 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.appian.com |
Collection Manager¶
This add-on adds new functionality for the management of collections via a pop-up and a QCD (Quick Content Display) system in the 3D Viewport. It also offers simple display and modification of the relationship of objects with collections.
Activation¶
Open Blender and go to Preferences then the Add-ons tab.
Click Interface then Collection Manager to enable the script.
Description
Data-Block Menu.
Hint
All options can be combined with each other.
- Specials
-.
- Name
This is static and can’t be edited.
- Set Object (box icon)
LMB – Move object(s) to collection.
Shift-LMB – Add/Remove object(s) to/from collection.
-.
Ctrl-Alt-LMB – Swap the restriction state on all collections with that of another restriction..)
- Name
Double LMB-click to rename the collection.
- Set Object (box icon)
LMB – Move object(s) to collection.
Shift-LMB – Add/Remove object(s) to/from collection.
-.
- Remove
X
Remove the collection.
- Filtering
-.
Hint
All options can be combined with each other.
- Add Collection, Add Subcollection
Self-explanatory.
Note)
Hotkeys.
Hotkeys.
Hotkeys.
Note
Slots with objects not in Object Mode can not be excluded.
Preferences.
Noteary¶
- General
- Chaining
Dependent on parents for whether an RTO can be active.
- LMB
Left Mouse Button.
-
- Category
Interface
- Description
Collection management system.
- Location
3D Viewport
- File
object_collection_manager folder
- Author
Imaginer (Ryan Inch)
- Maintainer
Imaginer
- License
GPL
- Support Level
Community
- Note
This add-on is bundled with Blender. | https://docs.blender.org/manual/en/2.92/addons/interface/collection_manager.html | 2021-10-15T22:43:30 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.blender.org |
dask.array.random.standard_cauchy¶
- dask.array.random.standard_cauchy(size=None, chunks='auto', **kwargs)¶
Draw samples from a standard Cauchy distribution with mode = 0.
This docstring was copied from numpy.random.mtrand.RandomState.standard_cauchy.
Some inconsistencies with the Dask version may exist.
Also known as the Lorentz distribution.
Note
New code should use the
standard_cauchymethod of a
default_rng()instance instead; please see the\[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\]
and the Standard Cauchy distribution just sets \(x_0=0\) and \(\gamma() | https://docs.dask.org/en/latest/generated/dask.array.random.standard_cauchy.html | 2021-10-16T00:00:48 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.dask.org |
Follow the upgrade path and use an upgrade method according to the current vCloud Availability version. Then select a source repository that contains the upgrade files and upgrade each vCloud Availability component in the cloud site, according to a specific order.
Upgrade Paths
To upgrade VMware vCloud Availability to the latest release, use the following upgrade methods according to the current vCloud Availability version.
Upgrade Repository
To upgrade VMware vCloud Availability, you can configure each vCloud Availability component to download the upgrade files from the following source repositories. | https://docs.vmware.com/en/VMware-vCloud-Availability/3.0/com.vmware.vcav.cloud.install.config.upgrade.doc/GUID-F1018559-F8C9-4FB7-AACD-F390FA81768C.html | 2021-10-15T23:07:59 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.vmware.com |
Trusted log on using a trusted 3rd party (typically SAML) based Identity Provider.
It works with similar behaviour to the LOGON method, but accepts different inputs to validate authentication (AuthN).
By default, mydigitalstructure expects the user logon name to be an email address, as returned by the Identity Provider.
See Getting Started Single Sign On (SSO) / SAML. | http://docs.mydigitalstructure.com/LOGON_TRUSTED | 2018-12-09T23:42:09 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.mydigitalstructure.com |
Full System Recovery: Recovering a Cluster on Windows Server 2008 and later
You can recover a cluster on Windows Server 2008 clients in one of the following ways:
- Restoring Cluster Services
- Authoritative Restores
- Non-Authoritative Restores
Restoring Cluster Services
On a Windows computers, the cluster services will be restored automatically in any of the following scenarios:
Authoritative Restores
Perform the Authoritative Restore of cluster on Windows 2008 or later if you alter the cluster configuration or delete a large number of resources. You can perform the authoritative restore only if all cluster nodes and services are running.
Non-Authoritative Restores
A non-authoritative restore is performed in the following scenarios:
- Single node in a cluster fails completely (no boot) and the shared disks are still functional
- Complete loss of all nodes and storage
To restore the original cluster configuration, perform the following tasks:
Related Topics
Recovering a Non-Domain Controller
Recovering a Domain Controller
Recovering RIS Server with SIS Common Store | http://docs.snapprotect.com/netapp/v11/article?p=features/disaster_recovery/c_win_2008_dr_cluster.htm | 2018-12-09T23:55:31 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.snapprotect.com |
LoadStringA function
Loads a string resource from the executable file associated with a specified module, copies the string into a buffer, and appends a terminating null character.
Syntax
int LoadStringA( HINSTANCE hInstance, UINT uID, L is to receive the string. resource itself.
Return Value
Type: int
If the function succeeds, the return value is the number of characters copied into the buffer, not including the terminating null character, or zero if the string resource does not exist. To get extended error information, call GetLastError.
Remarks | https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-loadstringa | 2018-12-10T00:41:22 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.microsoft.com |
foreach Summary Iterates over an object's own enumerable properties but not properties inherited from its prototype. Statements can be executed for each enumerable property. Syntax foreach (variable in object) statement; Parameters variable The variable to assign to on each iteration. A variable declaration or valid identifier expression is expected. If variables are declared here, they are scoped to the loop. object The object to iterate over. statement The statement to execute. In order to execute a group of statements, use a block. Description The foreach loop iterates over each element of a collection (such as an array or dictionary). In order to loop over keys of a collection, use the for-in Loop. The type of the variable in the foreach loop must be a type compatible with the object being iterated over. For example, when looping over an int[] array, the variable assigned to the element for each iteration must be int. If the object to iterate over has an external type, behavior will vary depending on the runtime type of the object. If the runtime type of the object is a JavaScript array, the foreach loop will iterate over only the array elements. Otherwise, the foreach loop will iterate over each property of the object (including properties inherited through the prototype chain) and assign the value of the property to the foreach loop variable. However, if the JavaScript object has the internal [[DontEnum]] flag set, the property will not be enumerated with either for-in or foreach loops. If the object to iterate over has an external type, the values iterated over will include values inherited from the prototype chain and the iteration will occur in no guaranteed order. For arrays (both JavaScript and JS++ arrays), the order is guaranteed to be sequential and only the array element values will be enumerated when using foreach. For non-array JS++ containers (such as dictionaries), the order is dependent on the type and may not necessarily be guaranteed. Runtime Modification of Containers If arrays are modified at runtime, the behavior is defined. Arrays are iterated from the first element to the last element - including modifications. If dictionary keys are deleted at runtime, the behavior is defined. Deleted dictionary keys that have not yet been visited will not be visited. However, if keys are added at runtime, the behavior is undefined and added keys are not guaranteed to be visited across all web browsers. Examples Looping Over Array Elements 123456789101112import System; int[] arr = [ 3, 4, 5 ]; foreach (int x in arr) { Console.log(x);} // Output:// 3// 4// 5 Looping Over Dictionary Elements 1234567891011121314import System; Dictionary<int> dict = {{ "foo": 10, "bar": 100}}; foreach (int value in dict) { Console.log(value);} // Output:// 10// 100 See Also Block for-in Loop Share HTML | BBCode | Direct Link | https://docs.onux.com/en-US/Developers/JavaScript-PP/Language/Reference/Statements/foreach | 2018-12-09T23:53:37 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.onux.com |
Upgrade on Kubernetes
This guide describes the procedure to upgrade Portworx running as OCI container using talisman.
To upgrade to the 2.0 release, run the curl command:
curl -fsL | bash -s
To upgrade to the 1.7 release, run the curl command:
curl -fsL | bash -s
This runs a script that will start a Kubernetes Job to perform the following operations:
- Runs a DaemonSet on the cluster which fetches the new Portworx image. This reduces the time Portworx is down between the old and new versions as the image is already pulled.
- If the upgrade is from version 1.2 to 1.3 or 1.4, it will scale down all Deployments and StatefulSets that use shared Portworx PersistentVolumeClaims. If you are already at version 1.4 and upgrading to subsequent versions, this is not required.
- Triggers RollingUpdate of the Portworx DaemonSet to the default stable image.
- If the upgrade is from version 1.2 to 1.3 or 1.4, all application pods using Portworx PersistentVolumeClaims will be rescheduled to other nodes in the cluster before the new Portworx version starts on that node. If you are already at version 1.4 and upgrading to subsequent versions, this is not required.
- Restore any Deployments or StatefulSets that were scaled down in step 2 back to their original replicas.
This script will also monitor the above operations.
Customizing the upgrade process
Specify a different Portworx upgrade image
You can invoke the upgrade script with the -t to override the default Portworx image. For example below command upgrades Portworx to portworx/oci-monitor:1.7.2 image.
curl -fsL | bash -s -- -t 1.7.2
Disable scaling down of shared Portworx applications during the upgrade
You can invoke the upgrade script with –scaledownsharedapps off to skip scaling down Deployments and StatefulSets that use shared Portworx PersistentVolumeClaim.
For example:
curl -fsL | bash -s -- --scaledownsharedapps off
By default, the upgrade process scales down shared applications as that avoids a node reboot when upgrading between major Portworx versions. Disabling that flag would mean the node would need a reboot before Portworx comes up with the new major version.
Troubleshooting
Find out status of Portworx pods
To get more information about the status of Portworx daemonset across the nodes, run:
kubectl get pods -o wide -n kube-system -l name=portworx NAME READY STATUS RESTARTS AGE IP NODE portworx-9njsl 1/1 Running 0 16d 192.168.56.73 minion4 portworx-fxjgw 1/1 Running 0 16d 192.168.56.74 minion5 portworx-fz2wf 1/1 Running 0 5m 192.168.56.72 minion3 portworx-x29h9 0/1 ContainerCreating 0 0s 192.168.56.71 minion2
As we can see in the example output above:
- looking at the STATUS and READY, we can tell that the rollout-upgrade is currently creating the container on the “minion2” node
- looking at AGE, we can tell that:
- “minion4” and “minion5” have Portworx up for 16 days (likely still on old version, and to be upgraded), while the
- “minion3” has Portworx up for only 5 minutes (likely just finished upgrade and restarted Portworx)
- if we keep on monitoring, we will observe that the upgrade will not switch to the “next” node until STATUS is “Running” and the READY is 1⁄1 (meaning, the “readynessProbe” reports Portworx service is operational).
Find out version of all nodes in Portworx cluster
One can run the following command to inspect the Portworx cluster:
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $PX_POD -n kube-system /opt/pwx/bin/pxctl cluster list [...] Nodes in the cluster: ID DATA IP CPU MEM TOTAL ... VERSION STATUS minion5 192.168.56.74 1.530612 4.0 GB ... 1.2.11.4-3598f81 Online minion4 192.168.56.73 3.836317 4.0 GB ... 1.2.11.4-3598f81 Online minion3 192.168.56.72 3.324808 4.1 GB ... 1.2.11.10-421c67f Online minion2 192.168.56.71 3.316327 4.1 GB ... 1.2.11.10-421c67f Online
- from the output above, we can confirm that the:
- “minion4” and “minion5” are still on the old Portworx version (1.2.11.4), while
- “minion3” and “minion2” have already been upgraded to the latest version (in our case, v1.2.11.10).
Manually restoring scaled down shared applications
If the upgrade job crashes unexpectedly and fails to restore shared applications back to their original replica counts, you can run the following command to restore them.
curl -fsL | bash -s -- --scaledownsharedapps off | https://docs.portworx.com/portworx-install-with-kubernetes/operate-and-maintain-on-kubernetes/upgrade/ | 2018-12-09T23:42:27 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.portworx.com |
Trial grant is for testing user's needs and requirements regarding resources.
Allows for use of 1000 CPU units and.
Resource grants are always associated with a workgroup - a unit which represents a user or a group of users in resource negotiation process with supercomputer centers.
For trial grants, even though no negotiations are necessary, each user gets a personal so-called "trial workgroup". It is created at the moment user's first affiliation is activated and the only member (and leader) of the group is its owner. Its parameters cannot be changed.
Trial workgroup parameters:
Even though the workgroups with status "private" are not visible in team search, the supervisor of such a group can add other users as workgroup members and thus share trial grant resources with them.
To activate a trial grant, click the green button "Włącz grant testowy" in "Granty testowe" submenu.
You will see a form with parameters of your trial grant: | https://docs.cyfronet.pl/display/PLGDoc/Trial+grant | 2018-12-09T23:38:52 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.cyfronet.pl |
This section provides solutions to common issues with TeamForge-Binary integrations.';
Note: In the above query, replace
with valid base URL, go URL and end point URL for your site. The URLs must use https as illustrated above.
Binary initialization fails at the end of provision. Why?
If SOAP services are not completely up and running during service startup, binary initialization fails at the end of provision. As a workaround, reinitialize binary with this command:
teamforge reinitialize -s binary | http://docs.collab.net/teamforge182/binaries-faqs.html | 2018-12-09T23:27:11 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.collab.net |
In the Barcode Completion admin interface at Administration → Local Administration → Barcode Completion you can create, update and disable rules.
To create a new rule click on the New button in the upper right corner. When you are are done with editing the new rule click the Save button. If you want to cancel the new rule creation click the Cancel button.
It may be useful to filter the rules list if there are a large number of rules. Click on the filter link to bring up the Filter Results dialog box. You can filter on any of the data fields and you can setup multiple filter rules. Click Apply to enable the filter rules, only the rows that match will now be displayed.
To clear out the filter rules, delete all of the filter rules by clicking the X next to each rule, and then click Apply. | http://docs.evergreen-ils.org/3.0/_create_update_filter_delete_disable_rules.html | 2018-12-10T01:18:52 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.evergreen-ils.org |
Printing from a WorkSpace
The following printing methods are supported by Amazon WorkSpaces.
Printing Methods
Local Printers
Amazon WorkSpaces supports.
In some cases, you need to download and install the driver for your local printer manually on the WorkSpace. When you install a printer driver on your WorkSpace, there are different types of drivers that you may encounter:
Add Printer wizard driver. This driver includes only the printer drivers, and are for users who are familiar with installation using the Add Printer wizard in Windows.
Printer model-specific drivers which do not require communication with the printer. In these cases, you can install the printer driver directly.
Printer model-specific drivers which require communication with the printer. In these cases, you can use the printer driver files to add a local printer using an existing port (LPT1:). After selecting the port, you can choose Have Disk and select the
.INFfile for the printer driver.
After installing the printer driver, you must restart the WorkSpace for the new printer to be recognized.
If you cannot print to your local printer from your WorkSpace, make sure you can print to your local printer from your client computer. If you cannot print from your client computer, refer to the printer documentation and support to resolve the issue. If you can print from your client computer, contact AWS Support for further assistance.
Other Printing Methods
You can also use one of the following methods to print from a WorkSpace:
In a connected directory, you can attach your WorkSpace to network printers that are exposed through Active Directory.
Use a cloud printing service, such as Google Cloud Print or HP Mobile Printing.
Print to a file, transfer the file to your local desktop, and print the file locally to an attached printer. | https://docs.aws.amazon.com/workspaces/latest/userguide/printing.html | 2018-12-10T00:50:12 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.aws.amazon.com |
View > Bookmarks to show/hide this pane.
Bookmarks are provided for your editing convenience. You can set a bookmark and return to it later.
Bookmarks have nothing to do with debugging.
This pane shows all bookmarks set in your project.
Double-clicking on any entry in the list teleports you to a corresponding place in the sources.
To toggle (set/clear) a bookmark, put the | cursor on the line in the Editor pane you wish to mark/unmark and press [CTRL]+[B].
To clear all bookmarks right-click on the Bookmarks pane and select Remove All. | http://docs.tibbo.com/taiko/pane_bookmarks.htm | 2018-12-10T00:38:53 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.tibbo.com |
.
Settings¶
This configuration file has the following settings:
- attribute
A range of chef-client versions that are supported by this cookbook.
For example, to match any 12.x version of the chef-client, but not 11.x or 13.x:
chef_version "~> 12"
Or matches any 12.x (or higher) version of the chef-client:
chef_version ">= 12"
Or matches any version of the chef-client greater than 12.5.1, any 13.x version, but no 14.x versions:
chef_version ">= 12.5.1", "< 14.0"
Or matches any version of the chef-client greater than or equal to 11.18.4 and less than 12.0 and also any version of the chef-client greater than or equal to 12.5.1, but less than 13.0:
chef_version ">= 11.18.12", "< 12.0" chef_version ">= 12.5.1", "< 13.0"
Note
This setting is not visible in Chef Supermarket.
- depends
Show that a cookbook has a dependency on another cookbook. Use a version constraint to define dependencies for cookbook versions: < (less than), <= (less than or equal to), = (equal to), >= (greater than or equal to; also known as “optimistically greater than”, or “optimistic”), ~> (approximately greater than; also known as “pessimistically greater than”, or “pessimistic”), or > (greater than)."
- issues_url
The URL for the location in which a cookbook’s issue tracking is maintained. This setting is also used by Chef Supermarket. In Chef Supermarket, this value is used to define the destination for the “View Issues” link.
For example:
issues_url ''
-'
- ohai_version
A range of chef-client versions that are supported by this cookbook.
For example, to match any 8.x version of Ohai, but not 7.x or 9.x:
ohai_version "~> 8"
Or matches any 8.x (or higher) version of Ohai:
ohai_version ">= 8"
Note
This setting is not visible in Chef Supermarket.
-' | https://docs-archive.chef.io/release/12-8/config_rb_metadata.html | 2018-12-10T00:54:35 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs-archive.chef.io |
Export
You can use the Export view to export various module details in XML format. You can also download any previously uploaded audio prompts.
To export files:
- Click Export in the navigation bar to access the Export view.
- In the What to Export section, select one of the following options:
- Export Everything – Include all callflows, uploaded grammars, product-specific settings, and uploaded audio files.
- Export Prompts Only – Include uploaded audio files, which includes those that are part of the callflow and those that are product-specific.
- Export Product-Specific Data and Prompts Only – Unlike Export Everything, this option excludes callflow information, static prompts, and grammars.
- In the Modules to Export section, select which Modules you want to export. You can select multiple modules by holding the Ctrl key on your keyboard as you click modules.
- In the Export Options section, enable the Use Production Version of Each Module check box to export the production version of the module, or disable the check box to export the test version of the module.
- Click Export.
This page was last modified on January 19, 2018, at 09:03.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GAAP/latest/iaHelp/Export | 2018-12-10T00:18:53 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.genesys.com |
Copy File
The
Copy File operation copies a blob or file to a destination file within the storage account.
Available in version 2015-02-21 and newer.
Request
The
Copy File request may be constructed as follows. HTTPS is recommended.
Beginning with version 2013-08-15, you may specify a shared access signature for the destination file if it is in the same account as the source file. Beginning with version 2015-04-05, you may also specify a shared access signature for the destination file if it is in a different storage account.
Replace the path components shown in the request URI with your own, as follows:
For details on path naming restrictions, see Naming and Referencing Shares, Directories, Files, and Metadata. 202 (Accepted).
For information about status codes, see Status and Error Codes.
Response Headers
The response for this operation includes the following headers. The response also includes additional standard HTTP headers. All standard headers conform to the HTTP/1.1 protocol specification.
Response Body
None
Sample Response
Response Status: HTTP/1.1 202 Accepted Response Headers: Last-Modified: <date> ETag: "0x8CEB669D794AFE2" Server: Windows-Azure-File, or by a client possessing a shared access signature that has permission to write to the destination file or its share. Note that the shared access signature specified on the request applies only to the destination file.
Access to the source file or blob is authorized separately, as described in the details for the request header
x-ms-copy-source.
The following table describes how the destination and source objects for a Copy File operation may be authenticated.
Remarks
The Copy File operation can complete asynchronously. The copy ID returned by the
x-ms-copy-id response header can be used to check the status of the copy operation or to abort it. The File service copies files on a best-effort basis.
If the destination file exists, it will be overwritten. The destination file cannot be modified while the copy operation is in progress.
The Copy File operation always copies the entire source blob or file; copying a range of bytes or set of blocks is not supported.
The source of a Copy File operation can be a file which resides in a share snapshot. The destination of a Copy File operation cannot be a file which resides in a share snapshot.
When the source of a copy operation provides ETags, if there are any changes to the source while the copy is in progress, the copy will fail. An attempt to change the destination file while a copy is in progress will fail with 409 (Conflict).
The ETag for the destination file changes when the Copy File operation is initiated, and continues to change frequently during the copy operation.
Copying Properties and Metadata
When a blob or file is copied, the following system properties are copied to the destination file with the same values:
Content-Type
Content-Encoding
Content-Language
Content-Length
Cache-Control
Content-MD5
Content-Disposition
The destination file is always the same size as the source blob or file, so the value of the Content-Length header for the destination file matches that for the source blob or file.
Copying a Leased Blob to a File
The Copy File operation only reads from the source blob, so the lease state of the source blob does not matter. However, the Copy File operation saves the ETag of the source blob when the copy is initiated. If the ETag value changes before the copy completes, the copy fails. You can prevent changes to the source blob by leasing it during the copy operation.
Working with a Pending Copy
The Copy File operation may complete the copy asynchronously. Use the following table to determine the next step based on the status code returned by Copy File:
During and after a Copy File operation, the properties of the destination file contain the copy ID of the Copy File operation and URL of the source blob or file. When the copy completes, the File service writes the time and outcome value (success, failed, or aborted) to the destination file properties. If the operation failed, the
x-ms-copy-status-description header contains an error detail string.
A pending Copy File operation has a 2 week timeout. A copy attempt that has not completed after 2 weeks times out and leaves an empty file with the
x-ms-copy-status field set to failed and the
x-ms-status-description set to 500 (OperationCancelled). Intermittent, non-fatal errors that can occur during a copy might impede progress of the copy but not cause it to fail. In these cases,
x-ms-copy-status-description describes the intermittent errors.
Any attempt to modify the destination file during the copy will fail with 409 (Conflict) Copy File in Progress.
If you call Abort Copy File operation, you will see a
x-ms-copy-status:aborted header and the destination file will have intact metadata and a file length of zero bytes. You can repeat the original call to Copy File to try the copy again.
Billing
The destination account of a Copy File operation is charged for one transaction to initiate the copy, and also incurs one transaction for each request to abort or request the status of the copy operation.
When the source file or blob is in another account, the source account incurs transaction costs. In addition, if the source and destination accounts reside in different regions (e.g. US North and US South), bandwidth used to transfer the request is charged to the source account as egress. Egress between accounts within the same region is free.
See Also
Authentication for the Azure Storage Services
Status and Error Codes
File Service Error Codes
Abort Copy File | https://docs.microsoft.com/en-us/rest/api/storageservices/copy-file | 2018-12-10T00:54:49 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.microsoft.com |
Generic Programming Generic programming in JS++ allows you to define classes, interfaces, and functions with "type parameters." This enables classes and algorithms to be defined once for all applicable data types. The classic example of generic programming is containers. In the JS++ Standard Library, there are generic container classes such as System.Array<T> and System.Dictionary<T>. Instantiating these classes might look like this: 123456789101112System.Dictionary<int> intDict = { foo: 1, bar: 2, baz: 3}; System.Dictionary<string> stringDict = { "China": "Beijing", "Japan": "Tokyo", "USA": "Washington, D.C.", "UK" : "London"}; Notice that we declared two different types. intDict had the type System.Dictionary<int> and stringDict had the type System.Dictionary<string>. Both are using the System.Dictionary container, but their type arguments differ. intDict provided the int type argument to System.Dictionary and stringDict provided the string type argument to System.Dictionary. Passing type arguments to System.Dictionary is allowed because System.Dictionary is a generic class that takes a type parameter: 1234567module System{ class Dictionary<T> { // ... }} This allows us to define one Dictionary class that works on all types. From here, we might further define a generic method as follows: 1234567891011class Dictionary<T>{ public T[] values() { T[] result = []; // Get all values in dictionary and store in 'result' // ... return result; }} We can then instantiate the generic class and use the method as follows: 123456import System; auto dict = new Dictionary<string>({ foo: "bar"});string[] values = dict.values(); Notice that the values() method was defined to return T[] (an array of type parameter T). When we instantiated Dictionary, we provided the type argument string. Therefore, all methods inside the instantiated Dictionary will have all T types replaced with string. When we call the values() method, it returns the replaced type string[] (and not T[]). We can also instantiate a new dictionary with a different type argument: 123456import System; auto dict = new Dictionary<int>({ foo: 1});int[] values = dict.values(); This time, the Dictionary was instantiated with the int type argument, and, therefore, the values() method will return int[] (instead of string[] as in the previous example). As illustrated, generic programming allows us to support multiple different data types for classes and functions while only having to define the class or function once. Generic Interfaces In addition to classes, interfaces can also be generic: 1234interface IFoo<T>{ // ...} Since a single class can implement multiple interfaces, this can be useful with generic interfaces. For example, with the Standard Library's IComparable<T> interface, this can enable a class to be compared with multiple different types: 1234567891011import System; class Dog : IComparable<Cat>, IComparable<Rabbit>{ public final Comparison compare(Cat that) { return Comparison.EQUAL; } public final Comparison compare(Rabbit that) { return Comparison.LESS_THAN; }} (Please note that implementations of the IComparable interface affects sorting behavior, e.g. for System.Array<T>.sort). Generic Functions Free functions (functions that don't belong to a class) and static class methods can also be generic. 123456789101112131415import System; bool equals<T1: IComparable<T2>, T2>(T1 a, T2 b) { // Free function return a.compare(b) == Comparison.EQUAL;} class Bar {}class Foo : IComparable<Bar>{ public final Comparison compare(Bar that) { return Comparison.EQUAL; }} Console.log(equals(new Foo(), new Bar())); Generic parameters are hoisted so, in the code above, T2 can be accessed before it is declared. Generic Constraints By default, generic parameters are not constrained to any type. In order to limit the type arguments a generic class can accept, a constraint can be specified: 1class Foo<T: System.Object> {} As illustrated above, constraints are defined using the colon (:) syntax followed by the type to constrain the generic parameter to. In addition, the where keyword can also be used for defining constraints: 1class Foo<T> where T: System.Object {} The alternative where keyword syntax can be used in some cases to enhance readability. By default, JS++ constraints are subtype constraints. In other words, only the class defined as the constraint and its subclasses (but not superclasses) can be passed as arguments for the constrained generic parameter. Wildcard Constraint It's often desirable to pass primitive types (such as string, int, or even external) as arguments. Passing primitive types as arguments is possible in the Standard Library containers such as System.Array and System.Dictionary. The ability to accept primitive types as type arguments can be enabled using the "wildcard" constraint: 12class Foo<T: *> {}auto foo = new Foo<string>(); By default, generic classes have no constraints on type arguments. Therefore, the following code is equivalent: 1234import System; class Foo<T> {}class Foo<T: *> {} Type arguments with the wildcard constraint have a special toString() method that is always available (regardless of the argument passed in). It is conceivable how this functionality is made available for internal types (e.g. System.Object.toString) and even for external types (JavaScript's Object.prototype.toString). However, there are corner cases where a toString method would not be available (such as ActiveX). In such cases, System.Exceptions.UnsupportedException will be thrown. Multiple Constraints JS++ supports multiple constraints for a generic type parameter using the && syntax: 12345678class Baz {}interface IFoo {}interface IBar {}class GenericClass<T: Baz && IFoo && IBar> {} class Foo : IFoo, IBar, Baz {} new GenericClass<Foo>(); All constraints must be satisifed when using multiple constraints. In the example above, if Foo does not inherit from all of IFoo, IBar, and Baz then it will be an error. Covariant and Contravariant Types Most type relationships in JS++ are expressed as subtype relationships. For example, a Dog might be defined as a subtype of Animal, and all variables declared as Animal will accept instances of Dog. However, this subtyping relationship becomes complicated with generic types and containers. Consider arrays: 12345abstract class Animal {}class Dog : Animal {}class Cat : Animal {} Animal[] animals = [ new Dog, new Cat ]; In the above code, we have an array of type Animal. At index 0, we have a Dog instance. At index 1, we have a Cat instance. We can get elements from the above array by always declaring variables to have type Animal, but this isn't always ideal. Sometimes, we want to narrow the type. Other times, we want to use an ancestor type (superclass). This is when covariance and contravariance are needed. A covariance relationship describes subtyping relationships (e.g. Dog is Animal). A contravariance relationship describes the opposite, a supertype relationship (e.g. Animal is an ancestor of Dog). This allows us to work with arrays and other generic types with more refined types. By extending our array example, this is what we want to ideally achieve: 1234567891011121314151617import System; abstract class Animal {}class Tiger : Animal {} abstract class Pet : Animal {}class Dog : Pet {}class Cat : Pet {} // For covarianceArray<Pet> animals = [ new Dog ];animals.push(new Cat); // Yes, because Cat descends from Pet// animals.push(new Tiger); // ERROR, because Tiger does NOT descend from Pet // For contravarianceAnimal dog = animals[0]; // Yes, because Animal is a supertype of Pet// Tiger tiger = animals[0]; // ERROR, because Tiger is NOT a supertype of Pet A useful mnemonic for determining when to use covariance versus contravariance is "Producer Extends, Consumer Super" (PECS, from Effective Java by Joshua Bloch). As seen above, a producer (System.Array.push) accepts "extends" relationships (subtype); while a consumer (reading from the animals array at index 0) uses a "super" (supertype) relationship. By extending the PECS intuition, we can begin to define well-typed generic interfaces. In JS++, the descend keyword is used for specifying covariance (subtype) and the ascend keyword is used for specifying contravariance. We can extend our previous examples with animals and pets by defining a PetCollection class which act on arrays of pets. Following PECS, we must specify producer/write operations that are covariant (descend) and consumer/read operations that are contravariant (ascend): 1234567891011121314151617181920212223242526272829303132333435363738394041424344import System; abstract class Animal {}class Tiger : Animal {} abstract class Pet : Animal {}class Dog : Pet {}class Cat : Pet {} class PetCollection{ Pet[] data = []; void insert(descend Pet[] pets) { foreach(Pet pet in pets) { this.data.push(pet); } } ascend Pet[] get() { return this.data; }} auto myPets = new PetCollection(); // Write operations (descend, covariance)myPets.insert([ new Dog, new Cat ]);// myPets.insert([ new Tiger ]); // not allowed // Read operations (ascend, contravariance)Pet[] getPets = [];Animal[] getAnimals = [];ascend Pet[] tmp = myPets.get(); // read hereforeach(Pet pet in tmp) { // but we still need to put them back into our "result" arrays getPets.push(pet); getAnimals.push(pet);} // Now we can modify the arrays we read into abovegetPets.push(new Dog);getAnimals.push(new Dog);getAnimals.push(new Tiger);// getPets.push(new Tiger); // ERROR As illustrated in the examples above, type safety is always preserved by using covariant and contravariant generic types. As shown in the code above, we specify variance using use-site variance; in other words, variance is defined exactly where it is used. Declaration-site variance specifies variance for an entire class on its generic type parameters and is not available for JS++. Upcasting/Downcasting Correct upcasting and downcasting with generic types are possible, but variances must be specified. For example, the following code will result in a compiler error: 1234567import System; class Parent {}class Child : Parent {} Array<Parent> obj = new Array<Child>(); // ERROR(Array<Child>) obj; // ERROR Meanwhile, if variance is specified, the code will compile without errors: 1234567import System; class Parent {}class Child : Parent {} Array<descend Parent> obj = new Array<Child>();(Array<Child>) obj; The above example shows downcasting. It's also possible to upcast: 1234567import System; class Parent {}class Child : Parent {} Array<ascend Child> obj = new Array<Child>();(Array<Parent>) obj; Share HTML | BBCode | Direct Link | https://docs.onux.com/en-US/Developers/JavaScript-PP/Language-Guide/generic-programming | 2018-12-10T00:14:14 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.onux.com |
The result of the project's compilation is a .TPC binary file, which contains the p-code and is uploaded onto the target device for execution.
The .TPC binary does not include TiOS. It is expected that the target device already has TiOS preloaded or nothing described below and in the Debugging chapter will work.
There is a way to combine TiOS with the .TPC file.
A project's binary can be compiled for debug or release. This is selected in the drop-down on the Debug toolbar (or set in the Project Settings dialog).
A debug binary doesn't execute automatically. It waits for TIDE to send control commands and make it run, break, etc. This is covered in DEBUGGING.
Release binaries run immediately after the device is powered up (or the new binary is uploaded). All debug functions on release binaries are disabled.
The upload is always performed to the currently selected target, and the selection is made in the Device Explorer.
Hit [F5] or Debug > Run to have your project saved, compiled, uploaded onto the target, and ordered to start executing.
The status bar will show you the upload progress.
Naturally, the upload won't start if the compilation yields error(s). The execution won't happen if the upload fails. (More).
After the successful build and upload, the execution will start both in the debug and release modes, but this will happen for two different reasons. In the debug mode, TIDE will send a "run" command that will instruct the target to start the execution. In the release mode, the execution will start automatically, without any command from TIDE. That's what the release mode is for.
Full rebuilds
"Normal" compilation only processes files that were modified since the last compile.
Use Build > Rebuild All to recompile all the files in your project.
There is also Build > Rebuild All and Upload.
TIDE supports incremental uploads. These cache the previous build on the computer; when you make a modification TIDE makes sure the previous build is indeed what's running on your device now, and then compares the new build with the previous build. It then attempts to upload only what's changed. This really speeds up the upload process. Doing the full rebuild resets the cache. Next upload is always the full upload. | http://docs.tibbo.com/taiko/tide_compiling.htm | 2018-12-10T00:29:47 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.tibbo.com |
Tracks are loaded in the Playlist. In normal track progression, when a song is over, the next one played is the one below the just-played one. The Queue is one way to change the order in which the songs are played.
It is very easy to make a Queue; just select one or more tracks and move in the center of Amarok, to the Pop-Up Dropper (PUD), or right-click and scroll to Queue Track.
As you select or add songs to the Queue each of them takes a number. The number indicates the order in which songs will play. | https://docs.kde.org/trunk4/en/extragear-multimedia/amarok/queue-manager.html | 2016-09-25T01:58:04 | CC-MAIN-2016-40 | 1474738659753.31 | [array(['/trunk4/common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
stats
Synopsis
Provides statistics, grouped optionally by field.
Syntax
Simple: stats (stats-function(field) [as field])+ [by field-list]
Complete: stats [allnum=<bool>] [delim=<string>] ( <stats-agg-term> | <sparkline-agg-term> ) [<by clause>]
Required arguments
-() | list() | max() | median() | min() | mode() | p<in>() | perc<int>() | range() | stdev() | stdevp() | sum() | sumsq() | values() | var() | varp()
- Description: Functions used with the stats command. Each time you invoke the
statscommand, you can use more than one function; however, you can only use one
by clause. For a list of stats functions with descriptions and examples, see "Functions for stats, chart, and timechart".
Sparkline function options
Sparklines are inline charts that appear within table cells in search results to display time-based trends associated with the primary key of each row. Read more about how to "Add sparklines to your search results" in the.
Description
Calculate aggregate statistics over the dataset, similar to SQL aggregation. If called without a
by clause, one row is produced, which represents the aggregation over the entire incoming result set. If called with a by-clause, one row is produced for each distinct value of the by-clause.
Examples
Example 1 examples
Example 1:
Example 2: Return the average for each hour, of any unique field that ends with the string "lay" (for example, delay, xdelay, relay, etc).
... | stats avg(*lay) BY date_hour
Example 3: Remove duplicates of results with the same "host" value and return the total count of the remaining results.
... | stats dc(host)
Example® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
Feedback submitted, thanks! | http://docs.splunk.com/Documentation/Splunk/4.3.1/SearchReference/Stats | 2016-10-21T00:31:35 | CC-MAIN-2016-44 | 1476988717959.91 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Execute Process Task
The Execute Process task runs an application or batch file as part of a SQL Server Integration Services.
Change History | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms141166(v=sql.100) | 2018-05-20T16:26:52 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Tutorial: SQL Server Management Studio Components and Configuration
This Tutorial describes the different window components within SQL Server Management Studio (SSMS) and some basic configuration options for your workspace. In this article, you will learn how about:
- The different components that make up the SSMS environment
- Changing the environmental layout and resetting it to default
- Maximizing the query editor
- Changing the font
- Configuring startup options
- Resetting the configuration back to default
Prerequisites
To complete this Tutorial, you need SQL Server Management Studio.
- Install SQL Server Management Studio.
SQL Server Management Studio Components
This section covers the different window components available in the workspace, and their purpose.
Every window component can be closed by hitting the X in the corner of the title bar and then reopened from the View dropdown in the main menu.
Object Explorer (F8): Object Explorer is a tree view of all the database objects in a server. This can include the databases of the SQL Server Database Engine, Analysis Services, Reporting Services, and Integration Services. Object Explorer includes information for all servers to which it is connected.
Query Window (Ctrl+N): Once you've clicked on New Query, this is the window where you will type in your Transact-SQL (T-SQL) queries. Results of your queries are visible here as well.
Properties (F4): This is visible once the Query Window is open and displays basic properties of the query. For example, it will show the time a query started, the number of rows returned, and connection details.
Template Browser (Ctrl+Alt+T): There are a number of pre-built T-SQL Templates that can be found in the template browser. These templates allow you to perform various functions such as creating or backing up a database.
Object Explorer Details(F7): This is a more granular view of what's visible in the Object Explorer, and allows you to manipulate multiple objects at once. For example, the Object Explorer Details window allows you to select multiple databases simultaneously and either delete them or script them out.
Change the Environmental Layout
This section discusses manipulating the environmental layout, such as moving the various windows around.
- Each window component can be moved around by holding down the title and dragging the window around.
Each window component can be pinned and unpinned by selecting the pushpin icon in the title bar:
Each window component has a drop-down arrow that allows for the window to be manipulated in various ways:
Once you have two or more query windows open, they can be tabbed vertically or horizontally so that both query windows are visible at once. To achieve this, right-click the title of the query and select the desired tabbed option.
This is the Horizontal Tab Group:
This is the Vertical Tab Group:
To merge the tabs back again, right-click the query title again and Move to Next Tab Group or Move to Previous Tab Group:
To restore the default environmental layout, click on the Window Menu > Reset Window Layout:
Maximize Query Editor
The query editor can be maximized to full screen mode.
- Click anywhere within the Query Editor Window.
- Press SHIFT + ALT + ENTER to toggle between full-screen mode and regular mode.
This keyboard shortcut works with any document window.
Change Basic Settings
This section discusses how to modify some basic settings within SSMS. These options are found within the Tools menu option:
The highlighted toolbar can be modified by going to the menu: Tools > Customize:
Change the font
The font can be changed from the menu: Tools > Options > Fonts and Colors:
Change the Startup Options
The startup options determine what your workspace looks like when you first launch SSMS. These can be configured from the menu: Tools > Options > Startup:
Reset Settings to Default
All of these settings can be exported and imported from the menu: Tools > Import and Export Settings
- This is also where you can reset all of your settings to default.
Next steps
The next article will teach you some additional tips and tricks for using SSMS, such as finding your SQL Server error log and your SQL instance name.
Advance to the next article to learn more | https://docs.microsoft.com/en-us/sql/ssms/tutorials/ssms-configuration?view=sql-server-2017 | 2018-05-20T15:57:25 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['media/ssms-configuration/tools.png?view=sql-server-2017',
'Tools Menu'], dtype=object) ] | docs.microsoft.com |
Perl 6 Pod
An easy-to-use markup language
Perl 6 Pod is an easy-to-use markup language. Pod can be used for writing language documentation, for documenting programs and modules, as well as for other types of document composition.
Every Pod document has to begin with
=begin pod and end with
=end pod. Everything between these two delimiters will be processed and used to generate documentation.
=begin podA very simple Perl 6 Pod document=end pod
Block Structure
A Pod document may consist of multiple Pod blocks. There are four ways to define a block (delimited, paragraph, abbreviated, and declarator); the first three yield the same result but the fourth differs. You can use whichever form is most convenient for your particular documentation task.
Delimited blocks
Delimited blocks are bounded by
=begin and
=end markers, both of which are followed by a valid Perl 6 identifier, which is the
typename of the block. Typenames that are entirely lowercase (for example:
=begin head1) or entirely uppercase (for example:
=begin SYNOPSIS) are reserved.
=begin head1Top Level Heading=end head1
Configuration information
After the typename, the rest of the
=begin marker line is treated as configuration information for the block. This information is used in different ways by different types of blocks, but is always specified using Perl6-ish option pairs. That is, any of:
Where '$e1, $e2, ...' are list elements of type String, Int, Number, or Boolean. Lists may have mixed element types. Note that one-element lists are converted to the type of their element (String, Int, Number, or Boolean). Also note that "bigints" can be used if required.
For hashes, '$k1, $k2, ...' are keys of type Str and '$v1, $2, ...' are values of type String, Int, Number, or Boolean.
Strings are delimited by single or double quotes. Whitespace is insignificant outside of strings. Hash keys need not be quote-delimited unless they contain significant whitespace.
= in the first (virtual) column followed by a whitespace character. (NOTE: This feature is not yet implemented. All configuration information currently must be provided on the same line as the
=begin marker line.)
Paragraph blocks
Paragraph blocks begin by a
=for marker and end by the next Pod directive or the first blank line. The
=for marker is followed by the typename of the block.
=for head1Top Level Heading
Abbreviated blocks
Abbreviated blocks begin by an
'=' sign, which is followed immediately by the typename of the block. The block ends at the next Pod directive or the first blank line.
=head1 Top Level Heading»
Block types
Pod offers a wide range of standard block types.
Headings
Headings can be defined using
=headN, where N is greater than zero (e.g.,
=head1,
=head2, …).
=head1 A Top Level Heading=head2 A Second Level Heading=head3 A Third Level Heading
Ordinary paragraphs
An ordinary paragraph consists of text that is to be formatted into a document at the current level of nesting, with whitespace squeezed, lines filled, and any special inline mark-up applied.
Ordinary paragraphs consist of one or more consecutive lines of text, each of which starts with a non-whitespace character. The paragraph is terminated by the first blank line or block directive.
For example:
=head1 This is a heading blockThis is an ordinary paragraph.Its text will be squeezed andshort lines filled. It is terminated bythe first blank line.This is another ordinary paragraph.Its text will also be squeezed andshort lines filled. It is terminated bythe trailing directive on the next line.=head2 This is another heading blockThis is yet another ordinary paragraph,at the first virtual column set by theprevious directive
Ordinary paragraphs do not require an explicit marker or delimiters.
Alternatively, there is also an explicit
=para marker that can be used to explicitly mark a paragraph.
=paraThis is an ordinary paragraph.Its text will be squeezed andshort lines filled.
In addition, the longer
=begin para and
=end para form can be used.
For example:
=begin paraThis is an ordinary paragraph.Its text will be squeezed andshort lines filled.This is still part of the same paragraph,which continues until an...=end para
As demonstrated by the previous example, within a delimited
=begin para and
=end para block, any blank lines are preserved.
Code blocks
Code blocks are used to specify source code, which should be rendered without re-justification, without whitespace-squeezing, and without recognizing any inline formatting codes. Typically these blocks are used to show examples of code, mark-up, or other textual specifications, and are rendered using a fixed-width font.
A code block may be implicitly specified as one or more lines of text, each of which starts with a whitespace character. The implicit code block is then terminated by a blank line.
For example:
This ordinary paragraph introduces a code block:my = 'John Doe';say ;
Code blocks can also be explicitly defined by enclosing them in
=begin code and
=end code
=begin codemy $name = 'John Doe';say $name;=end code
I/O blocks
Pod provides blocks for specifying the input and output of programs.
The
=input block is used to specify pre-formatted keyboard input, which should be rendered without re-justification or squeezing of whitespace.
The
=output block is used to specify pre-formatted terminal or file output, which should also be rendered without re-justification or whitespace-squeezing.
Lists
Unordered Lists
Lists in Pod are specified as a series of
=item blocks.
For example:
The three suspects are:=item Happy=item Sleepy=item Grumpy
The three suspects are:
Happy
Sleepy
Grumpy
Definition Lists
Lists that define terms or commands use
=defn, equivalent to the
DL lists in HTML
=defn HappyWhen you're not blue.=defn blueWhen you're not happy.
Would be rendered this way:
Happy When you're not blue.
Blue When you're not happy.
Which, for the time being, might be a simple HTML paragraph, but might change in the future.
Multi-level Lists
Animal
Vertebrate
Invertebrate
Phase
Solid
Liquid
Gas
Multi-paragraph Lists
Using the delimited form of the
=item block (
=begin item and
=end item), we can specify items that contain multiple paragraphs.
For example:
Let's consider two common proverbs:=begin itemThis is a common myth and an unconscionable slur on the Spanishpeople, the majority of whom are extremely attractive.=end item=begin itemIn deciding whether to become an early riser, it is worthconsidering whether you would actually enjoy annelidsfor breakfast.=end itemAs you can see, folk wisdom is often of dubious value.
Let's consider two common proverbs:
The rain in Spain falls mainly on the plain.
This is a common myth and an unconscionable slur on the Spanish people, the majority of whom are extremely attractive.
The early bird gets the worm.
In deciding whether to become an early riser, it is worth considering whether you would actually enjoy annelids for breakfast.
As you can see, folk wisdom is often of dubious value.
Tables
Pod comments
Pod comments are comments that Pod renderers ignore.
=comment Add more here about the algorithm
For multi-line comments use a delimited block:
=begin commentThis comment ismulti-line.=end comment
Semantic blocks
All uppercase block typenames are reserved for specifying standard documentation, publishing, source components, or meta-information.
=NAME=AUTHOR=VERSION=TITLE=SUBTITLE
Formatting Codes
Formatting codes provide a way to add inline mark-up to a piece of text.
All Pod formatting codes consist of a single capital letter followed immediately by a set of angle brackets.
Formatting codes may nest other formatting codes.
Bold
To format a text in bold enclose it in
B< >
Perl 6 is B<awesome>
Perl 6 is awesome
Italic
To format a text in italic enclose it in
I< >
Perl 6 is I<awesome>
Perl 6 is awesome
Underlined
To underline a text enclose it in
U< >
Perl 6 is U<awesome>
Code
To flag text as Code and treat it verbatim enclose it in
C< >
C<my $var = 1; say $var;>
my $var = 1; say $var;
Links
To create a link enclose it in
L< >
Perl 6 homepage L<>L<Perl 6 homepage|>
Perl 6 homepage
To create a link to a section in the same document:
A comment is text that is never rendered.
To create a comment enclose it in
Z< >
Perl 6 is awesome Z<Of course it is!>
Perl 6 is awesome
Notes
Notes are rendered as footnotes.
To create a note enclose it in
N< >
Perl 6 is multi-paradigmatic N<Supporting Procedural, Object Oriented, and Functional programming>
Keyboard input
To flag text as keyboard input enclose it in
K< >
Enter your name K<John Doe>
Terminal Output
To flag text as terminal output enclose it in
T< >
Hello T<John Doe>
Unicode
To include Unicode code points or HTML5 character references in a Pod document, enclose them in
E< >
E< > can enclose a number, that number is treated as the decimal Unicode value for the desired code point. It can also enclose explicit binary, octal, decimal, or hexadecimal numbers using the Perl 6 notations for explicitly based numbers.
Perl 6 makes considerable use of the E<171> and E<187> characters.Perl 6 makes considerable use of the E<laquo> and E<raquo> characters.Perl 6 makes considerable use of the E<0b10101011> and E<0b10111011> characters.Perl 6 makes considerable use of the E<0o253> and E<0o273> characters.Perl 6 makes considerable use of the E<0d171> and E<0d187> characters.Perl 6 makes considerable use of the E<0xAB> and E<0xBB> characters.
Perl 6 makes considerable use of the « and » characters.
Rendering Pod
HTML
In order to generate HTML from Pod, you need the
Pod::To::HTML module.
If it is not already installed, install it by running the following command:
zef install Pod::To::HTML
Using the terminal run the following command:
perl6 --doc=HTML input.pod6 > output.html
Markdown
In order to generate Markdown from Pod, you need the
Pod::To::Markdown module.
If it is not already installed, install it by running the following command:
zef install Pod::To::Markdown
Using the terminal run the following command:
perl6 --doc=Markdown input.pod6 > output.md
Text
In order to generate Text from Pod, you can use the default
Pod::To::Text module.
Using the terminal, run the following command:
perl6 --doc=Text input.pod6 > output.txt
You can omit the
=Text portion:
perl6 --doc input.pod6 > output.txt
You can even embed Pod directly in your program and add the traditional Unix command line "--man" option to your program with a multi MAIN subroutine like this:
multi MAIN(Bool :)
Now
myprogram --man will output your Pod rendered as a man page.
Accessing Pod
In order to access Pod documentation from within a Perl 6 program it is required to use the special
= twigil, as documented in the variables section.
The
= twigil provides the introspection over the Pod structure, providing a Pod::Block tree root from which it is possible to access the whole structure of the Pod document.
As an example, the following piece of code introspects its own Pod documentation:
=begin pod=head1 This is an head1 titleThis is a paragraph.=head2 SubsectionHere some text for the subsection.=end podfor ->
producing the following output:
Pod::Heading.new(level => 1, config => , contents => [Pod::Block::Para.new(config => , contents => ["This is an head1 title"])]);Pod::Block::Para.new(config => , contents => ["This is a paragraph."]);Pod::Heading.new(level => 2, config => , contents => [Pod::Block::Para.new(config => , contents => ["Subsection"])]);Pod::Block::Para.new(config => , contents => ["Here some text for the subsection."]); | https://docs.perl6.org/language/pod | 2018-05-20T15:26:38 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.perl6.org |
About this Website
This website provides online documentation for all Stonebranch products.
The website is organized into spaces, each of which contains a set of information on specific subject matter.
Current documentation for the Universal Automation Center contains the following spaces:
Similar spaces are provided for earlier versions of documentation.
Accessing the Website
You can access this website three different ways:
- In your browser address bar, enter docs.stonebranch.com/confluence.
- On the Stonebranch website, select documentation from the support menu.
- From the Universal Controller user interface, click
(the help button) in the upper right corner of any screen.
Accessing this website from your browser address bar or the Stonebranch website takes you to the home page for Universal Automation Center Documentation.
Accessing this website from the Universal Controller user interface takes you to the home page for Universal Controller 6.4.x.
Navigating the Website
Each page on the website is organized into two frames: a Navigation frame on the left and a Documentation frame on the right.
The upper half of the Navigation frame is the same on every page. It contains links that form a master table of contents; for the current Universal Automation Center documentation:
- Home page for the entire website
- Home page for Universal Automation Center
- Home page for Universal Controller 6.4.x
- Home page for Universal Agent 6.3.x - All Components
- Home page for Universal Agent 6.3.x - Universal Command
- Home page for Universal Agent 6.3.x - Universal Data Mover
- Home page for Support, Maintenance Lists, and Release Information
- Documentation Library
- Documentation - All Versions
- Documentation Help (this page)
- Glossary
The lower half of the Navigation frame contains a table of contents for the current documentation space, preceded by a search box for searching that space (see Searching the Website, below).
Each home page contains links to landing pages based on feature or function, such as Installation or Security.
Searching the Website
Search Locations
To search the entire documentation website, enter the search text in the Search All Documentation box at the top right of any page.
To search only the current space, enter search text in the Navigation frame Search box. Text above the search box identifies the current documentation space to be searched.
Both searches provide a list of pages containing the best matches for the data being searched. Click a page link to view the page.
To search the contents of a page, use the web browser Find action (typically Ctrl-F) to locate a specific instance of the search data.
Search Criteria
To find pages containing an exact match of the search text, enclose the text in double quotation marks ( " ).
To find pages that contain at least one instance of each word in the search text (including exact matches), do not enclose the text in double quotation marks.
You can use * as a wildcard in your search text, as long as it is not the first search character.
PDFs
PDFs of website information are provided on the All Universal Automation Center PDFs page. Each PDF contains interactive bookmarks and Table of Contents.
Creating an Ad Hoc PDF
If you want to create a PDF of specific information from your current location in the website:
- From the Browse drop-down list, click Advanced.
- From the Export menu, click PDF Export.
- Select any or all of the pages to be included in the PDF.
- Click Export.
- When the PDF export is complete, click the Download here link.
Creating a PDF of the Current Page
If you want to create a PDF of the current page:
- From the Tools drop-down list, click Export to PDF.
- Open or Save the PDF.
Creating a ZIP File of All PDFs
You can create a ZIP file of all PDFs on the PDFs page in any Documentation Library.
- Access the Documentation Library page.
- Click the All Universal Automation Center PDFs link.
- On the All Universal Automation Center PDFs page, select Tools > Attachments.
- On the Attached Files, page, click Download All to create a ZIP file of all PDFs attached to the All Universal Automation Center PDFs page.
Feedback
Your feedback is very helpful in improving the documentation and its usability.
Click the Feedback link at the bottom of any page to provide feedback about this website.
If you are providing feedback for a specific page, please cut & paste the page URL into your response.
Customer Support
Stonebranch, Inc. provides customer support, via telephone and e-mail, for all Universal Automation Center components.
All Locations
[email protected]
Customer support contact via e-mail also can be made via the Stonebranch website.
TELEPHONE
Customer support via telephone is available 24 hours per day, 7 days per week.
North America
(+1) 678 366-7887, extension 6
(+1) 877 366-7887, extension 6 [toll-free]
Europe
+49 (0) 700 5566 7887
Stonebranch Website Policy
This website contains proprietary information that is protected by copyright. All rights reserved. No part of this publication may be reproduced, transmitted or translated in any form or language or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission, in writing, from the publisher. Requests for permission to make copies of any part of this website should be mailed to:
Stonebranch, Inc.
4550 North Point Parkway, Suite 200
Alpharetta, GA 30022 USA
Tel: (678) 366-7887
Stonebranch, Inc.® makes no warranty, express or implied, of any kind whatsoever, including any warranty of merchantability or fitness for a particular purpose or use.
The information on this website is subject to change without notice.
Stonebranch shall not be liable for any errors contained herein or for incidental or consequential damages in connection with the furnishing, performance or use of this website.
All products mentioned herein are or may be trademarks of their respective owners.
Vendor References
References are made throughout this website to a variety of vendor operating systems. We attempt to use the most current product names when referencing vendor software.
The following names are used in this website:
- z/OS is synonymous with IBM z/OS and IBM OS/390 line of operating systems.
- Windows is synonymous with Microsoft's Windows XP SP3, Windows Server 2003 SP1 and higher, Windows Vista, Windows 7, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 lines of operating systems. Any differences between the different systems will be noted.
- UNIX is synonymous with operating systems based on AT&T and BSD origins and the Linux operating system.
- IBM i is synonymous with IBM i/5, IBM OS/400, and OS/400 operating systems.
- IBM System i is synonymous with IBM i Power Systems, IBM AS/400, and AS/400 systems.
These names do not imply software support in any manner. | http://docs.stonebranch.com/confluence/display/SMLRI/Documentation+Help | 2018-05-20T15:22:07 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.stonebranch.com |
- Onboarding and Managing a service
- Customizing the list order of cloud service categories
- Customizing the list order of cloud service instances
- Uploading a logo for a service instance. | https://docs.citrix.com/de-de/cloudportal-business-manager/2-4/cpbm-admin-wrapper-con/cpbm-cloudservices-con/cpbm-set-display-order-services-tsk.html | 2018-05-20T15:55:21 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.citrix.com |
NOTICE: Our WHMCS Addons are discontinued and not supported anymore.
Most of the addons are now released for free in github -
You can download them from GitHub and conribute your changes ! :)
VMM :: Adding new server :: Proxmox
Step one: Configuring the physical Proxmox server
This guide will help you configure proxmox server, and configure a virtual machine inside that server.
Navigate to VMM Addon – WHMCS top menu -> Addons -> Virtual Machine Manager.
Inside Virtual Machine Manager, choose “Plugins” -> “Manage plugins“, in the Proxmox row, choose “Manage Hosts” –
Specify your proxmox host physical server login details.
Standard systems use port 8006 and “Linux PAM standard authentication” as authenticaton method.
* If your WHMCS server has a firewall, make sure to whiltelist your proxmox host ip address for in/out communication.
The physical host is the server that holds the virtual machines (multiple machines), and not the end user’s virutal machine (this will be configured later..).
Step two: Adding virtual machines for your Proxmox Proxmox, “proxmox”. | https://docs.jetapps.com/proxmox-add-server | 2018-05-20T15:43:53 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.jetapps.com |
Log Message workflow activity The Log Message activity writes a message to the workflow log. Use this activity to add entries to the workflow's log for debugging or tracing purposes. Input variables Input variables determine the initial behavior of the activity. Table 1. Log Message activity input variables Field Description Message The message to log. This variable can be a string or a JavaScript expression that evaluates to a string. | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/workflow_activities/reference/r_LogMessageActivity.html | 2018-05-20T15:37:52 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.servicenow.com |
Porter
This addon shows the slips’ items highlighting those that are stored.
Commands
s### porter
porter [<slip> [<page>]] [owned]
Shows the specified slip or slip’s page. if “owned” is specified, only the owned items will be shown. if no parameter is specified, all the owned slips will be shown.
- //slip//: the number of the slip you want to show.
- //page//: the page of the slip you want to show.
- owned: shows only the items you own.
Bug Tracker
Please report any bug or suggetion on the bug tracker @
Change Log
v1.20130529
- fix: Fixed parameters validation
- change: Aligned to Windower’s addon development guidelines
v1.20130525.1
- add: Added the “owned” param. if present, only the owned items will be shown.
v1.20130525
- change: If no parameter is specified all the owned slips will be shown.
v1.20130524
- First release.
Source
The latest source and information for this addon can be found on GitHub. | http://docs.windower.net/addons/porter/ | 2018-05-20T16:07:20 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.windower.net |
Start a Group Chat
To send the same messages to more than one person
Access chat from the left side of the Amazon Chime desktop app or the Messages and Rooms tab in the mobile app.
Choose Group Message and add up to 50 users to the To: field. everyone in the group. | https://docs.aws.amazon.com/chime/latest/ug/group-chat.html | 2018-05-20T15:18:34 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.aws.amazon.com |
InfoReplacer
Replaces outgoing text prefixed by % with respective information. For a complete list of replacements, view reps.lua.
Commands:
Show replacements
//inforeplacer list
Shows all (custom) replacement names.
Set custom replacement
//inforeplacer set <name> <value>
Defines a custom replacement variable as the provided value.
Set custom code replacement
//inforeplacer seteval <name> <code>
Defines a custom replacement variable as the provided Lua expression. This expression will be evaluated whenever it appears in the chat, so this can be used to print dynamic values.
Remove custom replacement
//inforeplacer unset <name>
Deletes a previously defined custom replacement variable.
Source
The latest source and information for this addon can be found on GitHub. | http://docs.windower.net/addons/inforeplacer/ | 2018-05-20T16:08:21 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.windower.net |
AdomdConnection.ConnectionString Property
Gets or sets the string that the AdomdConnection uses to open an analytical data source.
Namespace: Microsoft.AnalysisServices.AdomdClient
Assembly: Microsoft.AnalysisServices.AdomdClient (in Microsoft.AnalysisServices.AdomdClient.dll)
Syntax
'Declaration Public Property ConnectionString As String Get Set 'Usage Dim instance As AdomdConnection Dim value As String value = instance.ConnectionString instance.ConnectionString = value
public string ConnectionString { get; set; }
public: virtual property String^ ConnectionString { String^ get () sealed; void set (String^ value) sealed; }
abstract ConnectionString : string with get, set override ConnectionString : string with get, set
final function get ConnectionString () : String final function set ConnectionString (value : String)
Property Value
Type: System.String
A string that contains the connection string that is used by the AdomdConnection.
Implements
IDbConnection.ConnectionString
Remarks.
Note
Applications should use caution when constructing a connection string based on user input (for example, when retrieving user ID and password information from a dialog box, and appending this information to the connection string). The application should ensure that a user cannot embed extra connection string parameters in these values (for example, a user that is trying to connect to a different database might enter a password as validpassword;database=somedb)..
Note
Properties that are Initialization Properties cannot be changed after a connection has been opened. The set of available initialization properties can be queried through IDBProperties::GetPropertyInfo..
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms133043(v=sql.105) | 2018-05-20T16:27:17 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
MutableVector::slice
Returns a subset of the current
MutableVector starting from a given key
up to, but not including, the element at the provided length from the
starting key
public function slice( int $start, int $len, ): MutableVector<Tv>;
$start is 0-based. $len is 1-based. So
slice(0, 2) would return the
elements at key 0 and 1.
The returned
MutableVector will always be a proper subset of this
MutableVector.
Parameters
int $start- The starting key of this Vector to begin the returned
MutableVector.
int $len- The length of the returned
MutableVector.
Return Values
MutableVector<Tv>- A
MutableVectorthat is a proper subset of the current
MutableVectorstarting at
$startup to but not including the element
$start + $len. | https://docs.hhvm.com/hack/reference/interface/MutableVector/slice/ | 2018-05-20T15:31:49 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.hhvm.com |
Weight by Chi Squared Statistic
(RapidMiner Studio Core)
SynopsisThis operator calculates the relevance of the attributes by computing for each attribute of the input ExampleSet the value of the chi-squared statistic with respect to the class attribute.
Description
The Weight by Chi Squared Statistic operator calculates the weight of attributes with respect to the class attribute by using the chi-squared statistic. The higher the weight of an attribute, the more relevant it is considered. Please note that the chi-squared statistic can only be calculated for nominal labels. Thus this operator can be applied only on ExampleSets with nominal label.
The chi-square statistic is a nonparametric statistical technique used to determine if a distribution of observed frequencies differs from the theoretical expected frequencies. Chi-square statistics use nominal data, thus instead of using means and variances, this test uses frequencies. The value of the chi-square statistic is given by
X2 = Sigma [ (O-E)2/. available only when the sort weights parameter is set to true. This parameter specifies the sorting order of the attributes according to their weights. Range: selection
- number_of_binsThis parameter specifies the number of bins used for discretization of numerical attributes before the chi-squared test can be performed. Range: integer
Tutorial Processes
Calculating the weights of the attributes of the Golf data set
The 'Golf' data set is loaded using the Retrieve operator. The Weight by Chi Squared Statistic. | https://docs.rapidminer.com/latest/studio/operators/modeling/feature_weights/weight_by_chi_squared_statistic.html | 2018-05-20T15:31:55 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.rapidminer.com |
Blist
Author: Ragnarok.Ikonic
More detailed blist with tiered display options. Allows for blist to be active on any or all of several chat types.
Commands
//Blist and
//bl are both valid commands.
bl help
Lists this menu.
status
bl status
Shows current configuration.
list
bl list
Displays blacklist.
chatmode toggles
bl useblist|linkshell|party|tell|emote|say|shout|bazaar|examine
Toggles using Blist for said chat mode.
mutedcolor
bl mutedcolor #
Sets color for muted communication. Valid values 1-255.
add/update
bl add|update name # hidetype reason
Adds to or updates a user on your blist.
- name = name of person you want to blist
- # = number of days to blist said person; 0 = forever
- hidetype = how blacklisted you want said person to be; valid options: hard, soft, muted
- hard = full blist, nothing gets through
- soft = message saying conversation from name was blocked
- muted = message comes through, but in a different color
- reason = reason why you are adding said person to blist
delete/remove
bl delete|remove name
Removes a user from your blist.
qa
bl qa name [reason]
Adds a user to your blist w/o requiring extra details (reason is optional).
Source
The latest source and information for this addon can be found on GitHub. | http://docs.windower.net/addons/blist/ | 2018-05-20T16:05:16 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.windower.net |
2. The Attribute Protocol (ATT)¶
2.1. The Bluetooth Low Energy Protocol Stack¶
Bluetooth Low Energy is a protocol that is built on a client/server architecture. Let’s suppose that we have connected our smartphone to a fit tracker device. The device periodically measures our heart rate and needs to pass on these measurements to our smartphone. In this example, our smartphone acts as the client who wants to access this information, and the fit tracker as the server who acquires the measurements and waits to deliver them to the appropriate end.
In networking terminology, the server holds resources, like a heart rate measurement or the battery level. A client requests these resources from the server, using some predefined operations called services, and, if the request is supported, it responds with a specified format.
The figure below depicts the BLE Protocol stack:
In this tutorial we will be mostly concerned with the protocols highlighted in light blue: the Attribute Protocol (ATT) and the Generic Attribute Profile (GATT). These protocols designate how data are represented in a server’s database and the respective actions that a client has to perform to access these data. We will cover some prerequisite theory of the ATT and GATT protocols and we’ll proceed to building our custom GATT profile using Dialog Semiconductor’s SDK 6.0.12.
2.2. The Attribute Protocol (ATT)¶
2.2.1. Attributes¶
As we discussed in the previous section, the server holds resources to which a client needs to have access. These data are stored as attributes on the BLE server.
An attribute is a data representation format which is composed of four fields:
The attribute type, defined by a UUID.
The attribute handle, which is an unsigned number unique for the attribute.
The attribute permissions, which control if the client can read or modify a resource.
The attribute value.
The attribute type specifies what this attribute represents. This is accomplished by the use of a Universally Unique Identifier (or UUID for short). A UUID is a 128-bit value which someone can assign to an attribute without registering it to a central authority. The probability that two different parties will assign the same UUID is extremely low (it’s 1/2128), and for this reason a UUID is considered unique. Since a lot of the functionality that these devices provide is common, a range of UUID values has been reserved for predefined values, each of which exposes a set of action and data for common use cases. To aim in reducing the amount of data transferred, these values have a length of 16-bit or 32-bit and the actual UUID is computed by the use of the Bluetooth Base UUID and a simple arithmetic operation.
The attribute handle is a non-zero value and is used to reference the attribute. All attributes of a BLE server are stored in its database by increasing attribute handle value. It’s not mandatory for a successive attribute to have the next integer handle value. Gaps are permitted between attribute handle values, but the handle values must be in increasing order.
Attribute permissions specify if the resource can be read and/or written, as well as the security level that is required for this. Different security combinations are allowed. For example, an attribute may require no permissions for reading, but the client may have to authenticate to be able to modify the resource.
Attribute values may either be of fixed length or of variable length. For variable length attribute values, only one attribute value is allowed to be sent in a PDU. If the value is too long, it may be split across multiple PDUs.
There is a special type of attribute that is not allowed to be read, but could be written, notified, or indicated (we will discuss later the last two operations). These are called Control Point Attributes, as they are mainly used for application control rather than passing data between the devices.
2.2.2. Attribute methods¶
The ATT protocol also defines methods by which attributes can be read or written. The methods supported are six and consequentially they define six Protocol Data Units (PDU). A PDU, as regards the ATT protocol, is the packet that will be forwarded to (or received from) the lower layer, namely the Logical Link Control and Adaptation Protocol (L2CAP) layer, and will then be encapsulated to be sent over the physical link (or respectively be sent to the upper layers). These six methods and their PDU types are:
Commands: Sent to a server by a client and do not invoke a response
Requests: Sent to a server by a client and invoke a response
Responses: Sent to a client by a server when a request is received.
Notifications: Sent to a client by a server without invoking a response. They are sent without the client requesting them.
Indications: Sent to a client by a server and they invoke a response. They are sent without the client requesting them.
Confirmations: Sent to a server by a client as an acknowledgment to an indication.
Above you can find the format of the ATT Protocol PDU. The attribute opcode field provides information about the method performed, as well as a flag that indicates if the PDU is a command, and a flag used when authentication is required. If no authentication is required, then the authentication signature field is left out.
If all goes well and the packet is received as expected, the protocol dictates the action that the other end has to take on success. In case of error, there is also provision for an error response which indicates what was the source of the error. These exchange flows are summarized in the table below. You can find more information about them in the Bluetooth Core Specification, Vol. 3, Part F, Section 3, Chapter 3.4.
2.3. In a nutshell¶
What is important to remember is that the ATT protocol is concerned with the representation of data (attributes) in a BLE server database and defines the transaction activities on them, either when they are delivered successfully or not. This provides a basis for packet fragmentation and encapsulation for the lower stack protocols, and in the same time the building blocks that will be used by the GATT protocol to define a higher level of abstraction for data access. | http://lpccs-docs.dialog-semiconductor.com/tutorial-custom-profile-DA145xx/att.html | 2020-09-18T21:09:17 | CC-MAIN-2020-40 | 1600400188841.7 | [] | lpccs-docs.dialog-semiconductor.com |
Stripe token missing at checkout
This problem can occur if your browser cache data is conflicting with our checkout page.
Usually you can fix this by refreshing your browser cache, then going back to choose your plan on our product page and checkout.
If clearing your cache and restarting your browser does not solve the issue, you might want to try with another browser.
If this is not working on any browser, please contact us, we'll help you solve the problem. | https://docs.presscustomizr.com/article/286-stripe-token-missing-at-checkout | 2020-09-18T20:19:03 | CC-MAIN-2020-40 | 1600400188841.7 | [] | docs.presscustomizr.com |
.
This page has no comments. | https://docs.trifacta.com/display/SS/String+Data+Type?reload=true | 2020-09-18T20:27:26 | CC-MAIN-2020-40 | 1600400188841.7 | [] | docs.trifacta.com |
Loyalty programs in ERPLY are points-based systems which reward points to customers based on pre-tax dollars spent at your store. Loyalty programs can be used to reward your most loyal customers with discounts or free items, as well as to encourage customer tracking for all of your customers, to let you see who your customers are and what they like.
Customers can be assigned “Customer loyalty” codes, which can be printed on bar-coded Loyalty cards for easy scanning. This encourages your customers to participate in customer tracking while making it easy for your employees to make sure everyone gets the right credit for the purchase.
Loyalty codes are the numbers assigned to your customers that identify them and match them to their customer records in the ERPLY system. Customer Loyalty codes can be used to search for your customers at POS or in the backoffice, and turning them into barcodes that can be scanned at the POS is a great way to reduce employee error and customer mismatch.
Click on the + icon to open a new customer card.
Enter their information including the card numbe and hit SAVE.
To edit an existing cstomer, search for the customer in the search box. Once pulled out, click on their card icon.
Click on the edit button...
Now, enter the loyalty code into the “Card Number” field, then hit Save
Select FIND CUSTOMER to bring up your customer list. Select the customer from the list.
The customer selected is now assigned to this sale. Next to their name is the amount of loyalty points he/she has.
Once a customer is selected, the EDIT CUSTOMER button will be clickable. It is grayed out and unusable on the default POS Customer.
Select EDIT CUSTOMER:
The Customer card will open, and here you can edit information on the customer. The loyalty code can be added in the “Client Card” field.
Once you are done editing, click SAVE. You should receive the “Customer Successfully Changed” screen.
Go to CUSTOMERS and click on All Customers:
This will bring up your entire customer list, go ahead and select a customer. This will open up the customer card. Enter the loyalty code in the “Customer loyalty card code” field, then hit SAVE.
Loyalty card can be created and customized with Actual Reports to your liking. First, you need to make a handful of generic loyalty cards in Actual Reports, with codes included generated by Excel. As people sign up for your loyalty program, scan the barcode into the right field on the POS customer edit tool.
Go to INVENTORY > Product List, then select “Print Labels with Actual Reports”
Select “Design a template with Actual Reports”
This Loyalty card template was designed with Actual Reports, it can generate barcodes out of the loyalty number. Loyalty card can consist of company logo, company address, company email, terms and conditions, and more.
Example:
A loyalty program has two components to it: how to earn the points, and how to spend the points. In ERPLY, points are earned on every purchase on the pre-tax sum, based on settings in SETTINGS > Configuration, or product groups. Specific items or Non-stock Items and Services may be excluded from accruing loyalty points as well.
This section will go over defining how Loyalty Points are earned in three different ways:
Open SETTINGS and then Configuration.
Under “Invoices and sales”, find field “for all purchase, customer collects:”
Define “x” reward points per $1. This may be set in whole numbers or in decimals (if you must spend 10 dollars to earn one point, then .1 should be used). When awarding fractional points per dollar, everything after the decimal gets drop, so the customer receives the whole point per sale.
Reward points are valid for: define the # of months this loyalty program will be effective. This expiration rate is used for all points accrued, no matter where they are set. If blank, then the rewards points do not expire
Open INVENTORY > Product and Service Groups
Right click on the Product or Service Group you’d like to change, and select ‘edit’
Fill in the number of Loyalty Points that a purchase from this Group should award. Putting a zero here will cause this whole group (but not subgroups) to not award Loyalty points.
Go to INVENTORY and click on Product Catalog then select the item to open the Product card.
On the product card there is a checkbox for “This item does not grant customer reward points”
Select that box, and Save.
This item, when sold, will no longer grant rewards points.
Spending Loyalty Points in ERPLY is accomplished through our Sales Promotions module, and can vary based on which POS version you are using. Differences will be noted in this guide, but if you have questions feel free to email us at [email protected] or call us.
Open RETAIL CHAINS and go to Promotions then click the + sign to create a new promotion:
Give a meaningfull to the new promotion, speficy the start date, and define the rules — SAVE
In the example above, the customer must redeem 500 loyalty points to get $10 off purchase. If ‘end date’ is blank, then this promotion will never expire.
Open RETAIL CHAIN and go to Coupon Rules then click on the ‘New’ button to create a new coupon.
Give your coupon a name, and indicate that it should be printed automatically from POS. Use the dropdown to indicate that it should print after the requisite number of rewards points, then indicate the number of rewards points below. (TouchPOS will only deduct points on coupon redemption, not on printing! That is setup in the promotion setup, above.)
Select the promotion you’d like this coupon to apply to the sale (the one you created above), and how much it costs to print this coupon.
Add in the description of the promotion and any terms you may have.
Indicate a issued from date, and an end date if you want one.
SAVE this screen.
Loyalty Program is now set to be used.
Once you have set up how the points accrue to your customer, and what you would like to give your customers for their loyalty, you are ready to begin issuing and redeeming loyalty points.
Loyalty point balances can be checked and adjusted on the customer card for each customer. test.js | http://docs-eng.nimi24.com/the-back-office/retail-chain/customers-promotions-and-loyalty-points | 2020-09-18T20:08:57 | CC-MAIN-2020-40 | 1600400188841.7 | [] | docs-eng.nimi24.com |
Logging AWS Security Hub API calls with AWS CloudTrail
AWS Security Hub is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Security Hub. CloudTrail captures API calls for Security Hub as events. The captured calls include calls from the Security Hub console and code calls to the Security Hub API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Security Hub. If you don't configure a trail, you can still view the most recent events on the CloudTrail console in Event history. Using the information that CloudTrail collects, you can determine the request that was made to Security Hub, the IP address that the request was made from, who made the request, when it was made, and additional details.
To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User Guide.
Security Hub information in CloudTrail
CloudTrail is enabled on your AWS account when you create the account. When supported event activity occurs in Security Hub, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your account. For more information, see Viewing events with CloudTrail event history.
For an ongoing record of events in your account, including events for Security Hub, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail:
Security Hub supports logging all of the Security Hub API actions as events in CloudTrail logs. To view a list of Security Hub operations, see the Security Hub API Reference.
When activity for the following actions is logged to CloudTrail, the value for
responseElements is set to
null. This ensures that
sensitive information isn't included in CloudTrail logs.
BatchImportFindings
GetFindings
GetInsights
GetMembers
UpdateFindings: Security HubInsight action. In this example, an insight called
Test
Insight is created. The
ResourceId attribute is specified as the
Group by aggregator, and no optional filters for this insight
are specified. For more information about insights, see Insights in AWS Security Hub.
{ "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "AIDAJK6U5DS22IAVUI7BW", "arn": "arn:aws:iam::012345678901:user/TestUser", "accountId": "012345678901", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "userName": "TestUser" }, "eventTime": "2018-11-25T01:02:18Z", "eventSource": "securityhub.amazonaws.com", "eventName": "CreateInsight", "awsRegion": "us-west-2", "sourceIPAddress": "205.251.233.179", "userAgent": "aws-cli/1.11.76 Python/2.7.10 Darwin/17.7.0 botocore/1.5.39", "requestParameters": { "Filters": {}, "ResultField": "ResourceId", "Name": "Test Insight" }, "responseElements": { "InsightArn": "arn:aws:securityhub:us-west-2:0123456789010:insight/custom/f4c4890b-ac6b-4c26-95f9-e62cc46f3055" }, "requestID": "c0fffccd-f04d-11e8-93fc-ddcd14710066", "eventID": "3dabcebf-35b0-443f-a1a2-26e186ce23bf", "readOnly": false, "eventType": "AwsApiCall", "recipientAccountId": "012345678901" } | https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-ct.html | 2020-09-18T22:00:31 | CC-MAIN-2020-40 | 1600400188841.7 | [] | docs.aws.amazon.com |
For the table below, all metrics that begin with
elasticsearch.indices.* are duplicated for each index being monitored, with the * replaced by the index name (your indices will vary based on your implementation). All metrics that start with
elasticsearch.thread_pool.* are duplicated for each thread pool, with the * replaced by the thread pool name. The various thread pools are: | https://docs.metricly.com/integrations/elasticsearch/elastisearch-all-metrics/ | 2020-09-18T19:33:01 | CC-MAIN-2020-40 | 1600400188841.7 | [] | docs.metricly.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.