content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
cp_mgmt_global_assignment – Manages global-assignment objects on Check Point over Web Services API¶
New in version 2.9.
Synopsis¶
- Manages global-assignment objects on Check Point devices including creating, updating and removing objects.
- All operations are performed over Web Services API.
Examples¶
- name: add-global-assignment cp_mgmt_global_assignment: dependent_domain: domain2 global_access_policy: standard global_domain: Global global_threat_prevention_policy: standard manage_protection_actions: true state: present - name: set-global-assignment cp_mgmt_global_assignment: dependent_domain: domain1 global_domain: Global2 global_threat_prevention_policy: '' manage_protection_actions: false state: present - name: delete-global-assignment cp_mgmt_global_assignment: dependent_domain: domain1 global_domain: Global2 state: absent
Return Values¶
Common return values are documented here, the following are the fields unique to this module:
Status¶
- This module is not guaranteed to have a backwards compatible interface. [preview]
- This module is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/latest/modules/cp_mgmt_global_assignment_module.html | 2020-01-18T00:04:18 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ansible.com |
- Security Features and Setup >
- Restrict MongoDB Support Access to Atlas Backend Infrastructure
Restrict MongoDB Support Access to Atlas Backend Infrastructure¶
On this page
As an organization owner, you can set up your Atlas organization so that MongoDB Production Support Employees, including Technical Service Engineers, can only access your production servers with your explicit permission. If a support issue arises and you want to grant access to your servers for MongoDB support staff, you can grant a 24-hour bypass at the cluster level.
Important
Restricting infrastructure access for MongoDB Production Support Employees may increase support issue response and resolution time and negatively impact cluster availability.
Restrict Access at the Organization Level¶
You must be an organization owner to adjust this setting.
Grant a 24-hour Bypass to Allow Access for Support Staff¶
If a support issue arises and you want to allow MongoDB support staff limited-time access to a cluster within your organization, you can do so with the following procedure. | https://docs.atlas.mongodb.com/security-restrict-support-access/ | 2020-01-18T01:50:05 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.atlas.mongodb.com |
Control
Channel
Control Trigger Channel
Control Trigger Channel
Control Trigger Channel
Class
Trigger
Definition
Enables real time notifications to be received in the background for objects that establish a TCP connection and wish to be notified of incoming traffic.
Call BackgroundExecutionManager.RequestAccessAsync before using ControlChannelTrigger.
Note
This class is not supported on Windows Phone.
public : sealed class ControlChannelTrigger : IClosable
struct winrt::Windows::Networking::Sockets::ControlChannelTrigger : IClosable
public sealed class ControlChannelTrigger : IDisposable
Public NotInheritable Class ControlChannelTrigger Implements IDisposable
- Attributes
-
Windows 10 requirements
Remarks
The ControlChannelTrigger class and related interfaces are used to enable your app to use the network when your app is not the foreground app. A Universal Windows app. Use control channel triggers when your app needs to maintain a network connection even if it is in the background.
While the ControlChannelTrigger class can be used with DatagramSocket, StreamSocket, or StreamSocketListener, Windows 10 provides an improved mechanism for apps that use those classes and want to maintain connections while in the background. See Network communications in the background for details about SocketActivityTrigger and the socket broker.
The ControlChannelTrigger class is recommended to be used by instances of the following that establish a TCP connection:
- The HttpClient class in the Windows.Web.Http namespace.
-)
There are several types of keep-alive intervals that may relate to network apps. At the lowest level, an app can set a TCP keep-alive option to send TCP keep-alive packets between a client app and a server to maintain an established TCP connection that is not being used. as an argument to the ControlChannelTrigger constructor..
Version history
Constructors
Properties
Methods
See also
Feedback | https://docs.microsoft.com/en-us/uwp/api/windows.networking.sockets.controlchanneltrigger | 2020-01-18T01:30:30 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
vCenter Enhanced Linked Mode allows you to log in to any single instance of vCenter Server Appliance or vCenter Server and view and manage the inventories of all the vCenter Server systems in the group.
-.
You can create a vCenter Enhanced Linked Mode group during the deployment of vCenter Server Appliance or installation of vCenter Server.
After deployment, you can join a vCenter Enhanced Linked Mode group by moving, or repointing, a vCenter Server with an embedded Platform Services Controller from one vSphere domain to another exisitng domain. See Repoint vCenter Server with Embedded Platform Services Controller to Another vCenter Server with Embedded Platform Services Controller in a Different Domain for information on repointing an embedded vCenter Server node. | https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vcenter.install.doc/GUID-4394EA1C-0800-4A6A-ADBF-D35C41868C53.html | 2020-01-18T01:15:39 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.vmware.com |
- value
-
The string to compare.The string to compare.. | http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.StartsWith(System.String%2CSystem.Boolean%2CSystem.Globalization.CultureInfo) | 2020-01-17T23:51:40 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.go-mono.com |
fortios_vpn_l2tp – Configure L2TP in Fortinet’s FortiOS and FortiGate¶
New in version 2.9.
Synopsis¶
- This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the user to set and modify vpn feature and l2tp2TP. fortios_vpn_l2tp: host: "{{ host }}" username: "{{ username }}" password: "{{ password }}" vdom: "{{ vdom }}" https: "False" vpn_l2tp: eip: "<your_own_value>" enforce_ipsec: "enable"] | https://docs.ansible.com/ansible/latest/modules/fortios_vpn_l2tp_module.html | 2020-01-18T00:06:40 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ansible.com |
icx_logging – Manage logging on Ruckus ICX 7000 series switches¶
New in version 2.9.
Synopsis¶
- This module provides declarative management of logging on Ruckus ICX 7000 series switches.
Notes¶
Note
- Tested against ICX 10.1.
- For information on using ICX platform, see the ICX OS Platform Options guide.
Examples¶
- name: Configure host logging. icx_logging: dest: host name: 172.16.0.1 udp_port: 5555 - name: Remove host logging configuration. icx_logging: dest: host name: 172.16.0.1 udp_port: 5555 state: absent - name: Disables the real-time display of syslog messages. icx_logging: dest: console state: absent - name: Enables local syslog logging. icx_logging: dest : on state: present - name: configure buffer level. icx_logging: dest: buffered level: critical - name: Configure logging using aggregate icx_logging: aggregate: - { dest: buffered, level: ['notifications','errors'] } - name: remove logging using aggregate icx_logging: aggregate: - { dest: console } - { dest: host, name: 172.16.0.1, udp_port: 5555 } state: absent
Return Values¶
Common return values are documented here, the following are the fields unique to this module:
Status¶
- This module is not guaranteed to have a backwards compatible interface. [preview]
- This module is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/latest/modules/icx_logging_module.html | 2020-01-18T00:25:49 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ansible.com |
Using Chat
If the Chat feature is enabled, you can chat with agents to resolve issues. The Chat window appears at the right side of the FootPrints window. You can collapse this window but not close it. This feature replaces the Instant Talk feature available in earlier releases of FootPrints.
Each chat is one-to-one, meaning that only two people can participate in a chat. Agents can set their availability status so you can easily identify which agents are available to help you.
If needed, you can copy the contents of a Chat window and paste it into another application.
To use chat
- Click the Chat bubble or the open iconat the right side of the browser window.
The Chat pane expands, showing the agents assigned to your account.
- Select a contact and start typing your message.
- Click Send.
An agent will respond as soon as one is available.
- To close the Chat pane, click the close iconor click in the header row.
The following video (0:45) presentation provides more information about Using Chat.
Related topic
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/fpsc121/using-chat-495323180.html | 2020-01-18T01:56:26 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.bmc.com |
Troubleshooting issues with Update Management
This article discusses solutions to issues that you might encounter when you use Update Management.
There's an agent troubleshooter for the Hybrid Worker agent to determine the underlying problem. To learn more about the troubleshooter, see Troubleshoot update agent issues. For all other issues, use the following troubleshooting guidance.
If you encounter issues while you're trying to onboard the solution on a virtual machine (VM), check the Operations Manager log under Application and Services Logs on the local machine for events with event ID 4502 and event details that contain Microsoft.EnterpriseManagement.HealthService.AzureAutomation.HybridAgent.
The following section highlights specific error messages and possible resolutions for each. For other onboarding issues see Troubleshoot solution onboarding.
Scenario: Machines don't show up in the portal under Update Management
Issue
You experience the following symptoms:
Your machine shows Not configured from the Update Management view of a VM.
Your machines are missing from the Update Management view of your Azure Automation account.
You have machines that show as Not Assessed under Compliance. However, you see heartbeat data in Azure Monitor logs for the Hybrid Runbook Worker but not for Update Management.
Cause
This issue can be caused by local configuration issues or by improperly configured scope configuration.
You might have to reregister and reinstall the Hybrid Runbook Worker.
You might have defined a quota in your workspace that's been reached and that's preventing further data storage.
Resolution
Run the troubleshooter for Windows or Linux, depending on the OS.
Make sure your machine is reporting to the correct workspace. For guidance on how to verify this aspect, see Verify agent connectivity to Log Analytics. Also make sure this workspace is linked to your Azure Automation account. To confirm, go to your Automation account and select Linked workspace under Related Resources.
Make sure the machines show up in your Log Analytics workspace. Run the following query in the Log Analytics workspace that's linked to your Automation account:
Heartbeat | summarize by Computer, Solutions
If you don't see your machine in the query results, it hasn't recently checked in, which means there's probably a local configuration issue and you should reinstall the agent. If your machine shows up in the query results, you need to verify the scope configuration specified in the next bulleted item in this list.
Check for scope configuration problems. Scope configuration determines which machines get configured for the solution. If your machine is showing up in your workspace but not in the Update Management portal, you'll need to configure the scope configuration to target the machines. To learn how to do this, see Onboard machines in the workspace.
In your workspace, run the following query:
Operation | where OperationCategory == 'Data Collection Status' | sort by TimeGenerated desc
If you get a
Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuotaresult, there's a quota defined on your workspace that's been reached and that has stopped data from being saved. In your workspace, go to Usage and estimated costs > data volume management and check your quota or remove it.
If these steps don't resolve your problem, follow the steps at Deploy a Windows Hybrid Runbook Worker to reinstall the Hybrid Worker for Windows. Or, for Linux, deploy a Linux Hybrid Runbook Worker.
Scenario: Unable to register Automation Resource Provider for subscriptions
Issue
When you work with solutions in your Automation account, you encounter the following error:
Error details: Unable to register Automation Resource Provider for subscriptions:
Cause
The Automation Resource Provider isn't registered in the subscription.
Resolution
To register the Automation Resource Provider, follow these steps in the Azure portal:
- In the Azure service list at the bottom of the portal, select All services, and then select Subscriptions in the General service group.
- Select your subscription.
- Under Settings, select Resource Providers.
- From the list of resource providers, verify that the Microsoft.Automation resource provider is registered.
- If it's not listed, register the Microsoft.Automation provider by following the steps at Resolve errors for resource provider registration.
Scenario: The components for the Update Management solution have been enabled, and now this virtual machine is being configured
Issue
You continue to see the following message on a virtual machine 15 minutes after onboarding:
The components for the 'Update Management' solution have been enabled, and now this virtual machine is being configured. Please be patient, as this can sometimes take up to 15 minutes.
Cause
This error can occur for the following reasons:
- Communication with the Automation account is being blocked.
- The VM being onboarded might have come from a cloned machine that wasn't sysprepped with the Microsoft Monitoring Agent (MMA) installed.
Resolution
- Go to Network planning to learn about which addresses and ports must be allowed for Update Management to work.
- If you're using a cloned image:
- In your Log Analytics workspace, remove the VM from the saved search for the
MicrosoftDefaultScopeConfig-Updatesscope configuration if it's shown. Saved searches can be found under General in your workspace.
- Run
Remove-Item -Path "HKLM:\software\microsoft\hybridrunbookworker" -Recurse -Force.
- Run
Restart-Service HealthServiceto restart the
HealthService. This recreates the key and generates a new UUID.
- If this approach doesn't work, run sysprep on the image first and then install the MMA.
Scenario: You receive a linked subscription error when you create an update deployment for machines in another Azure tenant
Issue
You encounter the following error when you try to create an update deployment for machines in another Azure tenant:
The client has permission to perform action 'Microsoft.Compute/virtualMachines/write' on scope '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.Automation/automationAccounts/automationAccountName/softwareUpdateConfigurations/updateDeploymentName', however the current tenant '00000000-0000-0000-0000-000000000000' is not authorized to access linked subscription '00000000-0000-0000-0000-000000000000'.
Cause
This error occurs when you create an update deployment that has Azure VMs in another tenant that's included in an update deployment.
Resolution
Use the following workaround to get these items scheduled. You can use the New-AzureRmAutomationSchedule cmdlet with the
-ForUpdate switch to create a schedule. Then, use the New-AzureRmAutomationSoftwareUpdateConfiguration cmdlet and pass the machines in the other tenant to the
-NonAzureComputer parameter. The following example shows how to do this:
$nonAzurecomputers = @("server-01", "server-02") $startTime = ([DateTime]::Now).AddMinutes(10) $s = New-AzureRmAutomationSchedule -ResourceGroupName mygroup -AutomationAccountName myaccount -Name myupdateconfig -Description test-OneTime -OneTime -StartTime $startTime -ForUpdate New-AzureRmAutomationSoftwareUpdateConfiguration -ResourceGroupName $rg -AutomationAccountName $aa -Schedule $s -Windows -AzureVMResourceId $azureVMIdsW -NonAzureComputer $nonAzurecomputers -Duration (New-TimeSpan -Hours 2) -IncludedUpdateClassification Security,UpdateRollup -ExcludedKbNumber KB01,KB02 -IncludedKbNumber KB100
Scenario: Unexplained reboots
Issue
Even though you've set the Reboot Control option to Never Reboot, machines are still rebooting after updates are installed.
Cause
Windows Update can be modified by several registry keys, any of which can modify reboot behavior.
Resolution
Review the registry keys listed under Configuring Automatic Updates by editing the registry and Registry keys used to manage restart to make sure your machines are configured properly.
Scenario: Machine shows "Failed to start" in an update deployment
Issue
A machine shows a Failed to start status. When you view the specific details for the machine, you see the following error:
Failed to start the runbook. Check the parameters passed. RunbookName Patch-MicrosoftOMSComputer. Exception You have requested to create a runbook job on a hybrid worker group that does not exist.
Cause
This error can occur for one of the following reasons:
- The machine doesn’t exist anymore.
- The machine is turned off and unreachable.
- The machine has a network connectivity issue, and therefore the hybrid worker on the machine is unreachable.
- There was an update to the MMA that changed the SourceComputerId.
- Your update run was throttled if you hit the limit of 2,000 concurrent jobs in an Automation account. Each deployment is considered a job, and each machine in an update deployment counts as a job. Any other automation job or update deployment currently running in your Automation account counts toward the concurrent job limit.
Resolution
When applicable, use dynamic groups for your update deployments. Additionally:
Verify that the machine still exists and is reachable. If it doesn't exist, edit your deployment and remove the machine.
See the network planning section for a list of ports and addresses that are required for Update Management, and then verify that your machine meets these requirements.
Run the following query in Log Analytics to find machines in your environment whose
SourceComputerIdhas changed. Look for computers that have the same
Computervalue but a different
SourceComputerIdvalue.
Heartbeat | where TimeGenerated > ago(30d) | distinct SourceComputerId, Computer, ComputerIP
After you find affected machines, edit the update deployments that target those machines, and then remove and re-add them so that
SourceComputerIdreflects the correct value.
Scenario: Updates are installed without a deployment
Issue
When you enroll a Windows machine in Update Management, you see updates installed without a deployment.
Cause
On Windows, updates are installed automatically as soon as they're available. This behavior can cause confusion if you didn't schedule an update to be deployed to the machine.
Resolution
The
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU registry key defaults to a setting of 4: auto download and install.
For Update Management clients, we recommend setting this key to 3: auto download but do not auto install.
For more information, see Configuring Automatic Updates.
Scenario: Machine is already registered to a different account
Issue
You receive the following error message:
Unable to Register Machine for Patch Management, Registration Failed with Exception System.InvalidOperationException: {"Message":"Machine is already registered to a different account."}
Cause
The machine has already been onboarded to another workspace for Update Management.
Resolution
- Follow the steps under Machines don't show up in the portal under Update Management to make sure the machine is reporting to the correct workspace.
- Clean up old artifacts on the machine by deleting the hybrid runbook group, and then try again.
Scenario: Machine can't communicate with the service
Issue
You receive one of the following error messages:
Unable to Register Machine for Patch Management, Registration Failed with Exception System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.ComponentModel.Win32Exception: The client and server can't communicate, because they do not possess a common algorithm
Unable to Register Machine for Patch Management, Registration Failed with Exception Newtonsoft.Json.JsonReaderException: Error parsing positive infinity value.
The certificate presented by the service <wsid>.oms.opinsights.azure.com was not issued by a certificate authority used for Microsoft services. Contact your network administrator to see if they are running a proxy that intercepts TLS/SSL communication.
Access is denied. (Exception form HRESULT: 0x80070005(E_ACCESSDENIED))
Cause
A proxy, gateway, or firewall might be blocking network communication.
Resolution
Review your networking and make sure appropriate ports and addresses are allowed. See network requirements for a list of ports and addresses that are required by Update Management and Hybrid Runbook Workers.
Scenario: Unable to create self-signed certificate
Issue
You receive one of the following error messages:
Unable to Register Machine for Patch Management, Registration Failed with Exception AgentService.HybridRegistration. PowerShell.Certificates.CertificateCreationException: Failed to create a self-signed certificate. ---> System.UnauthorizedAccessException: Access is denied.
Cause
The Hybrid Runbook Worker couldn't generate a self-signed certificate.
Resolution
Verify that the system account has read access to the C:\ProgramData\Microsoft\Crypto\RSA folder, and try again.
Scenario: The scheduled update failed with a MaintenanceWindowExceeded error
Issue
The default maintenance window for updates is 120 minutes. You can increase the maintenance window to a maximum of 6 hours, or 360 minutes.
Resolution
Edit any failing scheduled update deployments, and increase the maintenance window.
For more information on maintenance windows, see Install updates.
Scenario: Machine shows as "Not assessed" and shows an HResult exception
Issue
- You have machines that show as Not Assessed under Compliance, and you see an exception message below it.
- You have machines that show as not assessed.
- You see an HRESULT error code in the portal.
Cause
The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Management relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Management can't properly report on the patches that are needed or installed.
Resolution
Try to perform updates locally on the machine. If this fails, it typically means there's a configuration error with the update agent.
This problem is frequently caused by network configuration and firewall issues. Try the following:
- For Linux, check the appropriate documentation to make sure you can reach the network endpoint of your package repository.
- For Windows, check your agent configuration as listed in Updates aren't downloading from the intranet endpoint (WSUS/SCCM).
- If the machines are configured for Windows Update, make sure you can reach the endpoints described in Issues related to HTTP/proxy.
- If the machines are configured for Windows Server Update Services (WSUS), make sure you can reach the WSUS server configured by the WUServer registry key.
If you see an HRESULT, double-click the exception displayed in red to see the entire exception message. Review the following table for potential solutions or recommended actions:
Reviewing the %Windir%\Windowsupdate.log file can also help you determine possible causes. For more information about how to read the log, see How to read the Windowsupdate.log file.
You can also download and run the Windows Update troubleshooter to check for any issues with Windows Update on the machine.
Note
The Windows Update troubleshooter documentation indicates that it's for use on Windows clients, but it also works on Windows Server.
Scenario: Update run returns "Failed" status (Linux)
Issue
An update run starts but encounters errors during the run.
Cause
Possible causes:
- Package manager is unhealthy.
- Update Agent (WUA for Windows, distro-specific package manager for Linux) is misconfigured.
- Specific packages are interfering with cloud-based patching.
- The machine is unreachable.
- Updates had dependencies that weren't resolved.
Resolution
If failures occur during an update run after it starts successfully, check the job output from the affected machine in the run. You might find specific error messages from your machines that you can research and take action on. Update Management requires the package manager to be healthy for successful update deployments.
If specific patches, packages, or updates are seen immediately before the job fails, you can try excluding those from the next update deployment. To gather log info from Windows Update, see Windows Update log files.
If you can't resolve a patching issue, make a copy of the following log file and preserve it for troubleshooting purposes before the next update deployment starts:
/var/opt/microsoft/omsagent/run/automationworker/omsupdatemgmt.log
Patches aren't installed
Machines don't install updates
- Try running updates directly on the machine. If the machine can't apply the updates, consult the list of potential errors in the troubleshooting guide.
- If updates run locally, try removing and reinstalling the agent on the machine by following the guidance at Remove a VM from Update Management.
I know updates are available, but they don't show as available on my machines
This often happens if machines are configured to get updates from WSUS or System Center Configuration Manager (SCCM) but WSUS and SCCM haven't approved the updates.
You can check whether the machines are configured for WSUS and SCCM by cross-referencing the UseWUServer registry key to the registry keys in the "Configuring Automatic Updates by Editing the Registry" section of this article.
If updates aren't approved in WSUS, they won't be installed. You can check for unapproved updates in Log Analytics by running the following query:
Update | where UpdateState == "Needed" and ApprovalSource == "WSUS" and Approved == "False" | summarize max(TimeGenerated) by Computer, KBID, Title
Updates show as installed, but I can't find them on my machine
- Updates are often superseded by other updates. For more information, see Update is superseded in the Windows Update Troubleshooting guide.
Installing updates by classification on Linux
- Deploying updates to Linux by classification ("Critical and security updates") has important caveats, especially for CentOS. These limitations are documented on the Update Management overview page.
KB2267602 is consistently missing
- KB2267602 is the Windows Defender definition update. It's updated daily.
Next steps
If you didn't see your problem or can't resolve your issue, try one of the following channels for additional support:
- Get answers from Azure experts through Azure Forums.
- Connect with @AzureSupport, the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
- File an Azure support incident. Go to the Azure support site and select Get Support.
Feedback | https://docs.microsoft.com/en-us/azure/automation/troubleshoot/update-management | 2020-01-18T01:42:27 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
Folder object
Provides access to all the properties of a folder.
Remarks
The following code illustrates how to obtain a Folder object and how to return one of its properties.
Sub ShowFolderInfo(folderspec) Dim fs, f, s, Set fs = CreateObject("Scripting.FileSystemObject") Set f = fs.GetFolder(folderspec) s = f.DateCreated MsgBox s End Sub
Collections
Methods
Properties
See also
- Objects (Visual Basic for Applications)
- Object library reference for Office (members, properties, methods)
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/Language/Reference/user-interface-help/folder-object | 2020-01-18T00:27:24 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
Tftp
Applies To: Windows Server 2003, Windows Vista, Windows XP, Windows Server 2008, Windows 7, Windows Server 2003 R2, Windows Server 2008 R2, Windows Server 2000, Windows Server 2012, Windows 8.
Syntax
tftp [-i] [<Host>] [{get | put}] <Source> [<Destination>]
Parameters
Remarks.
Examples
Copy the file boot.img from the remote computer Host1.
tftp –i Host1 get boot.img | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/ff698993(v%3Dws.11) | 2020-01-18T01:34:59 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
TranslateAcceleratorW
int TranslateAcceleratorW( HWND hWnd, HACCEL hAccTable, LPMSG lpMsg );
Parameters
hWnd
Type: HWND
A handle to the window whose messages are to be translated.
hAccTable
Type: HACCEL
A handle to the accelerator table. The accelerator table must have been loaded by a call to the LoadAccelerators function or created by a call to the CreateAcceleratorTable function.
lpMsg
Conceptual
Reference | https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-translateacceleratorw | 2020-01-18T01:36:13 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
8. The DHCPv4 Server¶
8.1. Starting and Stopping the DHCPv4 Server¶
It is recommended that the Kea DHCPv4 server be started and stopped
using
keactrl (described in Managing Kea with keactrl); however, it is also
possible to run the server directly. Among the command-line switches it accepts:
- -p server-port - specifies the local UDP port on which the server will listen. This is only useful during testing, as a DHCPv4 server listening on ports other than the standard ones will not be able to handle regular DHCPv4 queries.
- -P client-port - specifies the remote UDP port to which the server will send all responses. This is only useful during testing, as a DHCPv4 server sending responses to ports other than the standard ones will not be able to handle regular DHCPv4 queries.
- -t file - specifies a configuration file to be tested. Kea-dhcp4 will load it, check it, and exit.
When started, the server will detect available network interfaces and will attempt to open UDP sockets on all interfaces mentioned in the configuration file. Since the DHCPv4 server opens privileged ports, it requires root access; this daemon must be run as root.
During startup, the server will attempt to create a PID file; if another instance is already running, it will issue a DHCP4_ALREADY_RUNNING log message and exit. When running in a console, the server can be shut down by pressing Ctrl-c; it detects the key combination and shuts down gracefully.
8.2. DHCPv4 Server Configuration¶
8.2.1. Introduction¶
This section explains how to configure the DHCPv4 server using a configuration file; the whole configuration is held in a single Dhcp4 object. It is important to remember that the configuration file must be well-formed JSON. That means that the parameters for any given scope must be separated by a comma, and there must not be a comma after the last parameter. When reordering a configuration file, keep in mind that moving a parameter to or from a different logical scope changes the context in which it is interpreted.
The first global parameter, valid-lifetime, defines how long the addresses (leases) given out by the server are valid. If nothing
changes, a client that got an address is allowed to use it for 4000
seconds. (Note that integer numbers are specified as is, without any
quotes around them.)
renew-timer and
rebind-timer are values
(also in seconds) that define T1 and T2 timers that govern when the
client will begin the renewal and rebind procedures.
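As a minimal illustration of these three global timers (the values are arbitrary and only meant to show the syntax):
"Dhcp4": { "valid-lifetime": 4000, "renew-timer": 1000, "rebind-timer": 2000, ... }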
Note
Beginning with Kea 1.6.0 the lease valid lifetime is extended from a
single value to a triplet with minimum, default and maximum values using
min-valid-lifetime,
valid-lifetime and
max-valid-lifetime.
As of Kea 1.9.5, these values may be specified in client classes. The recipe
the server uses to select which lifetime value to use is as follows:
If the client query is a BOOTP query, the server will always use the infinite lease time (e.g. 0xffffffff). Otherwise the server must next determine which configured triplet to use by first searching all classes assigned to the query, and then the subnet selected for the query.
Classes are searched in the order they were assigned to the query. The server will use the triplet from the first class that specifies it. If no classes specify the triplet then the server will use the triplet specified by the subnet selected for the client. If the subnet does not explicitly specify it the server will look next at the subnet’s shared-network (if one), then for a global specification, and finally the global default.
If the client requested a lifetime value via DHCP option 51, then the lifetime value used will be the requested value bounded by the configured triplet. In other words, if the requested lifetime is less than the configured minimum the configured minimum will be used; if it is more than the configured maximum the configured maximum will be used. If the client did not provide a requested value, the lifetime value used will be the triplet default value.
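For illustration, the triplet described above could be set at the subnet level as in this sketch (arbitrary values; the same three parameters may also appear in a client class or globally):
"Dhcp4": { "subnet4": [ { "subnet": "192.0.2.0/24", "min-valid-lifetime": 1800, "valid-lifetime": 3600, "max-valid-lifetime": 7200, ... } ], ... }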
Note
Both
renew-timer and
rebind-timer
are optional. The server will only send
rebind-timer to the client,
via DHCPv4 option code 59, if it is less than
valid-lifetime; and it
will only send renew-timer, via DHCPv4 option code 58, if it is less than rebind-timer (or than valid-lifetime, if rebind-timer is not being sent).
The interfaces-config map specifies the server configuration
concerning the network interfaces on which the server should listen to
the DHCP messages. The
interfaces parameter specifies a list of
network interfaces on which the server should listen. Lists are opened
and closed with square brackets, with elements separated by commas. To
listen on two interfaces, the
interfaces-config command should look
like this:
"interfaces-config": { "interfaces": [ "eth0", "eth1" ] },
The next couple of elements in the basic configuration define the lease database, i.e. the place where the server stores its lease information (see Lease Storage below). Finally, the configuration has to be closed properly: at the end of the file we have two contexts open, global and Dhcp4; thus, we need two closing curly brackets to close them.
8.2.2. Lease Storage¶
All leases issued by the server are stored in the lease database. Currently there are four database backends available: memfile (which is the default backend), MySQL, PostgreSQL, and Cassandra.
8.2.2.1. Memfile - Basic Storage for Leases¶
The server is able to store lease data in different repositories. Larger deployments may elect to store leases in a database. Lease Database Configuration describes this option. In typical smaller deployments, though, the server will store lease information in a CSV file rather than a database. As well as requiring less administration, an advantage of using a file for storage is that it eliminates a dependency on third-party database software.
The configuration of the file backend (memfile) is controlled through
the Dhcp4/lease-database parameters. The
type parameter is mandatory
and it specifies which storage for leases the server should use. The
value of
"memfile" indicates that the file should be used as the
storage. The memfile backend accepts several additional, optional parameters (persist, name, lfc-interval, and max-row-errors); an example configuration using them is presented below:
"Dhcp4": { "lease-database": { "type": "memfile", "persist": true, "name": "/tmp/kea-leases4.csv", "lfc-interval": 1800, "max-row-errors": 100 } }
This configuration selects the
/tmp/kea-leases4.csv as the storage
for lease information and enables persistence (writing lease updates to
this file). It also configures the backend to perform a periodic cleanup
of the lease file every 30 minutes and sets the maximum number of row
errors to 100.
It is important to know how the lease file contents are organized to understand why the periodic lease file cleanup is needed. Every time the server updates a lease or creates a new lease for a client, the new lease information is appended to the end of the file; the previous entries for that client are not removed. When the server loads leases from the lease file, e.g. at server startup, it assumes that the latest lease entry for the client is the valid one. The previous entries are discarded, meaning that the server can re-construct accurate lease information even though there may be many entries for each client. However, storing many entries per client bloats the lease file and slows down the server's startup and reconfiguration. The lease file cleanup (LFC) removes the redundant entries; the lfc-interval parameter controls how often it runs. Note that the LFC takes time, so if lfc-interval is too short, a new cleanup may be requested while the previous one is still running. Moreover, triggering a new cleanup adds overhead to the server, which will not be able to respond to new requests for a short period of time when the new cleanup process is spawned. Therefore, it is recommended that the lfc-interval value be selected so that the LFC can complete before the next cleanup is triggered.

8.2.2.2. Lease Database Configuration¶

Lease database access is controlled through the Dhcp4/lease-database parameters. If the database is located on a different system from the DHCPv4 server, the database host name must also be specified:

"Dhcp4": { "lease-database": { "host": "remote-host-name", ... }, ... }
(It should be noted that this configuration may have a severe impact on server performance.)
Normally, the database will be on the same machine as the DHCPv4 server. In this case, set the value to the empty string:
"Dhcp4": { "lease-database": { "host" : "", ... }, ... } will automatically attempt to reconnect to the lease database after connectivity has been lost may be specified:
"Dhcp4": { "lease will wait.
It is highly recommended to not change the
stop-retry-exit default
setting for the lease manager as it is critical for the connection to be
active while processing DHCP traffic. Change this only if the server is used
exclusively as a configuration tool.
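A lease-database entry combining the reconnection parameters mentioned above might be sketched as follows (the database name and the numeric values are placeholders, not recommendations):
"Dhcp4": { "lease-database": { "type": "mysql", "name": "kea", "host": "", "max-reconnect-tries": 5, "reconnect-wait-time": 5000, ... }, ... }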
Note
Note that a value of zero for max-reconnect-tries, which is also the default, disables automatic recovery: the server will then exit upon detecting a loss of connectivity (MySQL and PostgreSQL only).
8.2.2.3. Cassandra-Specific Parameters¶
The Cassandra backend accepts a few additional, optional parameters:
- consistency - configures the consistency level. The default is "quorum". Supported values: any, one, two, three, quorum, all, local-quorum, each-quorum, serial, local-serial, local-one. See Cassandra consistency for more details.
- serial-consistency - configures the serial consistency level, which manages lightweight transaction isolation. The default is "serial". Supported values: any, one, two, three, quorum, all, local-quorum, each-quorum, serial, local-serial, local-one. See Cassandra consistency for more details.

8.2.3. Hosts Storage¶
Kea is also able to store information about host reservations in the database. The hosts database configuration uses the same syntax as the lease database. In fact, a Kea server opens independent connections for each purpose, be it lease or hosts information. This arrangement gives the most flexibility: Kea can keep leases and host reservations separately, but can also point to the same database. Currently the supported hosts database types are MySQL, PostgreSQL, and Cassandra. Note that both storage methods (configuration file and one of the supported databases) can be used together. If hosts are defined in both places, the definitions from the configuration file are checked first and external storage is checked later, if necessary.
8.2.3.1. DHCPv4 Hosts Database Configuration¶

Hosts database configuration is controlled through the Dhcp4/hosts-database parameters. If the database is located on a different system than the DHCPv4 server, the database host name must be specified:

"Dhcp4": { "hosts-database": { "host": "remote-host-name", ... }, ... }
(Again, it should be noted that this configuration may have a severe impact on server performance.)
Normally, the database will be on the same machine as the DHCPv4 server; in this case, set the value to the empty string. The maximum number of times the server will automatically attempt to reconnect to the host database after connectivity has been lost may be specified:
"Dhcp4": { "hosts).
The number of milliseconds the server will wait between reconnection attempts may also be specified, with the reconnect-wait-time parameter (zero is also the default here.)
The multiple storage extension uses a similar syntax; a configuration is placed into a “hosts-databases” list instead of into a “hosts-database” entry, as in:
"Dhcp4": { "hosts-databases": [ { "type": "mysql", ... }, ... ], ... }
For additional Cassandra-specific parameters, see Cassandra-Specific Parameters.

8.2.3.2. Using Read-Only Databases for Host Reservations¶

In some deployments the database user whose name is specified in the backend configuration may not have write privileges to the database, for example when an existing inventory database is exposed to Kea as a read-only view. In such cases Kea can be explicitly configured to open the hosts database in read-only mode by setting the readonly boolean parameter. Read-only database access is currently only supported for MySQL and PostgreSQL databases.
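A sketch of such a read-only hosts database configuration (the database name and credentials are placeholders):
"Dhcp4": { "hosts-database": { "type": "mysql", "name": "kea", "user": "kea-ro", "password": "secret", "readonly": true, ... }, ... }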
8.2.4. Interface Configuration¶

The DHCPv4 server must be configured to listen on specific network interfaces. The simplest configuration tells the server to listen on all available interfaces using the asterisk wildcard, but it is usually better to specify interface names explicitly. It is also possible to use a wildcard interface name (asterisk) concurrently with explicit interface names:
"Dhcp4": { "interfaces-config": { "interfaces": [ "eth1", "eth3", "*" ] }, ... }
It is anticipated that this form of usage will only be employed when it is desired to temporarily override a list of interface names and listen on all interfaces.
In some deployments an external piece of software, e.g. a firewall or a relay running on the same machine, forwards DHCP messages to the server; in that case the DHCP server should be configured to use UDP sockets instead of raw sockets. The following configuration demonstrates how this can be achieved:
"Dhcp4": { "interfaces-config": { "interfaces": [ "eth1", "eth3" ], "dhcp-socket-type": "udp" }, ... }
The
dhcp-socket-type specifies that the IP/UDP sockets will be
opened on all interfaces on which the server listens, i.e. “eth1” and
"eth3" in our case. Note that specifying the socket type as raw doesn't guarantee that the raw sockets will be used! The use of raw sockets to handle the traffic from the directly connected clients is currently supported on Linux and BSD systems only. If the raw sockets are not supported on the particular OS in use, the server will issue a warning and fall back to IP/UDP sockets. In some deployments it is also desired that the outbound (response) packets will be sent as regular traffic and the outbound interface will be determined by the kernel's routing tables; this is controlled by the outbound-interface parameter. The loopback interfaces (e.g. "lo" or "lo0") may not be configured, but if a loopback interface is explicitly configured and IP/UDP sockets are specified, the loopback interface is accepted.
For example, it can be used to run Kea in a FreeBSD jail having only a loopback interface, to service a relayed DHCP request.

8.2.5. Issues with Unicast Responses to DHCPINFORM¶

When the server receives a DHCPINFORM via a relay, it should unicast the DHCPACK to the address carried in the "ciaddr" field. With UDP sockets, the operating system will first use ARP to obtain the client's link-layer address before the response can be sent; system firewalls that filter ARP or unicast traffic can therefore interfere with these responses.

8.2.6. IPv4 Subnet Identifier¶
The subnet identifier is a unique number associated with a particular subnet. In principle, it is used to associate clients' leases with their respective subnets. When a subnet identifier is not specified for a subnet being configured, it will be automatically assigned by the configuration mechanism. The identifiers are assigned starting from 1 and increase monotonically for each subsequent subnet; to keep leases associated with the right subnet across reconfigurations, an explicit "id" parameter will assign a fixed identifier to a subnet.

8.2.7. IPv4 Subnet Prefix¶
The subnet prefix is the second way to identify a subnet. It does not need to have the address part to match the prefix length, for instance this configuration is accepted:
"Dhcp4": { "subnet4": [ { "subnet": "192.0.2.1/24", ... } ] }
Even though the address part extends beyond the prefix length, the prefix still identifies the subnet unambiguously.

8.2.8. Configuration of IPv4 Address Pools¶

The main role of a DHCPv4 server is address assignment; for this, the server must be configured with at least one subnet and one pool of dynamic addresses to be managed. Assume, for example, that addresses from the range 192.0.2.10 to 192.0.2.20 are going to be managed by the Dhcp4 server. It is possible to define more than one pool in a subnet; continuing the example, assume that the range 192.0.2.64/26 should also be managed by the server. It could be written as 192.0.2.64 to 192.0.2.127. Alternatively, it can be expressed more simply as 192.0.2.64/26. Both formats are supported by Dhcp4 and can be mixed in the pool list. For example, one could define the following pools:
"Dhcp4": { "subnet4": [ { "subnet": "192.0.2.0/24", "pools": [ { "pool": "192.0.2.10 - 192.0.2.20" }, { "pool": "192.0.2.64/26" } ], ... } ], ... }

When using the prefix/length notation, pay attention to the boundary values: for a pool specified this way the server will also be able to allocate the first (typically a network) and the last (typically a broadcast) address from it.

8.2.9. Sending T1 (Option 58) and T2 (Option 59)¶
According to RFC 2131, servers should send values for T1 and T2 that are 50% and 87.5% of the lease lifetime, respectively. By default, kea-dhcp4 does not send either value. It can be configured to send values that are specified explicitly (renew-timer and rebind-timer) or that are calculated as percentages of the lease time (calculate-tee-times, t1-percent, and t2-percent). The server will only send T2 if it is less than the valid lease time. T1 will only be sent if T2 is being sent and T1 is less than T2, or if T2 is not being sent and T1 is less than the valid lease time.
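A sketch of the percentage-based approach (the values shown are simply the conventional RFC 2131 percentages, used here only as an example):
"Dhcp4": { "calculate-tee-times": true, "t1-percent": 0.5, "t2-percent": 0.875, ... }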
8.2.10. Standard DHCPv4 Options¶

One of the major features of the DHCPv4 server is the ability to provide configuration options to clients. Option data can be given as comma-separated values or as a string of hexadecimal digits; in the hexadecimal form the bytes may be delimited, but only one of (':' or ' ') may be used as the separator within a single value.
As of Kea 1.6.0, options can be specified at the following levels, ordered here from most to least specific (more specific values override less specific ones): host reservation, pool, subnet, shared network, class, global.
The currently supported standard DHCPv4 options are listed in List of standard DHCPv4 options configurable by an administrator. "Name" and "Code" are the values that should be used as a name/code in the option-data structures. "Type" designates the format of the data; the meanings of the various types are given in List of Standard DHCP Option Types. Some options are designated as arrays, which means that more than one value is accepted in such an option. In order to use a standard option for which Kea does not yet provide a definition, the administrator must create a custom definition for it in the 'dhcp4' option space, matching the option format described in the relevant RFC.
Kea supports more options than those listed above. The following list is mostly useful for readers who want to understand whether Kea is able to support certain options. The following options are returned by the Kea engine itself and in general should not be configured manually.
The following table lists all option types used in the previous two tables with a description of what values are accepted for them.
Kea also supports the Relay Agent Information (RAI) option, sometimes referred to as the relay option, agent option, or simply option 82. The option itself is just a container and does not convey any information on its own; it carries sub-options inserted by a relay agent. These sub-options cannot be configured on the server, as Kea only receives them.
The RAI sub-options can be used in client classification to classify incoming packets to specific classes and/or by flex-id to construct a unique device identifier.
8.2.11. Custom DHCPv4 Options¶
Kea supports custom (non-standard) DHCPv4 options. Assume that we want to define a new DHCPv4 option called "foo" which will have code 222 and will convey a single, unsigned, 32-bit integer value. We can define such an option with an option-def entry in the configuration file. The name, code, and type parameters are required; all others are optional. The array parameter defaults to false, and the record-types and encapsulate parameters default to blank (i.e. "").
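A definition along these lines, using only the parameters discussed in this section, could be placed in the configuration file (a sketch):
"Dhcp4": { "option-def": [ { "name": "foo", "code": 222, "type": "uint32", "array": false, "record-types": "", "space": "dhcp4", "encapsulate": "" } ], ... }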
Such a definition shows a simple use of primitives (uint8, string, ipv4-address, etc.); it is also possible to define an option comprising a number of existing primitives, i.e. a record of fields of different types, by setting the type to "record" and listing the field types in record-types.

8.2.12. DHCPv4 Private Options¶

Options with codes between 224 and 254 are reserved for private use. They can be defined at the global scope or at the client-class local scope. As the Vendor-Specific Information option (code 43) has a vendor-specific format, i.e. can carry either a raw binary value or sub-options, this mechanism is available for this option too.
In the following example, taken from a real configuration, two vendor classes use option 43 for different and incompatible purposes:
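The original configuration is not reproduced here; the following condensed sketch (class names, sub-option names, and data values are purely illustrative) shows the general shape: the sub-options are defined globally in per-vendor spaces, and each class redefines option 43 to encapsulate its own space.
"Dhcp4": { "option-def": [ { "name": "cookie", "code": 1, "type": "string", "space": "APC" }, { "name": "config-file", "code": 1, "type": "string", "space": "PXE" } ], "client-classes": [ { "name": "APC", "test": "option[60].text == 'APC'", "option-def": [ { "name": "vendor-encapsulated-options", "code": 43, "type": "empty", "encapsulate": "APC" } ], "option-data": [ { "name": "cookie", "space": "APC", "data": "1APC" }, { "name": "vendor-encapsulated-options" } ] }, { "name": "PXE", "test": "option[60].text == 'PXE'", "option-def": [ { "name": "vendor-encapsulated-options", "code": 43, "type": "empty", "encapsulate": "PXE" } ], "option-data": [ { "name": "config-file", "space": "PXE", "data": "pxe.cfg" }, { "name": "vendor-encapsulated-options" } ] } ], ... }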
Note that, as they use a specific per-vendor option space, the sub-options are defined at the global scope.
Note
Option definitions in client classes are allowed only for this limited option set (codes 43 and from 224 to 254), and only for DHCPv4.
8.2.13. DHCPv4 Vendor-Specific Options¶

Currently there are two option spaces defined for the DHCPv4 daemon: dhcp4 (for top-level DHCPv4 options) and "vendor-encapsulated-options-space", which is empty by default but in which options can be defined; such options are carried in the Vendor-Specific Information option (code 43). For example, a sub-option "foo" carrying a record could be defined in that space as follows:

"Dhcp4": { "option-def": [ { "name": "foo", "code": 1, "space": "vendor-encapsulated-options-space", "type": "record", "record-types": "ipv4-address, uint16, string", "array": false, "encapsulate": "" } ], ... }
Once the format is defined, the option's value is set in the option-data list. We also include the Vendor-Specific Information option itself there, the option that conveys our sub-option "foo"; this is required, as otherwise the sub-option would not be included in messages sent to the client. The second vendor option of interest, sometimes simply called the "vendor option", is option 125. Its proper name is the vendor-identifying vendor-specific information option, or vivso. The idea behind those options is that each vendor has its own unique set of options with their own custom formats. The vendor is identified by a 32-bit unsigned integer called enterprise-id or vendor-id. For example, vivso with vendor-id 4491 represents DOCSIS options, and they are often seen when dealing with cable modems.
In Kea each vendor is represented by its own vendor space. Since there are hundreds of vendors and sometimes they use different option definitions for different hardware, it's impossible for Kea to support them all out of the box. Fortunately, it's easy to define support for new vendor options. Let's take an example of the Genexis home gateway. This device requires sending the vivso 125 option with a suboption 2 that contains a string with the TFTP server URL. To support such a device, three steps are needed: first, we need to define option definitions that will explain how the option is supposed to be formed. Second, we will need to define option values. Third, we will need to tell Kea when to send those specific options; this last step is accomplished with client classification. By default, Kea sends a given option only when the client requests it. This works well for most cases and avoids problems with clients
refusing responses with options they don’t understand. Unfortunately,
this is more complex when we consider vendor options. Some vendors (such
as docsis, identified by vendor option 4491) have a mechanism to
request specific vendor options and Kea is able to honor those.
Unfortunately, for many other vendors, such as Genexis (25167) as discussed
above, Kea does not have such a mechanism, so it can’t send any
sub-options on its own. To solve this issue, we came up with the
"always-send" flag. With "always-send" enabled, the option will be sent every time there is a
need to deal with vendor space 25167.
Another possibility is to redefine the option; see DHCPv4 Private Options.
Also, Kea comes with several example configuration files. Some of them showcase
how to configure options 60 and 43. See
doc/examples/kea4/vendor-specific.json
and
doc/examples/kea6/vivso.json in the Kea sources.
Note
Currently only one vendor is supported for vivco-suboptions (code 124) and vivso-suboptions (code 125) options. It is not supported to specify multiple enterprise numbers within a single option instance or multiple options with different enterprise numbers.
8.2.14. Nested DHCPv4 Options (Custom Option Spaces)¶
It is sometimes useful to define a completely new option space, such as when a user creates a new option in the standard option space ("dhcp4") and wants this option to convey sub-options. Since they are in a separate space, sub-option codes have their own numbering scheme and may overlap with the codes of standard options (see also DHCPv4 Vendor-Specific Options).
Assume that we want to have a DHCPv4 option called "container" with code 222 that conveys two sub-options with codes 1 and 2. First we need to define the new sub-options and place them in their own option space, for example "isc":

"Dhcp4": { "option-def": [ { "name": "subopt1", "code": 1, "space": "isc", "type": "ipv4-address" }, { "name": "subopt2", "code": 2, "space": "isc", "type": "string" } ], ... }
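A sketch of the remaining pieces, assuming the sub-option space above is called "isc": the "container" option is defined in the dhcp4 space and encapsulates that space, and option-data instantiates the container and its sub-options (the data values are examples).
"Dhcp4": { "option-def": [ { "name": "container", "code": 222, "space": "dhcp4", "type": "empty", "encapsulate": "isc" } ], "option-data": [ { "name": "container", "space": "dhcp4" }, { "name": "subopt1", "space": "isc", "data": "192.0.2.3" }, { "name": "subopt2", "space": "isc", "data": "Hello world" } ], ... }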
Note that an option may also carry data of its own in addition to the sub-options defined in the encapsulated option space.

8.2.15. Unspecified Parameters for DHCPv4 Option Configuration¶

If the administrator does not specify some of the parameters in an option configuration, default values are used. In particular, if csv-format is not specified, the server will assume that the option data is specified as a list of comma-separated values to be assigned to individual fields of the DHCP option.
8.2.16. Stateless Configuration of DHCPv4 Clients¶
The DHCPv4 server supports the stateless client configuration whereby the client has an IP address configured (e.g. using manual configuration) and only contacts the server, with a DHCPINFORM message, to obtain other configuration parameters such as addresses of DNS servers. The server will respond to the DHCPINFORM when the client is associated with a subnet defined in the server's configuration. Note that the subnet definition does not require the address pool configuration if it will be used solely for the stateless configuration; an example of such a subnet is sketched below.
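A minimal sketch of such a subnet, returning only DNS server addresses (the addresses are examples):
"Dhcp4": { "subnet4": [ { "subnet": "192.0.2.0/24", "option-data": [ { "name": "domain-name-servers", "data": "192.0.2.1, 192.0.2.2" } ] } ], ... }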
This server will associate the subnet with the client if one of the following conditions is met:
- The DHCPINFORM is relayed and the giaddr matches the configured subnet.
- The DHCPINFORM is unicast from the client and the ciaddr matches the configured subnet.
- The DHCPINFORM is unicast from the client and the ciaddr is not set, but the source address of the packet belongs to the configured subnet.

8.2.17. Client Classification in DHCPv4¶

In certain cases it is useful to configure the server to differentiate between DHCP client types and treat them accordingly. Client classification can be used to modify the behavior of almost any part of DHCP message processing. For example, clients belonging to class X may get a lease from subnet A, while clients from class Y get a lease from subnet B. That segregation is essential to prevent overly curious users from accessing network segments meant for other devices. The limit on access based on class information is also available at the pool level; see Configuring Pools With Class Information, within a subnet.

8.2.17.2. Using Vendor Class Information in Classification¶
The server checks whether an incoming packet includes the vendor class identifier option (60). If it does, the content of that option is prepended with "VENDOR_CLASS_", and it is interpreted as a class. For example, modern cable modems will send this option with value "docsis3.0" and as a result the packet will belong to class "VENDOR_CLASS_docsis3.0". The following configuration assumes that addresses from the range 192.0.2.10 to 192.0.2.20 are going to be managed by the Dhcp4 server and only clients belonging to the docsis3.0 client class are allowed to use that pool.
"Dhcp4": { "subnet4": [ { "subnet": "192.0.2.0/24", "pools": [ { "pool": "192.0.2.10 - 192.0.2.20" } ], "client-class": "VENDOR_CLASS_docsis3.0" } ], ... }
8.2.17.3. Defining and Using Custom Classes¶

It is also possible to define custom classes, using expressions evaluated against incoming packets, and to reference them from subnets and pools. Note that if an option-data is set in a subnet, it takes precedence over an option-data in a class. If the opposite order is desired, the class must be listed in the subnet's require-client-classes, which changes the order in which option-data is processed.
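A minimal sketch of a custom class and a subnet restricted to it (the class name and test expression are illustrative only):
"Dhcp4": { "client-classes": [ { "name": "Client_foo", "test": "substring(option[61].hex,0,3) == 'foo'", "option-data": [ { "name": "domain-name-servers", "data": "192.0.2.1, 192.0.2.2" } ] } ], "subnet4": [ { "subnet": "192.0.2.0/24", "pools": [ { "pool": "192.0.2.10 - 192.0.2.20" } ], "client-class": "Client_foo" } ], ... }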
8.2.18. DDNS for DHCPv4¶
As mentioned earlier, kea-dhcp4 can be configured to generate requests to the DHCP-DDNS server, kea-dhcp-ddns (referred to herein as "D2"), to update DNS entries. These requests are known as NameChangeRequests (NCRs).
Prior to Kea 1.7.1, all parameters for controlling DDNS were within the
global
dhcp-ddns section of the kea-dhcp4 configuration. Beginning with Kea 1.7.1
DDNS related parameters were split into two groups:
Connectivity Parameters
These are parameters which specify where and how kea-dhcp4 connects to and communicates with D2. These parameters can only be specified within the top-level
dhcp-ddns section in the kea-dhcp4 configuration; the connectivity defaults are sketched below.
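The full default listing is abridged here; based on the connectivity parameters documented in the next subsection, the defaults can be sketched as follows (the enable-updates and ncr-format defaults shown are assumptions):
"Dhcp4": { "dhcp-ddns": { "enable-updates": false, "server-ip": "127.0.0.1", "server-port": 53001, "sender-ip": "", "sender-port": 0, "max-queue-size": 1024, "ncr-protocol": "UDP", "ncr-format": "JSON" }, ... }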
As of Kea 1.7.1, there are two parameters which determine whether kea-dhcp4 generates DDNS requests at all: the connectivity parameter enable-updates and the behavioral parameter ddns-send-updates. Kea 1.9.1 adds two new parameters. The first new parameter, ddns-update-on-renew, instructs the server to refresh the DNS entries whenever a lease is renewed, even if its DNS information has not changed; this can be useful to re-create entries for clients who renew often.
The second parameter added in Kea 1.9.1 is
ddns-use-conflict-resolution.
The value of this parameter is passed by kea-dhcp4 to D2 with each request; when it is true (the default), D2 applies the conflict resolution rules of RFC 4703, which protect DNS entries belonging to other clients.
Note
Setting
ddns-use-conflict-resolution to false disables the overwrite
safeguards that the rules of conflict resolution (
RFC 4703) are intended to
prevent. This means that existing entries for an FQDN or an IP address may be deleted or replaced regardless of which client they belong to. It is therefore best to ensure that lease changes are processed through Kea, whether they are released, expire, or are deleted via the lease-del4 command, prior to reassigning either FQDNs or IP addresses. Doing so will cause kea-dhcp4 to generate DNS removal requests to D2.
Note
The DNS entries Kea creates contain a value for TTL (time to live). As of Kea 1.9.3, kea-dhcp4 calculates that value based on RFC 4702, Section 5 which suggests that the TTL value be 1/3 of the lease’s life time with a minimum value of 10 minutes. Prior to this the server set the TTL value equal to the lease’s valid life time. Future releases may add one or more parameters to customize this value.
8.2.18.1. DHCP-DDNS Server Connectivity¶
For NCRs to reach the D2 server, kea-dhcp4 must be able to communicate with it. kea-dhcp4 uses the following configuration parameters to control this communication:
- server-ip - the IP address on which D2 listens for requests. The default is the local loopback interface at address 127.0.0.1. Either an IPv4 or IPv6 address may be specified.
- server-port - the port on which D2 listens for requests. The default value is 53001.
- sender-ip - the IP address which kea-dhcp4 uses to send requests to D2. The default value is blank, which instructs kea-dhcp4 to select a suitable address.
- sender-port - the port which kea-dhcp4 uses to send requests to D2. The default value of 0 instructs kea-dhcp4 to select a suitable port.
- max-queue-size - the maximum number of requests allowed to queue waiting to be sent to D2. This value guards against requests accumulating uncontrollably if they are being generated faster than they can be delivered. If the number of requests queued for transmission reaches this value, DDNS updating will be turned off until the queue backlog has been sufficiently reduced. The intent is to allow the kea-dhcp4 server to continue lease operations without running the risk that its memory usage grows without limit. The default value is 1024.
- ncr-protocol - the socket protocol to use when sending requests to D2. Currently only UDP is supported.
- ncr-format - the packet format to use when sending requests to D2. Currently only the JSON format is supported.

8.2.18.2. When Does the kea-dhcp4 Server Generate a DDNS Request?¶

This section describes when kea-dhcp4
will generate NCRs and the configuration parameters that can be used to
influence this decision. It assumes that both the connectivity parameter,
enable-updates and the behavioral parameter
ddns-send-updates,
are true.
In general, kea-dhcp4 will generate DDNS update requests when a new lease is granted, when an existing lease is renewed and its FQDN has changed, and when a lease is released. In the renewal case, two requests will be issued: one request to remove entries for the previous FQDN, and a second request to add entries for the new FQDN. In the last case, a lease release, a single DDNS request to remove its entries will be made.
As for the first case, the decisions involved when granting a new lease are more complex. When a new lease is granted, kea-dhcp4 will generate a DDNS update request if the DHCPREQUEST contains either the FQDN option (code 81) or the Host Name option (code 12). If both are present, the server will use the FQDN option. By default, kea-dhcp4 will respect the FQDN N and S flags specified by the client; for instance, when the client asks to perform the forward DNS update itself, the server will
honor the client’s wishes and generate a DDNS request to the D2 server
to update only reverse DNS data. The parameter
ddns-override-client-update can be used to instruct the server to
override client delegation requests. When this parameter is “true”,
kea-dhcp4 will disregard requests for client delegation and generate a DDNS request to update both forward and reverse DNS data; in this case the N-S-O flags in the server's response to the client will be 0-1-1, respectively. The server will always generate DDNS update requests if the client request only contains the Host Name option. In addition, it will include an FQDN option in the response to the client with the FQDN N-S-O flags set to 0-1-0 respectively. The domain name portion of the FQDN option will be the name submitted to D2 in the DDNS update request.
8.2.18.3. kea-dhcp4 Name Generation for DDNS Update Requests¶
Each Name Change Request must of course include the fully qualified domain name whose DNS entries are to be affected. kea-dhcp4 can be configured to supply a portion or all of that name, based on what it receives from the client. When the server generates the name itself, it constructs it as [ddns-generated-prefix]-[address-text].[ddns-qualifying-suffix], where address-text is the lease IP address converted to a hyphenated string.
For example, if the lease address is 172.16.1.10, the qualifying suffix is "example.com", and the default value is used for
ddns-generated-prefix, the generated FQDN is:
myhost-172-16-1-10.example.com.
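A sketch of the two name-generation parameters used by this example:
"Dhcp4": { "ddns-generated-prefix": "myhost", "ddns-qualifying-suffix": "example.com", ... }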
8.2.18.4. Sanitizing Client Host Name and FQDN Names¶

Some DHCP clients may provide values in the Host Name option (code 12) or the FQDN option (code 81) that contain undesirable characters. It is possible to configure kea-dhcp4 to sanitize these values; the most typical use case is to ensure that only characters permitted by RFC 1035 are included: A-Z, a-z, 0-9, and '-'. This behavior is controlled by the hostname-char-set and hostname-char-replacement parameters.
The following are some considerations to keep in mind: if clients send values containing full domain names rather than just host names, administrators will need to make sure that the dot, '.', is considered a valid character by the hostname-char-set expression, such as this: "[^A-Za-z0-9.-]". This will keep the dots in client-supplied names from being replaced.
Since the 1.6.0 Kea release, it is possible to specify hostname-char-set and/or hostname-char-replacement at the global scope. This allows sanitizing host names without requiring a dhcp-ddns entry. When a hostname-char parameter is defined at the global scope and in a dhcp-ddns entry, the second (local) value is used.
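For instance, a sanitization setup along the lines discussed above could be sketched as follows (the replacement character is arbitrary):
"Dhcp4": { "hostname-char-set": "[^A-Za-z0-9.-]", "hostname-char-replacement": "x", ... }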
8.2.19. Next Server (siaddr)¶

In some cases, clients want to obtain configuration from a TFTP server. Although there is a dedicated option for it, some devices may use the siaddr field in the DHCPv4 packet for that purpose. That field can be configured using the next-server directive, either in the global scope or for a given subnet only; if both are defined, the subnet value takes precedence. The value in a subnet can be
set to 0.0.0.0, which means that
next-server should not be sent. It
may also be set to an empty string, which means the same as if it were
not defined at all; that is, use the global value.
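A sketch showing next-server at the global scope and overridden for one subnet (the addresses are examples):
"Dhcp4": { "next-server": "192.0.2.123", "subnet4": [ { "subnet": "192.0.2.0/24", "next-server": "192.0.2.234", ... } ], ... }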
The
server-hostname (which conveys a server hostname, can be up to
64 bytes long, and will be sent in the sname field) and
boot-file-name (which conveys the configuration file, can be up to
128 bytes long, and will be sent using the file field) directives are handled in the same way as next-server.

8.2.20. Echoing Client-ID (RFC 6842)¶
The original DHCPv4 specification (RFC 2131) states that the DHCPv4 server must not send back client-id options when responding to clients. However, in some cases that confused clients that did not have a MAC address or client-id; see RFC 6842 for details. That behavior changed with the publication of RFC 6842, which updated RFC 2131: the update states that the server must send the client-id if the client sent one, and that is Kea's default behavior. For older clients that refuse such responses, echoing can be disabled with the echo-client-id parameter.

8.2.21. Using Client Identifier and Hardware Address¶

When the server receives a DHCPDISCOVER or DHCPREQUEST, it is trying to find out if the client already has a lease in the database; if it does, the server will hand out that lease rather than allocate a new one. Each lease in the lease database is associated with the "client identifier" and/or "chaddr". The server will first use the "client identifier" (if present) to search for the lease. If the lease is found, the server will treat it as belonging to the client, even if the current "chaddr" differs. If the lookup by "client identifier" fails, the server will perform another lookup using the "chaddr". If this lookup returns no result, the client is considered as not having a lease and a new lease will be created. The server can also be configured, via the match-client-id parameter, to ignore the "client identifier" and use only the "chaddr" for these lookups. In all cases the server will store the lease information, including the "client identifier"
(if supplied) and “chaddr”, in the lease database. When the setting is
changed and the client renews the lease, the server will determine that
it should use the “chaddr” to search for the existing lease. If the
client hasn't changed its MAC address, the server should successfully find the existing lease; otherwise a
new lease will be allocated.
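A sketch of the two parameters discussed in the preceding sections, shown here with non-default values:
"Dhcp4": { "echo-client-id": false, "match-client-id": false, ... }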
8.2.23. DHCPv4-over-DHCPv6: DHCPv4 Side¶

The support of DHCPv4-over-DHCPv6 transport is described in RFC 7341 and is implemented using cooperating DHCPv4 and DHCPv6 servers; this section is about the configuration of the DHCPv4 side. Note that this support is still considered experimental and the details of the inter-process communication may change; both the DHCPv4 and DHCPv6 sides should be running the same version of Kea. For instance, the support of port relay (RFC 8357) introduced an incompatible change. To allow a subnet to be used with DHCPv4-over-DHCPv6 traffic, dedicated entries (such as 4o6-interface, 4o6-interface-id, and 4o6-subnet) have been added; the presence of any of them marks the subnet for this mode.
The following configuration was used during some tests:
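The original test configuration is not reproduced here; a minimal sketch of the DHCPv4-over-DHCPv6 entries, with placeholder values, might look like:
"Dhcp4": { "dhcp4o6-port": 6767, "subnet4": [ { "subnet": "10.11.12.0/24", "pools": [ { "pool": "10.11.12.100 - 10.11.12.199" } ], "4o6-interface": "eth0", "4o6-subnet": "2001:db8:1:1::/64", ... } ], ... }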
{ #¶ such as which subnet it belongs to. However, if the configuration has changed, it is possible that a lease could exist with a subnet-id, but without any subnet that matches it. Also, it may. If not explicitly configured to some other value, this level will be used.
fix- if a data inconsistency is discovered, try to correct it. If the correction is not successful,¶
In order to support such features as DHCP LeaseQuery (RFC 4388)4": { "store-extended-info": true, ... }
When enabled, information relevant to the DHCPREQUEST asking for the lease is added into the lease’s user-context as a map element labeled “ISC”. Currently, the map will contain a single value, the relay-agent-info option (DHCP Option 82), when the DHCPREQUEST received contains it. Other values may be added at a future date. Since DHCPREQUESTs sent as renewals will likely not contain this information, the values taken from the last DHCPREQUEST that did contain it will be retained on the lease. The lease’s user-context will look something like this:
{ "ISC": { "relay-agent-info": "0x52050104AABBCCDD" } }
Note
This feature is intended to be used in conjunction with an upcoming LeaseQuery hook library and at this time there is other use for this information within Kea.¶4": { "multi-threading": { "enable-multi-threading": true, "thread-pool-size": 4, "packet-queue-size": 16 } ... }
8.2.27. Multi-Threading Settings in Different Backends¶
Both kea-dhcp4 and kea-dhcp6 are tested internally to determine which settings gave the best performance. Although this section describes our results, those are just4 are:
thread-pool-size: 4 when using
memfilefor storing leases.
thread-pool-size: 12 or more when using
mysqlfor storing leases.
thread-pool-size: 8 when using
postgresql.
Another very important parameter is
packet-queue-size and in our tests we
used it as multiplier of
thread-pool-size. So actual setting strongly depends
on
thread-pool-size.
Our tests reported best results when:
packet-queue-size: 7 *
thread-pool-sizewhen using
memfilefor storing leases. In our case it’s 7 * 4 = 28. This means that at any given time, up to 28 packets could be queued.
packet-queue-size: 66 *
thread-pool-sizewhen using
mysqlfor storing leases. In our case it’s 66 * 12 = 792. This means that up to 792 packets could be queued.
packet-queue-size: 11 *
thread-pool-sizewhen using
postgresqlfor storing leases. In our case it’s 11 * 8 = 88.
8.2.28. IPv6-only Preferred Networks¶
A RFC8925 recently published the IPv6 connectivity is available and they can shut down their IPv4 stack. The new option v6-only-preferred content is a 32 bit unsigned integer and specifies for how long the device should disable its stack for. The value is expressed in seconds.
The RFC mentions V6ONLY_WAIT timer. This is implemented in Kea by setting the value of v6-only-preferred option. This follows the usual practice of setting options. The option value can be specified on pool, subnet, shared network, or global levels, or even via host reservations.
Note there is no special processing involved. This follows the standard Kea option processing regime. The option will not¶ the
cache-max-ageis configured, it is compared with the age and too old leases are not reusable (this means that the value 0 for
cache-max-agedisables the lease cache feature)
- if the
cache-thresholdis configured and is between 0. and 1. it expresses the percentage of the lease valid lifetime which is allowed for the lease age. Values below and including 0. and values greater than 1. disable the lease cache feature.
In the example a lease with a valid lifetime of 2000 seconds can be reused if it was committed less than 500 seconds ago. With a lifetime of 3000 seconds use the reusable lifetime.
8.3. Host Reservation in DHCPv4¶
There are many cases where it is useful to provide a configuration on a per-host basis. The most obvious one is to reserve a specific, static address for exclusive use by a given client (host); the returning client will receive the same address from the server every time, and other clients will generally not receive that address. Another situation when host reservations are applicable is when a host has specific requirements, e.g. a printer that needs additional DHCP options. Yet another possible use case is to define unique names for hosts.
Note that there may be cases when a new reservation has been made for a client for an address currently in use by another client. We call this situation a “conflict.” These conflicts get resolved automatically over time as described in subsequent sections. Once must have a unique host identifier.
Kea requires that the reserved address must be within the subnet. Kea 1.7.10 is the last release that does not enforce this.. (Note that¶
In a typical will still match inbound client
packets to a subnet as before, but when the subnet’s reservation mode is
set to
"global", Kea will look
Beginning with Kea 1.9.1 reservation mode was replaced by three
boolean flags
"reservations-global",
"reservations-in-subnet"
and
"reservations-out-of-pool" which allow to configure host
reservations both global and in a subnet. In such case a subnet
host reservation has the preference on a global reservation
when both exist for the same client.
8.3.2. Conflicts in DHCPv4 Reservations¶ is not able to may be lost).
The recovery mechanism allows the server to fully recover from a case where reservations conflict with existing leases; however, this procedure will take roughly as long as the value set for renew-timer. The best way to avoid such¶
When the reservation for a client includes the
hostname, the server
will return this hostname to the client in the Client FQDN or Hostname
option. The server responds with the Client FQDN option only if the
client has included the Client FQDN option in its message to the server. The
server will respond with the Hostname option if the client included
the Hostname option in its message to the server, or if the client
requested the Hostname option using the Parameter Request List option.
The server will return assigning the “alice-laptop.example.isc.org.” hostname, } }
8.3.4. Including Specific DHCPv4 Options in Reservations¶ host level have the highest priority. In other words, if there are options defined with the same type on global, subnet, class, and host levels, the host-specific values will be used.
8.3.5. Reserving Next Server, Server Hostname, and Boot File Name¶¶
Using Expressions in Classification explains how to configure
the server to assign classes to a client, based on the content of the
options that this client sends to the server. Host reservations
mechanisms also allow for the static assignment of classes to clients.
The definitions of these classes are placed in the Kea configuration or
a database. The following configuration snippet shows how to specify that
a client belongs using the “member” operator. For example:
{ "client-classes": [ { "name": "dependent-class", "test": "member('KNOWN')", "only-if-required": true } ] } specific use cases.
8.3.7. Storing Host Reservations in MySQL, PostgreSQL, or Cassandra¶
It is possible.
8.3.8. Fine-Tuning DHCPv4 Host Reservation¶. Second, when renewing a lease, an additional check must be performed to see whether the address being renewed is reserved for another client. Finally, when a host renews an address, the server must check whether there is a reservation for this host, so may assume": [ { "subnet": "192.0.2.0/24", "reservation-mode": "disabled", ... } ] }
An example configuration using global reservations is shown below:
"Dhcp4": { "reservation-mode": "global", "reservations": [ { "hw-address": "01:bb:cc:dd:ee:ff", "hostname": "host-one" }, { "hw-address": "02:bb:cc:dd:ee:ff", "hostname": "host-two" } ], "subnet4": [ { "subnet": "192.0.2.0/24", ... } ] }
The meaning of the reservation flags are:
reservations-global: fetch global reservations.
reservations-in-subnet: fetch subnet reservations. For a shared network this includes all subnets member of the shared network.
reservations-out-of-pool: the makes sense only when the
reservations-in-subnetflag is true. When
reservations-out-of-poolis true the server4": { "reservations-global": false, "reservations-in-subnet": false, ... }
global:
"Dhcp4": { "reservations-global": true, "reservations-in-subnet": false, "reservations-out-of-pool": false, ... }
out-of-pool:
"Dhcp4": { "reservations-global": false, 4": { "subnet4": [ { "subnet": "192.0.2.0/24", "reservations-global": false, "reservations-in-subnet": false, ... } ] }
An example configuration using global reservations is shown below:
"Dhcp4": { "reservations-global": true, "reservations-in-subnet": false, "reservations": [ { "hw-address": "01:bb:cc:dd:ee:ff", "hostname": "host-one" }, { "hw-address": "02:bb:cc:dd:ee:ff", "hostname": "host-two" } ], "subnet4": [ { will check host reservation identifiers¶
In some deployments, such as mobile, clients can roam within the network and certain parameters must be specified regardless of the client’s current location. To facilitate such a need, a global reservation mechanism has been DHCP logic implemented in Kea. It is very easy to misuse this feature and get a configuration that is inconsistent. To give a specific example, imagine a global reservation address in global reservation server should lookup global reservations. "reservations-global": true, # Specify if server should lookup in-subnet reservations. "reservations-in-subnet": false, # Specify if server can assume that all reserved addresses # are out-of-pool. "reservations-out-of-pool": false, "pools": [ { "pool": "10.0.0.10-10.0.0.100" } ] } ] }
When using database backends, the global host reservations are distinguished from regular reservations by using a subnet-id value of zero.
8.3.10. Pool Selection with Client Class Reservations¶
Client classes can be specified both in the Kea configuration file and/or.
8.3.11. Subnet Selection with Client Class Reservations¶
There is one specific use case when subnet selection may be influenced by client classes specified within host reservations. This is the case server should lookup global reservations. "reservations-global": true, # Specify if server should lookup in-subnet reservations. "reservations-in-subnet": false, # Specify if server can assume that all reserved addresses # are out-of-pool. the
Pool Selection with Client Class Reservations. This time, however, there
are two subnets, each of them having a pool associated with a different
class. The clients which don’t have a reservation for the
reserved_class
will be assigned an address from the subnet 192.0.3.0/24. Clients having
a reservation for the
reserved_class will be assigned an address from
the subnet 192.0.2.0/24..
8.3.12. Multiple Reservations for the Same IP¶
Host Reservations were designed to preclude creation of multiple
reservations for the same IP address within a particular subnet to avoid
the situation when two different clients compete for the same address.
When using the default settings, the server returns a configuration error
when it finds two or more reservations for the same IP address within
a subnet in the Kea configuration file. The host_cmds: Host Commands hooks the IP addresses. Using different IP addresses for each interface is impractical and is considered a waste of the IPv4 address space, especially since the host typically uses only one interface for communication with the server, hence only one IP address is in use.
This causes a need for creating multiple host reservations for a single
IP address within a subnet and it is supported beginning with Kea 1.9.1
release as optional mode of operation enabled with the
ip-reservations-unique global parameter.
The
ip-reservations-unique is a boolean parameter, which defaults to
true, which forbids the specification of more than one reservation
for the same IP address within address and via the
Configuration Backend in DHCPv4. If the new setting of this parameter conflicts with
the currently used backends (backends do not support the new setting),
the new setting is ignored and the warning log message is output.
The backends continue to use the default setting, i.e.. The administrators must make sure that at most one reservation for server does not verify whether multiple reservations for
the same IP address exist in the host databases (MySQL and/or
PostgreSQL) when
ip-reservations-unique is updated from
true to
false. This may cause issues with lease allocations.
The administrator must ensure that there is at most one reservation
for each IP address within each subnet prior to this configuration
update.
8.5. Server Identifier in DHCPv4¶ will use multiple server identifiers if it is receiving queries on multiple interfaces.
It is possible to override the default server identifier values by specifying the “dhcp-server-identifier” option. This option is only supported at the global, shared network, and subnet levels; it must not be specified on the client class or host reservation levels.¶
The DHCPv4 server differentiates between directly connected clients, clients trying to renew leases, and clients sending their messages through relays. For directly connected clients, the server will check the configuration for the interface on which the message has been received and, if the server configuration doesn’t match any configured subnet, the message is discarded.
Assuming that the server’s interface is configured with the IPv4 address 192.0.2.3, the server will only process messages received through this interface from a directly connected client if there is a subnet configured to which this IPv4 address belongs, such as 192.0.2.0/24. The server will use
Starting with Kea 1.7.9, the order used to find a subnet which matches required conditions to be selected is the ascending subnet identifier order. When the selected subnet is a member of a shared network the whole shared network is selected.
8.6.1. Using a Specific Relay Agent for a Subnet¶
A relay must have an interface connected to the link on which the clients are being configured. Typically the relay has an IPv4 address configured on that interface, which belongs to the subnet from which the server will assign where this is the case are long-lasting network renumbering (where both old and new address space.
Note
The current version of Kea uses the “ip-addresses” parameter, which supports specifying a list of addresses.
8.6.2. Segregating IPv4 Clients in a Cable Network¶ CMTS (Cable Modem Termination System) with one CM MAC (a physical link that modems are connected to). We want the modems to get addresses from the 10.1.1.0/24 subnet, while everything connected behind the modems should get addresses from another subnet (192.0.2.0/24). The CMTS that acts as a relay uses address 10.1.1.1. The following configuration can serve that configuration:
)¶ will do a sanity check (to see whether the client declining an address really was supposed to use it), and then will conduct a clean-up operation. Any DNS entries related to that address will be removed, the fact will be logged, and hooks will be triggered. After that is complete, the address will be marked as declined (which indicates that it is used by an unknown entity and thus not available for assignment) and a probation time will be set on it. Unless otherwise configured, the probation period lasts 24 hours; after that period, will instruct global and subnet-specific variants). (See Statistics in the DHCPv4 Server and Hooks simply use assigned-addresses/total-addresses. This would cause a bias towards under-representing pool utilization. As this has a potential for major issues, ISC decided not to decrease assigned-addresses immediately after receiving DHCPDECLINE, but to do it later when Kea recovers the address back to the available pool.
8.8. Statistics in the DHCPv4 Server¶
The DHCPv4 server supports the following statistics:
Note
This section describes DHCPv4-specific statistics. For a general overview and usage of statistics, see Statistics.
Beginning with Kea 1.7.7 the DHCPv4 server provides two global parameters to control statistics default sample limits:
statistic-default-sample-count- determines the default maximum number of samples which are kept. The special value of zero¶
The management API allows the issuing of specific management commands, such as statistics retrieval, reconfiguration, or shutdown. For more details, see Management API. Currently, the only supported communication channel type¶
Kea allows loading as maybe¶
The following standards are currently supported:
- will kea-dhcp-ddns component do initiate appropriate DNS Update operations.
- Resolution of Fully Qualified Domain Name (FQDN) Conflicts among Dynamic Host Configuration Protocol (DHCP) Clients, RFC 4703: The DHCPv6 server uses DHCP-DDNS server to resolve conflicts.
- Client Identifier Option in DHCP Server Replies, RFC 6842: Server by default sends back client-id option. That capability may be disabled. See Echoing Client-ID (RFC 6842) for details.
- Generalized UDP Source Port for DHCP Relay, RFC 8357: The Kea server is able to handle Relay Agent Information Source Port suboption in a received message, remembers the UDP port and sends back reply to the same relay agent using this UDP port.
- IPv6-Only Preferred Option for DHCPv4, RFC 8925: The Kea server is able to designate its pools and subnets as v6only-preferred and send back the v6-only-preferred option to clients that requested it.
8.11.1. Known RFC Violations¶
In principle Kea seeks to be a reference implementation aiming to implement 100% of the RFC standards. However, in some cases there are practical aspects that make Kea not adhere completely to the text of the RFC documents.
- RFC 2131 on page 30 says that if the incoming REQUEST packet there is no requested IP address option and ciaddr is not set, the server is supposed to respond with NAK. However, there are broken clients out there that will always send a REQUEST without those. As such, Kea accepts such REQUESTs, will assign an address and will respond with an ACK.
- RFC 2131 table 5 says that a DECLINE message must have server-id set and should be dropped if that option is missing. However, ISC DHCP does not enforce this, presumably as a compatibility effort for broken clients. Kea team decided to follow suit.
8.12. DHCPv4 Server Limitations¶
These are the current limitations of the DHCPv4 server software. Most of them are reflections of the current stage of development and should be treated as “not implemented yet,” rather than as actual limitations. However, some of them are implications of the design choices made. Those are clearly marked as such.
- On Linux and BSD system families the DHCP messages are sent and received over the will fail¶
A collection of simple-to-use examples for the DHCPv4 component of Kea is available with the source files, located in the doc/examples/kea4 directory.
8.14. Configuration Backend in DHCPv4¶
In the Kea Configuration Backend section we have described the Configuration Backend feature, its applicability, and its limitations. This section focuses on the usage of the CB with the DHCPv4 server. It lists the supported parameters, describes limitations, and gives examples of the DHCPv4 server configuration to take advantage of the CB. Please also refer to the sibling section Configuration Backend in DHCPv6 for the DHCPv6-specific usage of the CB.
8.14.1. Supported Parameters¶
config-control, which includes the necessary information to connect
to the database. In the Kea 1.6.0 release, however, only a subset of
the DHCPv4 server parameters can be stored in the database. All other
parameters must be specified in the JSON configuration file, if
required.
The following table lists DHCPv4 specific parameters supported by the Configuration Backend, with an indication on which level of the hierarchy it is currently supported. “n/a” is used in cases when a given parameter is not applicable on a particular level of the hierarchy, or in cases when the parameter is not supported by the server at this level of the hierarchy. “no” is used when the parameter is supported by the server on the given level of the hierarchy, but is not configurable via the Configuration Backend.
All supported parameters can be configured via the
cb_cmds hooks library
described in the cb_cmds: Configuration Backend Commands section. The general rule is that
the scalar global parameters are set using
remote-global-parameter4-set; the shared network-specific parameters
are set using
remote-network4-set; and the subnet- and pool-level
parameters are set using
remote-subnet4-set. Whenever
there is an exception to this general rule, it is highlighted in the
table. The non-scalar global parameters have dedicated commands; for example,
the global DHCPv4 options (
option-data) are modified using
remote-option4-global-set.
The Configuration Sharing and Server Tags they can be associated with only one server tag. This rule does not apply to the configuration elements associated with “all” servers. Any configuration element associated with “all” servers (using “all” keyword as a server tag) is used by all servers connecting to the configuration database.
8.14.2. Enabling Configuration Backend¶ which contains one element comprising database type, location,
and the credentials to be used to connect to this database. (Note that
the parameters specified here correspond to the database specification
for the lease database backend and hosts database backend.) Currently
only one database connection can be specified on the
config-databases list. The server will connect to this database
during the startup or reconfiguration, and will fetch for
the configuration changes in the database. The frequency of polling is
controlled by the
config-fetch-wait-time parameter, expressed
in seconds; it is the period between the time when the server
completed last polling (and possibly the local configuration update) and
the time when it will begin polling again. In the example above, this period
is set to 20 seconds. This means that after adding a new configuration
into the database (e.g. adding new subnet), it will take up to 20 seconds
(plus the time needed to fetch and apply the new configuration) before
the server starts using this subnet. The lower the
config-fetch-wait-time value, the shorter the time for the server to
react to the incremental configuration updates in the database. On the
other hand, polling the database too frequently may impact the DHCP
server’s performance, because the server needs to make at least one query
to the database to discover the pending configuration updates. The
default value of the
config-fetch-wait-time is 30 seconds.
The
config-backend-pull command can be used to force the server to
immediately poll the configuration changes from the database and avoid
waiting for the next fetch cycle. The command was added in 1.7.1 Kea
release for DHCPv4 and DHCPv6 servers.
Finally, in the configuration example above, two hooks hooks library,
libdhcp_cb_cmds.so, is optional. It should
be loaded when the Kea server instance is to be used for managing the
configuration in the database. See the cb_cmds: Configuration Backend Commands section for
details. Note that this hooks library is only available to ISC
customers with a support contract. | https://kea.readthedocs.io/en/kea-1.9.7/arm/dhcp4-srv.html | 2022-06-25T14:19:43 | CC-MAIN-2022-27 | 1656103035636.10 | [] | kea.readthedocs.io |
The City of Ann Arbor's GIS department routinely requests (and gets) non-disclosure agreements (NDAs) from consulting firms that are looking for geographical information system data in support of work done in the city.
This set of files includes the request for the NDA form, plus three months worth of routine correspondence on this topic for January through March 2022.
Download
1258 - FOIA GRANTED.pdf
Download
Ann Arbor NDA Template EXT.docx
Download
1258 - Vielmetti Final.pdf | https://a2docs.org/view/587 | 2022-06-25T13:35:01 | CC-MAIN-2022-27 | 1656103035636.10 | [] | a2docs.org |
. It is enabled by default, but if neccessary switch it on explicitly with:
r <- mosek(prob, list(verbose=10))
Run the optimization and analyze the log output, see Sec. 8.1 (Understanding optimizer log). In particular:
check if the problem setup (number of constraints/variables etc.) matches your expectation.
check solution summary and solution status.
Dump the problem to disk if necessary to continue analysis. See Sec. 7.3.3 (Saving a problem to a file).
use a human-readable text format, preferably
*.ptfif you want to check the problem structure by hand. Assign names to variables and constraints to make them easier to identify.
r <- mosek_write(prob, "dump.ptf");
use the MOSEK native format
*.task.gzwhen submitting a bug report or support question.
r <- mosek_write(prob, "dump.task.gz");
Fix problem setup, improve the model, locate infeasibility or adjust parameters, depending on the diagnosis.
See the following sections for details. | https://docs.mosek.com/10.0/rmosek/debugging-tutorials.html | 2022-06-25T14:53:14 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.mosek.com |
Note: For each user transaction in the sponsored mode, you will be charged a commission of 0.001 WAVES. If the balance of your account does not have enough WAVES to pay for user transactions, your sponsored transactions will be canceled.
Example: You activated the sponsored transaction. You have 3 WAVES free, 5 WAVES in orders and 10 WAVES in staking. As soon as.
To deactivate sponsored transactions on the Wallet screen find your token in the list, hover cursor over it and in its menu (⋮) click Disable Sponsorship.
In the following window click Sign to deactivate the sponsorship. Deactivation will be processed with the next block.
When the token sponsorship is deactivated, the special % symbol will be removed from the token logo.
See more articles in the Tokens Management chapter.
If you have difficulties with Waves.Exchange, please create a support (opens new window) ticket or write a question on our forum (opens new window). | https://docs.waves.exchange/en/waves-exchange/waves-exchange-online-desktop/online-desktop-asset/online-desktop-sponsored-trx | 2022-06-25T14:32:20 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.waves.exchange |
Run QA
Normally, memoQ runs quality checks on a document as you translate it.
However, you may need to check different things - or you need to check the consistency on the segment level, which is not done automatically.
If you want to run different checks than those in your project, or you want to check for consistency, use the Run QA command.
If you need to run different checks: Before you run QA, make sure you set up the new QA options in Project home.
How to get here
- Open a project. You may open a document or two, but this is not necessary.
- On the Review ribbon, click Quality Assurance.
The Run QA window opens.
What can you do?
Click the Check documents radio button.
Choose the documents or segments that memoQ must check: Choose a scope.
A scope tells memoQ which documents to look at. You have the following options - choose one radio button:
- Project: memoQ checks all segments in all documents of the current project. If the project has two or more target languages, memoQ will check segments in every target language.
- Active document: memoQ checks all segments in the active document. The active document is the one you are looking at in the translation editor. You can choose this only if you are working on a document in the translation editor.
- Selected documents: memoQ checks all segments in the selected documents. You can choose this only if you select several documents in Translations under Project home. It doesn't work when the translation editor is in the front.
- From cursor: memoQ checks segments below the current position in the active document. The active document is the one you are looking at in the translation editor. You can choose this only if you are working on a document in the translation editor.
- Open documents: memoQ checks all segments in every document that is open in a translation editor tab.
- Selection: memoQ checks the selected segments in the active document. The active document is the one you are looking at in the translation editor. You can choose this only if you are working on a document in the translation editor.
- Work on views check box: Check this to make memoQ go through segments in the views in the current project. You can choose this only if there is at least one view in the project.
To check segments in just one target language: Before opening the Run QA window, choose a language on the Translations pane of Project home. Select all documents, then open Run QA, and choose Selected documents.
Click the Check translation memory radio button.
You can check one of the translation memories in the project. Select the translation memory in the list.
To check a translation memory: You need to open a project that uses it.
Previous errors and warnings will be gone: When you run QA this way, it will erase any previous warnings and errors from the segments that are checked.
Then, decide what happens after the QA checks are run:
- To review the errors and warnings in the Resolve errors and warnings tab, immediately after running the checks: Check the Proceed to resolve warnings after QA check box.
- To look at the inconsistent translations in the translation editor: Check the Create view of inconsistent translations check box.
- Check the Export QA report to this location check box. Find a folder and a name for the report in the box below the check box.
- Normally, memoQ will save the report in your Documents folder. The file will have the name of the project and the date of the QA check. To change this, click Browse. Choose another folder and another name for the file.
If you are a project manager, you can get a separate report for each translator: You will probably run QA on the work of several translators at once. memoQ can save a separate report for each translator: You can send these reports directly to the translators. They will not see each other's QA reports. To do this, check the Separate reports per translator check box. memoQ will create several files.
Use a web browser to view the report. memoQ will saves a HTML file. The file will be large because it is intelligent:
When you finish
To run the QA checks, and then open the Resolve errors and warnings tab, or open the translation editor, or return to Project home: Click OK.
To return to Project home or to the translation editor, without running the QA checks: Click Cancel.
In the translation editor, there will be a lightning sign in every segment where the QA checks found at least one issue.
Warning may disappear when you confirm a segment: When you confirm a segment, QA will check the segment again. But this time only 'quick' checks are run, consistency checks are not. The warning sign may disappear even if you do not change the segment or the QA settings profile. | https://docs.memoq.com/8-5/en/Places/run-qa.html | 2022-06-25T13:32:49 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['../Images/r-s/run-qa.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.memoq.com |
that were.
Note
For unit tests, this is the assembly that contains test code, not the assembly that contains the code of the application that you are testing. For example, if your solution contains a project named BankAccount and a corresponding test project named BankAccountTest, specify /testcontainer:BankAccountTest.dll.
Note
Because the test metadata file also lists tests that you can run, you must not specify both the /testcontainer and /testmetadata options in a single command line. Doing this would be ambiguous and would produce an error.
.
Note
Because the test container contains tests that you can run, you must not specify both the /testcontainer and /testmetadata options in a single command line. Doing this would be ambiguous and would produce an error. that are contained in multiple test lists, use the /testlist option multiple times. Any ordered tests in the test list will be run.
Note
You can use the /testlist option only if you also use the /testmetadata option..
Note
You must use the /testcontainer option in order to use the /category option..
Note
If your filter consists of a single category such as /category:group1, you do not have to enclose the filter in quotation marks. However, if your filter references more than one category such as /category:"group1&group2" then the filter has to be enclosed in quotation marks.
/test
/test:[ test name ]
Use the /test option to specify individual tests to run. To run multiple tests, use the /test option multiple times.
Note
You can use the /test option with either the /testcontainer option or with the /testmetadata option, but not with both.
You can use the /testlist option and the /test option together. This is equivalent to selecting both a test list and one or more individual tests in the Test List Editor window and then choosing Run Tests.
The string that'.
Note
The value that you specify with the /test option is tested against not only the name of the test, but also the path of that test, as seen in Solution Explorer, or, with unit tests, to their fully qualified name..
Note
For more information about test settings files, see Create Test Settings for Automated System Tests Using Microsoft Test Manager.
:
Note
The actual selection of property IDs that you can use with the /detail option varies according to test type. Therefore, this list is only an approximation. In particular, if you are using custom test types, the selection of properties will be different. To know which propertyIDs you can use, examine the test results file produced by the test run. For more information about test results files, see How to: Save and Open Web Performance and Load Test Results in Visual Studio. ...
See Also
Reviewing Test Results in Microsoft Test Manager
Running automated tests from the command line | https://docs.microsoft.com/en-us/previous-versions/ms182489(v=vs.140)?redirectedfrom=MSDN | 2022-06-25T15:33:58 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.microsoft.com |
Source code for spack.cmd.restage
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) import llnl.util.tty as tty import spack.cmd import spack.cmd.common.arguments as arguments import spack.repo description = "revert checked out package source code" section = "build" level = "long" | https://spack.readthedocs.io/en/v0.17.0/_modules/spack/cmd/restage.html | 2022-06-25T13:22:27 | CC-MAIN-2022-27 | 1656103035636.10 | [] | spack.readthedocs.io |
Issuing Custom Licenses for Extensions
For an extension that delivers a piece of software, which already has a proprietary licensing mechanism in place, it is possible to issue and use proprietary licenses while still handling license management through Plesk Key Administrator. This is done with the help of the ISV endpoint – a custom web service that handles license creation, modification and termination requests for Key Administrator.
See Key Administrator ISV Licensing: Integration Guide for information on implementing the ISV endpoint, ISV Protocol reference, etc.
The workflow of issuing and installing a custom license for an extension can be found here. | https://docs.plesk.com/en-US/onyx/extensions-guide/plesk-extensions-basics/monetizing-extensions/issuing-custom-licenses-for-extensions.78779/ | 2019-11-12T05:35:45 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.plesk.com |
API limitations of the pool of public nodes
Pool of public nodes is a set of public nodes which provide data for our products (Waves Wallet, DEX and others) via REST API.
You can use our public nodes to retreive the information from Waves' blockchain, but we recommend you to use your own nodes, because our pool has some limitations listed in the table below.
[!NOTE]
- There is a Nginx-server in front of each node in the pool, which limits the amount of connections and the number of requests per second a client is allowed to send to a node.
- In the table below regular expressions are used to express paths' names, such as, for example, "\d+".
- The term burst below — is a Nginx's setting that defines the maximum size of the "splash" of the requests. If the amount of arriving requests per second exceeds the defined value, all the exceeding requests are put into a queue. The burst is the size of that queue. If the number of the exceeding requests begins to surpass the value of the burst, then new exceeding requests will not be put into the queue, and will be canceled with the error. Read more in the documentation. | http://docs.wavesplatform.com/en/waves-node/api-limitations-of-the-pool-of-public-nodes.html | 2019-11-12T06:37:40 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.wavesplatform.com |
SimulateCustomPolicy
Simulate how a set of IAM policies and optionally a resource-based policy works with a list of API operations and AWS resources to determine the policies' effective permissions. The policies are provided as strings.
The simulation does not perform the API operations; it only checks the authorization to determine if the simulated policies allow or deny the operations.
If you want to simulate existing policies attached to an IAM user, group, or role, use SimulatePrincipalCustomPolicy.
If the output is long, you can use
MaxItems and
Marker
parameters to paginate the results.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- ActionNames.member.N
A list of names of API operations to evaluate in the simulation. Each operation is evaluated against each resource. Each operation must include the service identifier, such as
iam:CreateUser. This operation does not support using wildcards (*) in an action name.
Type: Array of strings
Length Constraints: Minimum length of 3. Maximum length of 128.
Required: Yes
- CallerArn
The ARN of the IAM user that you want to use as the simulated caller of the API operations. permissions API operations. In other words, do not use policies designed to restrict what a user can do while using the temporary credentials..
If you include a
ResourcePolicy, then it must be applicable to all of the resources included in the simulation or you receive an invalid input error. API operations Amazon ARN representing the AWS account ID that specifies the owner of any simulated resource that does not identify its owner in the resource ARN. Examples of resource ARNs include.
The ARN for an account uses the following syntax:
arn:aws:iam::AWS-account-ID:root. For example, to represent the account with the 112233445566 ID, use the following ARN:
arn:aws:iam::112233445566-ID:root.
- PolicyEvaluation
The request failed because a provided policy could not be successfully evaluated. An additional detailed: | https://docs.amazonaws.cn/IAM/latest/APIReference/API_SimulateCustomPolicy.html | 2019-11-12T06:30:17 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.amazonaws.cn |
When you upgrade to a new version of PowerBuilder, your existing targets need to be migrated to the new version. Typically, when you open a workspace that contains targets that need to be migrated, or add a target that needs to be migrated to your workspace, PowerBuilder prompts you to migrate the targets. However, there are some situations when you need to migrate a target manually. For example, if you add a library that has not been migrated to a target's library list, you will not be able to open objects in that library until the target has been migrated.
You cannot migrate a target that is not in your current workspace and you must set the root of the System Tree or the view in the Library painter to the current workspace.
Before you migrate
There are some steps you should take before you migrate a target:
Use the Migration Assistant to check for obsolete syntax or the use of reserved words in your code
Check the release notes for migration issues
Make backup copies of the target and libraries
Make sure that the libraries you will migrate are writable
Always back up your PBLs before migrating
Make sure you make a copy of your PBLs before migrating. After migration, you cannot open them in an earlier version of PowerBuilder.
The Migration Assistant is available on the Tool page of the New dialog box. For help using the Migration Assistant, click the Help (?) button in the upper-right corner of the window and click the field you need help with, or click the field and press F1. If the Migration Assistant finds obsolete code, you can fix it in an earlier version of PowerBuilder to avoid errors when you migrate to the current version.
PowerBuilder libraries and migration
PowerBuilder libraries (PBLs) contain a header, source code for the objects in the PBL, and binary code. There are two differences between PowerBuilder 10 and later PBLs and PBLs developed in earlier versions of PowerBuilder:
The source code in PowerBuilder 10 and later PBLs is encoded in Unicode (UTF-16LE, where LE stands for little endian) instead of DBCS (versions 7, 8, and 9) or ANSI (version 6 and earlier).
The format of the header lets PowerBuilder determine whether it uses Unicode encoding. The header format for PowerBuilder 10 is the same as that used for PUL files in PowerBuilder 6.5 and for PKL files in PocketBuilder. These files do not need to be converted to Unicode when they are migrated to PowerBuilder 10 or later.
When PBLs are migrated
Before opening a PBL, PowerBuilder checks its header to determine whether or not it uses Unicode encoding. PBLs are not converted to Unicode unless you specifically request that they be migrated.
You cannot expand the icon for a PBL from PowerBuilder 9 or earlier in the Library painter. To examine its contents, you must migrate it to PowerBuilder 10 or later.
When you attempt to open a workspace that contains targets from a previous release in PowerBuilder, the Targets to be Migrated dialog box displays. You can migrate targets from this dialog box, or clear the No Prompting check box to open the Migrate Application dialog box.
PowerBuilder dynamic libraries
If you plan to reference a PowerBuilder dynamic library (PBD) that was encoded in ANSI formatting (for example, if it was created in PowerBuilder 9 or earlier), you must regenerate the PBD to use Unicode formatting. Dynamic libraries that you create in PowerBuilder 10 or later use Unicode formatting exclusively.
For information on creating PBDs, see Creating runtime libraries.
The Migrate Application dialog box
The Migrate Application dialog box lists each PBL that will be migrated and lets you choose the type of messages that display during the migration process.
If you click OK, each PBL is first migrated to the new version of PowerBuilder. If necessary, PowerBuilder converts source code from DBCS to Unicode. PowerBuilder performs a full build and saves the source code back to the same PBL files. Changes to scripts display in informational messages in the Output window and are written to a log file for each PBL so that you can examine the changes later. Recommended changes are also written to the log file.
Migration from DBCS versions
The migration process automatically converts multibyte strings in DBCS applications to unicode strings. You do not need to select the Automatically Convert DBCS String Manipulation Functions check box for this conversion. If the migration encounters an invalid multibyte string, it sets the invalid string to a question mark and reports the error status. You can modify question marks in the Unicode output string after the migration.
The following two lines from a log file indicate that the FromAnsi function is obsolete and was replaced with the String function, and that an encoding parameter should be added to an existing instance of the String function:
2006/01/27 08:20:11 test.pbl(w_main).cb_1.clicked.4: Information C0205: Function 'FromAnsi' is replaced with function 'String'. 2006/01/27 08:20:11 test.pbl(w_main).cb_2.clicked.4: Information C0206: Append extra argument 'EncodingAnsi!' to function 'String' for backward compatibility.
The log file has the same name as the PBL with the string _mig appended and the extension .log and is created in the same directory as the PBL. If no changes are made, PowerBuilder creates an empty log file. If the PBL is migrated more than once, output is appended to the existing file.
PowerBuilder makes the following changes:
The FromUnicode function is replaced with the String function and the second argument EncodingUTF16LE! is added
The ToUnicode function is replaced with the Blob function and the second argument EncodingUTF16LE! is added
The FromAnsi function is replaced with the String function and the second argument EncodingAnsi! is added
The ToAnsi function is replaced with the Blob function and the second argument EncodingAnsi! is added
An Alias For clause with the following format is appended to declarations of external functions that take strings, chars, or structures as arguments or return any of these datatypes:
ALIAS FOR "functionname;ansi"
If the declaration already has an Alias For clause, only the string ;ansi is appended.
DBCS users only
If you select the Automatically Convert DBCS String Manipulation Functions check box, PowerBuilder automatically makes appropriate conversions to scripts in PowerBuilder 9 applications. For example, if you used the LenW function, it is converted to Len, and if you used the Len function, it is converted to LenA. The changes are written to the Output window and the log file. This box should be selected only in DBCS environments.
Adding PBLs to a PowerBuilder target
When you add PBLs from a previous release to a PowerBuilder target's library list, the PBLs display in the System Tree. The PBLs are not migrated when you add them to the library list. Their contents do not display because they have not yet been converted. To display their contents, you must migrate the target.
You can migrate a target from the Workspace tab of the System Tree by selecting Migrate from the pop-up menu for the target. You can also migrate targets in the Library painter if they are in your current workspace. | https://docs.appeon.com/appeon_online_help/pb2017r2/pbug/ch05s09.html | 2019-11-12T05:34:33 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.appeon.com |
Configure a Google Cloud
Configuring a Google cloud is a four step process:
Add a Google Cloud
To add an Google a Google Region
After creating a Google. GCP console image. | https://docs.cloudcenter.cisco.com/display/WORKLOADMANAGER/Configure+a+Google+Cloud | 2019-11-12T05:52:04 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['/download/attachments/43614282/Cloud%20accounts%20list_censored.jpg?version=1&modificationDate=1550533586742&api=v2',
None], dtype=object) ] | docs.cloudcenter.cisco.com |
Subscription form
One you have connected the app with your lists, you may want to add a form to your site to enable site visitors to subscribe. To do this, you need to create a form template.
If it doesn’t exist, create the template
templates/mailchimp/forms/subscribe.html with something like the following:
<perch:form <perch:success> <p>Thank you!</p> </perch:success> <div> <perch:labelEmail</perch:label> <perch:input <perch:errorRequired</perch:error> </div> <div> <perch:labelFirst name</perch:label> <perch:input <perch:errorRequired</perch:error> </div> <div> <perch:labelLast name</perch:label> <perch:input <perch:errorRequired</perch:error> </div> <div> <perch:input <perch:input <perch:input </div> </perch:form>
An important point is that the opening form tag has
app="perch_mailchimp" - this indicates to Perch that the content from this form should be directed to the MailChimp app to process.
Setting
mailer attributes
Use the
mailer attribute on form fields to indicate to the MailChimp app which field in your form you want to serve which purpose.
Setting which list to subscribe to
At the bottom of the template is a hidden field with the ID
list. The value of this field needs to be the ID value you can find listed under Lists within the MailChimp app in the Perch control panel. This is how we know which list to subscribe the user to.
If you wish to give the user the option of subscribing to multiple lists, you can do that with checkboxes, for example:
<div> <perch:labelNewsletter</perch:label> <perch:input </div> <div> <perch:labelSpecial offers</perch:label> <perch:input </div>
Here we give each checkbox field its own ID as usual. The
value attribute of each checkbox should be its list ID from the control panel, and you should set the
mailer="list" attribute.
Note that MailChimp generally discourages the use of multiple lists to segment users. They recomment that it’s better to have one list and use tools like groups to segment within that list.
Confirming the subscription
Many territories require you to ask the user to actively opt-in to receiving emails. This is done using the
confirm_subscribe field. If you don’t need this functionality, you can leave it as a hidden field:
<perch:input
If you do want it, turn it into a checkbox instead:
<perch:input
Unless the
confirm_subscribe field is sent with a positive value, the app won’t attempt to subscribe the user to the list.
Double opt-in
MailChimp offers a double opt-in feature, where the user must click through from a confirmation email to their address before joining the list. This is an extra step for the user, but helps ensure the quality of the list, and avoids malicious sign-ups.
You can enable double opt-in by adding the following attribute on the opening
perch:form tag:
double-optin="true"
Interests / Groups
Experimental feature!
Lists can be segmented with the use of interests or groups (MailChimp refers to this feature in two different ways). To offer a list of groups for the user to categorise themselves with, you need to know the ID of the group. At the moment the only way to get that is through the MailChimp API Playground. If you have the IDs for your groups, you can add them to the template like so:
<perch:input <perch:input
Each option should have its own ID as normal. The
value should be the ID of the interest, and the
mailer attribute should be set to
interests.
Note that interests are specific to a list, so the interest IDs need to match up with the list ID you use. This means that if you use interests, your form should only attempt to subscribe to one list.
We would ideally like to improve this feature in a future update, potentially listing the IDs in the control panel or even giving an automated way to add them to the template. If that’s of interest to you, please let us know by posting to the the forum.
Displaying your form
Once you have the form template all set up, it’s just a case of adding it to your page:
<?php perch_mailchimp_form('forms/subscribe'); ?>
Subscribing from other forms
You don’t need to use a dedicated form just to subscribe users to your list. You can add the functionality to an existing form, such as a contact form. To do that, make sure you’ve added the appropriate
list and
mailer values to the template, and then update the form’s
app attribute to also send the value to MailChimp:
<perch:form ... </perch:form>
The apps will execute in order so if you are doing a redirect then put the perch_mailchimp app first in the list so that processing happens before your form kicks in. | https://docs.grabaperch.com/addons/mailchimp/examples/subscription-form/ | 2019-11-12T05:21:31 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.grabaperch.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::DatabaseMigrationService::Types::DeleteEventSubscriptionMessage
- Inherits:
- Struct
- Object
- Struct
- Aws::DatabaseMigrationService::Types::DeleteEventSubscriptionMessage
- Defined in:
- (unknown)
Overview
Note:
When passing DeleteEventSubscriptionMessage as input to an Aws::Client method, you can use a vanilla Hash:
{ subscription_name: "String", # required }
Instance Attribute Summary collapse
- #subscription_name ⇒ String
The name of the DMS event notification subscription to be deleted.
Instance Attribute Details
#subscription_name ⇒ String
The name of the DMS event notification subscription to be deleted. | https://docs.aws.amazon.com/sdkforruby/api/Aws/DatabaseMigrationService/Types/DeleteEventSubscriptionMessage.html | 2019-11-12T05:21:32 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.aws.amazon.com |
- Click the Tasks tab.
2. To add a new task, click the Task button located on the top right of the screen.
3. Enter the Description of your new task and the Code name associated it with it and then click Save.
4. Hover your cursor over your newly created task from the Tasks page and then click the Edit button.
5. Click on the Assigned To box and add the employee(s) you wish to assign to the task and click Done.
You can also un-assign tasks for an employee by going to their direct page (go to the People's page and click on an employees name to view their page) and under Assigned Tasks, hover over assigned tasks click Un-assign. | https://docs.crowdkeep.com/en/articles/490001-how-do-i-create-and-assign-tasks-to-employees | 2019-11-12T05:38:27 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['https://downloads.intercomcdn.com/i/o/45065837/2eacc166fced484eee815440/image.png',
None], dtype=object) ] | docs.crowdkeep.com |
The Sentiance SDK can be configured to detect vehicle accidents/crashes during trips. You can be notified of these crash events by setting a callback as follows:
// Signature- (void)setCrashListener:(void (^)(NSDate *date, CLLocation *lastKnownLocation))crashCallback;// Usage[[SENTSDK sharedInstance] setCrashListener:^(NSDate *date, CLLocation *lastKnownLocation) {// Handle vehicle crash event}];
sentianceSdk.setCrashCallback(new CrashCallback() {@Overridepublic void onCrash(long time, @Nullable Location lastKnownLocation) {// crash detected}}); | https://docs.sentiance.com/sdk/appendix/detecting-vehicle-crashes | 2019-11-12T05:38:40 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.sentiance.com |
Installing a node on Windows
Install OpenJDK 8
Install OpenJDK 8 from this link.
Now check the JDK version using the cmd: Windows Command line app
cmd.exe, navigate to the folder with the jar file with the command
cd C:/waves and start waves node with command
java -jar waves.jar waves.conf.
Additional security
For added security, it is recommended to store your wallet and configuration applications on an encrypted partition. You can use software like BitLocker, TrueCrypt, AxCrypt, DiskCryptor, FreeOTFE, GostCrypt, VeraCrypt or else. You choose this application at your own risk!
Also, you may want to limit the use of these folders to designated users only. You can read about it here.
If you decide to use RPC, you should protect it with Windows embedded or any other firewall. You can read about it here. If your server is public and available to the Internet and you decide to enable and use RPC, then allow only certain methods using Nginx's proxy_pass moduleand do not forget to set the API key hash in waves config.
Also, do not forget to install an anti-virus and to keep the OS and all other security software up-to-date. | http://docs.wavesplatform.com/en/waves-node/how-to-install-a-node/on-windows.html | 2019-11-12T06:53:03 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.wavesplatform.com |
What Is Amazon Aurora?
Amazon Aur terabytes.
- Amazon Aurora DB Clusters
- Amazon Aurora Connection Management
- Using the Instance Endpoints
- How Aurora Endpoints Work with High Availability
- Amazon Aurora Storage and Reliability
- Amazon Aurora Security
- High Availability for Aurora
- Working with Amazon Aurora Global Database
- Replication with Amazon Aurora
Using the Instance Endpoints
In day-to-day operations, the main way that you use instance endpoints is to diagnose capacity or performance issues that affect one specific instance in an Aurora cluster. While connected to a specific instance, you can examine its status variables, metrics, and so on. Doing this can help you determine what's happening for that instance that's different from what's happening for other instances in the cluster.
In advanced use cases, you might configure some DB instances differently than others. In this case, use the instance endpoint to connect directly to an instance that is smaller, larger, or otherwise has different characteristics than the others. Also, set up failover priority so that this special DB instance is the last choice to take over as the primary instance. We recommend that you use custom endpoints instead of the instance endpoint in such cases. Doing so simplifies connection management and high availability as you add more DB instances to your cluster.
Each DB instance in an Aurora cluster has its own built-in instance endpoint, whose name and other attributes are managed by Aurora. You can't create, delete, or modify this kind of endpoint.
How Aurora Endpoints Work with High Availability becomes unavailable.
If the primary DB instance of a DB cluster fails, Aurora automatically fails over to a new primary DB instance. It does so by either promoting an existing Aurora Replica to a new primary DB instance or creating a new primary DB instance. If a failover occurs, you can use the cluster endpoint to reconnect to the newly promoted or created primary DB instance, or use the reader endpoint to reconnect to one of the Aurora Replicas in the DB cluster. During a failover, the reader endpoint might direct connections to the new primary DB instance of a DB cluster for a short time after an Aurora Replica is promoted to the new primary DB instance.
If you design your own application logic to manage connections to instance endpoints, you can manually or programmatically discover the resulting set of available DB instances in the DB cluster. You can then confirm their instance classes after failover and connect to an appropriate instance endpoint.
For more information about failovers, see Fault Tolerance for an Aurora DB Cluster. | https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html | 2019-11-12T05:30:49 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.aws.amazon.com |
Healthchecks.io
Healthchecks supports Cron Monitoring to monitor nightly backups, weekly reports, cron jobs and background tasks and receive alerts when your tasks don't run on time. To integrate Healthchecks with Zenduty, complete the following steps:
In Zenduty:In Zenduty:
To add a new Healthchecks "Healthchecks" from the dropdown menu.
Go to "Configure" under your integrations and copy the webhooks URL generated.
In Healthchecks:In Healthchecks:
Login to Healthchecks.
Click on Integrations from the dashboard. Select "Add Integration" corresponding to Webhook.
- Paste the link copied earlier in URL for “down” events and “up” events under the Integration settings.
Enter the following code under POST data:
{"name":"$NAME","status":"$STATUS","current time":"$NOW","code":"$CODE"}
Under Request Headers, set the "Content Type" as JSON and save.
Rename the Webhook as “Zenduty Webhook”. Click on "My First Check".
Click on "Change Schedule" under Schedule. Set the "Period" and "Grace Time" and save.
- Healthchecks is now integrated. | https://docs.zenduty.com/docs/Healthchecks | 2019-11-12T06:43:40 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['/img/Integrations/Healthchecks/1.png', None], dtype=object)
array(['/img/Integrations/Healthchecks/2.png', None], dtype=object)
array(['/img/Integrations/Healthchecks/3.png', None], dtype=object)
array(['/img/Integrations/Healthchecks/6.png', None], dtype=object)] | docs.zenduty.com |
Logentries
Log management and analytics by Logentries for development, IT operations and Security teams. To integrate Logentries with Zenduty, complete the following steps:
In Zenduty:In Zenduty:
To add a new Logentriesentries" from the dropdown menu.
Go to "Configure" under your integrations and copy the webhooks URL generated.
In Logentries:In Logentries:
Select "insightOps" which will take you to the "homepage".
Then go to "Data collection" and configure the data as per your operating system.
- Go to "Log search" where you will find the data that has been recieved.
- Select "Add alert" and fill in all the required details.
- Finally paste the copied URL in the "Web hook".
- Logentries is now integrated. | https://docs.zenduty.com/docs/Logenteries | 2019-11-12T06:36:47 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['/img/Integrations/Logenteries/1.png', None], dtype=object)
array(['/img/Integrations/Logenteries/2.png', None], dtype=object)
array(['/img/Integrations/Logenteries/3.png', None], dtype=object)
array(['/img/Integrations/Logenteries/4.png', None], dtype=object)] | docs.zenduty.com |
OctoberCMS plugin
Cumulus Core
Create a SaaS (multi-tenant) application: manage users, clusters (groups) and their plans.
OctoberCMS plugin
Cumulus Plus
Add dashboard and settings pages within seconds in your Cumulus based app.
OctoberCMS plugin
Power Components
Create forms and lists in frontend the way you love from backend (but with overriding views ;) ) | https://docs.init.biz/ | 2019-11-12T06:32:10 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['/cumuluscore/assets/images/cumulus-icon.png', 'Cumulus Core logo'],
dtype=object)
array(['/cumulusplus/assets/images/cumulus-plus-icon.png',
'Cumulus Plus logo'], dtype=object)
array(['/powercomponents/assets/images/powercomponents-icon.png',
'Power Components logo'], dtype=object) ] | docs.init.biz |
Some hosting companies offer script installers for commonly used website software such as Joomla!. This allows someone to Install Joomla! on a hosting server easily. There is no need to create a database, upload files or configure the programs for use. All things are done by these "Auto Installers" with a small amount of interaction. This article covers two of the most commonly offered automatic script installers:
Most web host providers have either one of them.
Thats it you are done!
Thats it you are done, Softaculous take only one step.
Its really fast to install and update using Auto Installers. | http://docs.joomla.org/index.php?title=Installing_Joomla_using_an_Auto_Installer&diff=102357&oldid=15417 | 2014-04-16T09:26:26 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.joomla.org |
Create a repositoryEstimated reading time: 1 minute
These are the docs for DTR version 2.3.6
To select a different version, use the selector below.
Since DTR is secure by default, you need to create the image repository before you can push the image to DTR.
In this example, we create the ‘golang’ repository in DTR.
Create a repository
To create a new repository, navigate to the DTR web application, and click the New repository button.
Add a name and description for the repository, and choose whether your repository is public or private:
- Public repositories are visible to all users, but can only be changed by users granted with permission to write them.
- Private repositories can only be seen by users that have been granted permissions to that repository.
Click Save to create the repository.
When creating a repository in DTR, the full name of the repository becomes
<dtr-domain-name>/<user-or-org>/<repository-name>. In this example, the full
name of our repository will be
dtr.example.org/dave.lauper/golang. | https://docs.docker.com/datacenter/dtr/2.3/guides/user/manage-images/ | 2018-02-17T23:39:56 | CC-MAIN-2018-09 | 1518891808539.63 | [array(['../../images/create-repository-1.png', None], dtype=object)
array(['../../images/create-repository-2.png', None], dtype=object)] | docs.docker.com |
an operating system image, such as one running Windows 8.1. Office 365 ProPlus on a shared virtual machine:
Create the operating system image:
- Follow the instructions to deploy Office 365 ProPlus) | https://docs.microsoft.com/en-us/DeployOffice/deploy-office-365-proplus-by-using-remote-desktop-services?ui=en-US&rs=en-US&ad=US | 2018-02-18T00:29:50 | CC-MAIN-2018-09 | 1518891808539.63 | [] | docs.microsoft.com |
iSCSI port binding creates connections for the traffic between the software or dependent hardware iSCSI adapters and the physical network adapters.
About this task
The following tasks discuss the iSCSI network configuration with a vSphere Standard switch.
You can also use the VMware vSphere® Distributed Switch™ and VMware NSX® Virtual Switch™ in the iSCSI port biding configuration. For information about NSX virtual switches, see the VMware NSX documentation.
If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group per each physical NIC. Then set the team policy so that each distributed port group has only one active uplink port. For detailed information on distributed switches, see the vSphere Networking documentation. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-3AC6E75E-E318-4A29-B1B0-B52D7C854B75.html | 2018-05-20T17:19:19 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.vmware.com |
IPsec Policy Agent Service Runtime
Applies To: Windows Server 2008
The IPsec Policy Agent Service applies IPsec policy and rule changes to the current operating state of the IPsec filtering software.
Note: The IPsec Policy Agent service provides compatibility with Internet Protocol security (IPsec) policies created by using Group Policy editing tools on computers that are running when the service cannot perform its required tasks, such as properly processing filters, or cannot protect traffic sent or received by one or more of the network adapters attached to the computer.
Events
Related Management Information
IPsec Policy Agent Service
Windows Firewall with Advanced Security | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc733277(v=ws.10) | 2018-05-20T18:38:50 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.microsoft.com |
What’s New in 0.12.2¶
Summary¶
This release of OpenMC is primarily a hotfix release with numerous important bug fixes. Several tally-related enhancements have also been added.
New Features¶
Three tally-related enhancements were added to the code in this release:
- A new
CollisionFilterclass that allows tallies to be filtered by the number of collisions a particle has undergone.
- A translation attribute has been added to
MeshFilterthat allows a mesh to be translated from its original position before location checks are performed.
- The
UnstructuredMeshclass now supports libMesh unstructured meshes to enable better ingration with MOOSE-based applications.
Bug Fixes¶
- Reset particle coordinates during find cell operation
- Cover quadric edge case
- Prevent divide-by-zero in bins_crossed methods for meshes
- Fix for translational periodic boundary conditions
- Fix angle sampling in CorrelatedAngleEnergy
- Fix typo in fmt string for a lattice error
- Nu-fission tally and stochastic volume bug fixes
- Make sure failed neighbor list triggers exhaustic search
- Change element to element.title to catch lowercase entries
- Disallow non-current scores with a surface filter
- Depletion operator obeys Materials.cross_sections
- Fix for surface_bins_crossed override
Contributors¶
This release contains new contributions from the following people: | https://docs.openmc.org/en/stable/releasenotes/0.12.2.html | 2021-07-23T18:21:23 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.openmc.org |
The URL can be used to link to this page
Your browser does not support the video tag.
AG 13-012
t3rook Lmdquist J CITY OF FEDERAL WAY LAW DEPARTMENT R�U"1�1NU r�U� 1. ORIGINATING DEPT./DIV 2. ORIGINATING STAFF PERSON CED/COMMUNITY SERVICES DEPARTMENT BxoOx LirrDQu1ST EXT: _x2401_ 3. DATE REQ. BY: 4. TYPE OF DOCUMENT (CHECK ONE): ❑ CONTRACTOR SELECTION DOCUMENT (E.G., RFB, RFP, RFQ) ❑ PUBLIC WORKS CONTRACT ❑ SMALL OR LIMITED PUBLIC WORKS CONTRACT � PROFESSIONAL SERVICE AGREEMENT ❑ MAINTENANCE AGREEMENT ❑ GOODS AND SERVICE AGREEMENT ❑ HUMAN SERVICES / CDBG ❑ REAL ESTATE DOCUMENT ❑ SECURITY DOCUMENT �E.G. BOND RELATED DOCUMENTS) ❑ ORDINANCE O RESOLUTION � CONTRACTAMENDMENT(AG#): O INTERLOCAL X OTHER PROMISSORY NOTE 5. PROJECTNAME:_IV�Y�--CS'���a�� ��,5 `�fl� IV� 6. NAME OF CONT CTOR: � ADDRESS: � S � TELEPHONL E-MAIL: ` - FAX: SIGNATURE NAME: �_ /�i.c�Nl ___ TITLE r•-- r—= 7. EXHIBITS AND ATTACHMENTS: ❑ SCOPE, WORK OR SERVICES ❑ COMPENSATION ❑ INSURANCE REQUIREMENTS/CERTIFICATE X ALL OTHER REFERENCED EXHIBITS ❑ PROOF OF AUTHORITY TO SIGN ❑ REQUIRED LICENSES ❑ PRIOR CONTRACT/AMENDMENTS 8. TERM: COMMENCEMENT DATE: N/A COMPLETION DATE: ( 2��% I I�' 9. TOTAL COMPENSATION $ N/A (INCLUDE EXPENSES AND SALES TAX, IF ANY� (IF CALCULATED ON HOURLY LABOR CHARGE - ATTACH SCHEDULES OF EMPLOYEES TITLES AND HOLIDAY RATES) REIMBURSABLE EXPENSE: ❑ 1�S Xtvo IF YES, MAXIMUM DOLLAR AMOUNT: $ IS SALES TAX OWED ❑ YES X NO IF YES, $ PAID BY: ❑ CONTRACTOR � CITY � PURCHASING: PLEASE CHARGE TO: N/A 10. DOCUMENT/CONTRACT REVIEW INITIAL / DATE REVIEWED ❑ PROJECT MANAGER 0 DIRECTOR ❑ RISK MANAGEMENT (�F �PL1CASLE) ❑ LAW —, � • �.1 Z INITIAL / DATE APPROVED 11. COUNCIL APPROVAL (�F .�rPLICaBLE) COMMITTEE APPROVAL DATE: COUNCIL APPROVAL DATE: 12. CONTRACT SIGNATURE ROUTING ❑ SENT TO VENDOR/CONTRACTOR DATE SENT: DATE REC'D: ❑ ATTACH: SIGNATURE AUTHORITY, INSURANCE CERTIFICATE, LICENSES, EXHIBITS � LAW DEPARTMENT ❑ SIGNATORY (1�YOx Ox Dit�CTOx) �CITY CLERK �"ASSIGNED AG# �SIGNED COPY RETURNED COMMENTS: INI IA / DATE SIGNED �• �" _ AG# ( � — ��'Z. DATE SENT: I - �I - �� I1/9 CITY OF CITY HALL � Federa I Way 33325 8th Avenue South Federal Way, WA 98003-6325 (253)835-7000 www cityoflederahvay com \PROMISSORY NOTE City of Federal Way Emergency Housing Repair Program (EHRP) LENDER: City of Federal Way, a Municipal corporation 33325 8�'Avenue South Federal Way, WA 98003 BORROWER: EHRP RESIDENCE: PRINCIPAL AMOUNT: DUE DATE: Nora Johnson 33418 22"d Avenue Southwest Federal Way, WA 98023 $4,872.75 December 21, 2017 $5,000.00 or such lesser sum as sha11 REPAIIt PROGRAM Page 1 CITY OF ��,,..., Federal CITY HALL ��� 33325 8th Avenue South Federal Way, WA 98003-6325 (253)835-7000 �nvw cityoffedereAvay com shall debit to Borrower's Loan Account the amount of each advance, and credit the amount of each repayrnent made by Borrower or "Forgiveness of Debt" granted by Lender. 6. Forgiveness of Debt. Commencing upon the date on which the Lender issues the last advance of funds to or on behalf of Borrower under this Note (the "Repayment Commencement Date"), Borrower sha11 be entitled to receive, and Lender sha11 grant, forgiveness of Twenty Percent (20%) of the Principal Amount for each consecutive twelve-month period after the Repayrnent Commencement Date (a "Repayment Year") that Borrower actually occupies the Residence identified above as Borrower's principal place of residence. If Borrower resides in the Residence for five (5) Repayment Years, the entire Principal Amount sha11 be forgiven and this Note shall be satisfied in full. 
If Borrower fails for any reason to live in the Residence for five Yeaz shall be prorated for the number of days that Borrower actually occupied the Residence in that Repayrnent Year. 7. Default. Upon the occurrence of any of the following events ("Events of DefaulY'), Lender, at its option, and without notice to Borrower, may declare the entire unpaid Principal Amount to be immediately due and payable: a. The Borrower no longer occupies the Residence as Borrower's primary residence; b.. The Borrower defaults under the terms of this Note or the Deed of Trust granted in connection herewith; e. The Borrower is enjoined, restrained or in any way prevented by court order from continuing to reside in the Residence; f. Formal chazges CITY Of ,�.- Federal CITY HALL W�� 33325 8th Avenue South Federai Way, WA 98003�325 (253)835-7000 wtivw crtyoflederahvey sha11 not constitute a waiver of Lender's right to receive the entire amount due. 14. Consent. Borrower hereby jointly and severally (i) waives presentment for payment, demand, notice of non-payment, notice of protest or protest of this Note, (ii) waives Lender's diligence in collection or bringing suit, and (iii) waives consent to any and a11,�,,.- Federal CITY HALL W�� 33325 8th Avenue South Federai Way, WA 98003-6325 (253)835-�000 wt�vtv crtyo/federahvey com ORAL AGREEMENTS OR ORAL COMMITMENTS TO LOAN MONEY, EXTEND CREDIT, OR TO FORBEAR FROM ENFORCING REPAYMENT OF A DEBT ARE NOT ENFORCEABLE UNDER WASHINGTON LAW. Agreed to and accepted by: LENDER: CITY OF FEDERAL WAY � Skip Priest, Mayor AP OVED A O FORM: 'FQK City Att y, Patricia A Richardson �nn� n�x��n . Y17riieU N arile ATTEST: City Clerk, Carol McN illy, CM STATE OF WASHINGTON ) ) ss. COUNTY OF ) On this day personally appeared before me, ��G" , �C day of � 20� � �`�°� "'c "���k ��t��. ���... ��'/�1.�G �„r�a ,tA.�b ��: h�pq�.� WA��..�N� 1 Notary's signature Notary's printed name � `' Notary Public in and for the State of Was gton. My commission expires �- -- � q, —�. �p PROMISSORY NOTE 10/2012 EMERGENCY HOUSING REPAIR PROGRAM Page 4 | https://docs.cityoffederalway.com/WebLink/DocView.aspx?id=500666&dbid=0&repo=CityofFederalWay | 2021-07-23T19:36:49 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cityoffederalway.com |
This page provides answers to frequently asked questions and tips for troubleshooting your Teamwork Cloud (TWCloud) system.
Warning
When configuring cassandra.yaml, take note of the following:
- There is a space before each IP address for parameters "listen_address" and "broadcast_rpc_address". The space is required for Cassandra to start.
- When entering the parameters to configure cassandra.yaml, be sure that there is no '#' (pound sign) or 'space' before the parameter name. If there is a #, for example, #broadcast_rpc_address: 10.1.1.123, this value will become a comment. If there is a space before the parameter name, for example, <space>#broadcast_rpc_address: 10.1.1.123, you will get an error after starting Cassandra.
Frequently Asked Questions
I cannot login with the default Administrator user via TWCloud Admin UI. What could be the cause of the problem?
There are several causes of Administratior's login failure such as:
- Cassandra is not started so TWCloud cannot connect to it.
- Cassandra is started but TWCloud cannot connect to it due to Cassandra and/or network configuration problem.
To find out what the cause of the problem is, you need to look at the log file and check the status of Cassandra. See Troubleshooting section for more information.
Where can I find the TWCloud log file?
The default log file location is <user home folder>/.twcloud/<version number>.
- The log file of TWCloud is called server.log.
How do I check the Cassandra status?
Cassandra provides a nodetool utility, which is a command line interface for managing a cluster. You can use it for checking the cluster status.
- On Linux, type:
$ nodetool status
- On Windows, type:
C:\> "%CASSANDRA_HOME%"\bin\nodetool status
The following is an example output from running the nodetool status.
Starting NodeToolDatacenter: datacenter1========================Status=Up/Down|/ State=Normal/Leaving/Joining/Moving-- Address Load Tokens Owns Host ID RackUN 10.1.1.123 16.71 MB 256 ? 32f17f36-baec-409a-a389-f002d1dd0f9b rack1
UN means that the status is up and normal.
How do I check if TWCloud has a problem connecting to Cassandra?
You can check it from the log file of TWCloud. If TWCloud cannot connect to Cassandra during the TWCloud startup process , a log message similar to the following will appear and the server will shut itself down.
INFO 2016-01-04 11:12:32.131 Connecting to Cassandra [CoreManagerComponent, Component Resolve Thread]ERROR 2016-01-04 11:12:50.351 Connection to Cassandra failed due to timeout [CoreManagerComponent, Component Resolve Thread]
You may also find yourself in a situation where TWCloud has been started successfully but the problem connecting to Cassandra occurs later. If this happens, a log message similar to the following will appear.
WARN 2016-01-04 13:57:26.034 There was a problem with the connection between the TWCLOUD server and Cassandra
[AbstractActor, twcloud-akka.actor.default-dispatcher-24]
What should I see in the TWCloud log file, which indicates that TWCloud has been successfully started?
If TWCloud has been successfully started, you will see a log message similar to the following in the log file.
========== Console Server Configurations ==========Console Server: Server: Server: [100.64.17.152:3579]Selector: RoundRobinServerSelectorSecured Connection: false===================================================
INFO 2016-01-04 14:06:15.971 TWCLOUD Cluster with 1 node(s) : [10.1.1.123] [LoginActor, twcloud-akka.actor.default-dispatcher-15]
How can I use the netstat command to see if the ports are bound?
- On Linux, type
netstat –an | grep :8111 and netstat –an | grep :8555
- On Windows, type
Netstat –ano | find “:8111” and Netstat –ano | find “:8555”
I cannot see the project history or branch or perform any operations related to the project history from MagicDraw. What could be the cause of the problem and how to solve it?
You need to check if it is the known issue by looking at the server.log file. If you see the following ERROR message, it is the known issue.
com.esotericsoftware.kryo.KryoException: Buffer overflow
You can solve this issue by adding the following code line at the end of the configuration file, <TWC install folder>/configuration/application.conf.
esi.serializer.max-buffersize = -1
After updating the configuration, you will need to restart TeamworkCloud.
Troubleshooting
The TWCloud installer is not started on Windows
If you run the installer file and the installer UI does not show up or receive the message: "Windows error 2", try the following workaround.
- Start the command prompt by selecting "Run as administrator".
- Start the installer from the command line and use the option LAX_VM followed by the location of java.exe. For example:
C:\> twcloud_190_beta_installer_win64.exe LAX_VM "C:\Program Files\java\jdk1.8.0_152\bin\java.exe"
I cannot uninstall TWCloud using uninstall.exe
If you run uninstall.exe and the uninstall UI does not show up or receive the message "Windows errors", try the following workaround.
- Start the command prompt by selecting "Run as administrator".
- Start uninstall.exe from the command line and using the option LAX_VM followed by the location of java.exe. For example:
C:\> uninstall.exe LAX_VM "C:\Program Files\java\jdk1.8.0_152\bin\java.exe
TWCloud cannot connect to Cassandra
If the TWCloud server log shows the following message:
INFO 2016-01-04 11:12:32.131 Connecting to Cassandra [CoreManagerComponent, Component Resolve Thread]ERROR 2016-01-04 11:12:50.351 Connection to Cassandra failed due to timeout [CoreManagerComponent, Component Resolve Thread]
It means that TWCloud cannot connect to Cassandra and this usually caused by the following reasons:
- Cassandra is not started.
- Cassandra is started but it is not properly configured for TWCloud . You need to make sure that you follow the Cassandra configuration instruction and that you restart Cassandra after updating the configuration. You can find the configuration instruction from the following links:
- The Cassandra configuration instruction for Linux:
- The Cassandra configuration instruction for Windows:
TWCloud fails to start, and a message about AssociationError appears in the log file
If the TWCloud server fails to start and an error message similar to the following appears in the log file:
ERROR 2016-01-04 13:12:58.104 AssociationError [akka during TWCloud installation, however, your hostname is resolved to loopback IP 127.0.0.1.
You can configure how your machine resolves the hostname from the hosts file. The location of the hosts file:
- On Linux: /etc/hosts
- On Windows: %SystemRoot%\system32\drivers\etc\hosts
You can check what IP address is resolved from the hostname by using the following steps:
- On Linux:
Execute the following command
$ resolveip -s $(hostname)
- On Windows
Step 1: Find hostname of the machine.
C:\> hostname
Step 2: Use the ping command followed by the hostname you got from Step 1. For example, if your hostname is my-machine, use the command.
c:\> ping my-machine
TWCloud Admin UI does not refresh information when opened with Internet Explorer
Every time you edit and save information on the TWCloud Admin, the updates will appear in the TWCloud Admin UI. If you are using the IE browser and your UI does not reflect what you have just updated, you may need to configure the internet settings.
To make sure that your TWCloud Admin UI refreshes new updates every time you edit and save information
- On the IE browser, click> Internet options to open the Internet Options dialog.
- On the General tab, click. The Website Data Settings dialog will open.
- Select the option Every time I visit the webpage to update website data.
- Click> .
Compatibility issue when logging into TWCloud Admin using Internet Explorer 11
You may experience some incompatibility issue while logging into TWCloud Admin using Internet Explorer 11, which might cause error in display.
To fix the problem
- On the Internet Explorer browser, click > Compatibility View Settings.
- Clear the option Display intranet sites in Compatibility View.
- Click .
Related pages: | https://docs.nomagic.com/display/TWCloud190SP4/FAQs+and+troubleshooting | 2021-07-23T19:05:28 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.nomagic.com |
OnClientAdded
The OnClientAdded client-side event occurs when a new row has just been added to the RadAsyncUpload control.
This event occurs for the initial rows that are added when the RadAsyncUpload control is first loaded, as well as any rows automatically added after a file has been uploaded or after invocation of the client-side addFileInput() method.
The event handler receives two parameters:
The instance of the RadAsycUpload control firing the event.
An eventArgs parameter containing the following methods:
get_row returns the row that was just added.
get_rowIndex returns the index of the row
Use the OnClientAdded event to perform any last minute changes to the rows in the RadAsyncUpload control.
<telerik:RadAsyncUpload</telerik:RadAsyncUpload>
function OnClientAdded(sender, args) { var rowIndex = args.get_index(); alert(rowIndex); } | https://docs.telerik.com/devtools/aspnet-ajax/controls/asyncupload/client-side-programming/onclientadded | 2021-07-23T20:37:47 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.telerik.com |
Choosing, the choices are limited by items supported by both sides.
Note
Interoperability in this sense isn’t applicable with VPN types not listed here since they are not intended for site-to-site applications.
IPsec¶
IPsec is usually the best choice since it is included with nearly every VPN-capable device. It also prevents being locked into any particular firewall or VPN solution. For interoperable site-to-site connectivity, IPsec is usually the only choice.
WireGuard¶
WireGuard, like OpenVPN, is compatible with only a few other packaged firewall/VPN solutions. However, as it is a more recently developed protocol, support is even more rare..
Authentication considerations¶
All VPN types on the firewall support user authentication except for WireGuard..
OpenVPN¶
OpenVPN requires the use of certificates for remote access in most environments, which comes with its own learning curve and can be a bit arduous to manage. There is a wizard to handle the most common OpenVPN remote access configurations and the OpenVPN client export packages eases the process of getting the clients up and running.
Multi-WAN capable¶
If users require the ability to connect to multiple WANs, IPsec, OpenVPN, and WireGuard or WireGuard, as is the case with many Linux distributions. If using built-in clients is a must, consult the operating system documentation for all required client platforms to see if a common option is available and then check and are covered in Mobile IPsec.
The Cisco-style IPsec client included with OS X and iOS devices is fully compatible with IPsec using xauth. Configuration for the iOS client is covered in Configuring IPsec IKEv2 Remote Access VPN Clients on iOS.
Many Android phones also include a compatible IPsec client, which is discussed in Configuring IPsec IKEv2 Remote Access VPN Clients on Android.
OpenVPN¶
OpenVPN has clients available for Windows, Mac OS X, all the BSDs, Linux, Solaris, and Windows Mobile, but the client does not come pre-installed in any of these operating systems.
Android devices can use a freely available OpenVPN client that works well and doesn’t require rooting the device. That client is covered in Installing the OpenVPN Client on Android. There are other options available if the device is rooted, but that is beyond the scope of this documentation.
iOS also has a native OpenVPN client. For more information, see Installing the OpenVPN Client on iOS. very firewall-friendly. Since it uses a single UDP or TCP port and is not affected by¶
Tunnels using pre-shared keys can be broken if a weak key is used. Use a strong key, at least 10 characters in length containing a mix of upper and lowercase letters, numbers and symbols. Use of certificates is preferred, though somewhat more complicated to implement.
OpenVPN¶
Encryption is compromised if the PKI or shared keys are disclosed, though the use of multiple factors such as TLS authentication on top of PKI can mitigate some of the danger.
Support for NAT inside tunnels¶
While any use of NAT is undesirable, there are some occasions which can benefit from its use inside tunnels. Primarily, it can be useful for working around subnet conflicts or for setting up “outbound” style NAT when the remote endpoint only expects a single address (e.g. a VPN provider with no LAN-to-LAN routing)
IPsec¶
Support for NAT with IPsec depends on the mode, either tunnel or VTI.
Tunnel¶
Phase 2 entries in Tunnel mode support BINAT (1:1) and Overload/PAT style NAT. See NAT with IPsec Phase 2 Networks for details.
OpenVPN¶
OpenVPN supports inbound (e.g. port forwards) and outbound NAT using the group OpenVPN tab and also on assigned interfaces. Depending on the environment and configuration there may be some special considerations, such as ensuring proper return routing for post-NAT subnets.
WireGuard¶
WireGuard supports inbound (e.g. port forwards) and outbound NAT using the group WireGuard tab and also on assigned interfaces. Some cases may require using single peer tunnels or carefully crafted Allowed IPs lists to ensure correct return routing. See Design Considerations and WireGuard and Rules / NAT.
Per-tunnel Firewall Rules¶
Each VPN type has a common group tab for rules, and some also support rules for individual tunnels.
Warning
Rules on group tabs are considered before per-interface rules. For per-interface rules to match, rules on the group tab must not match the same packets.
IPsec¶
IPsec does not currently support per-tunnel rules, its traffic can only be filtered by rules on the IPsec tab.
Even though phase 2 entries in VTI mode have an interface which can be assigned, they do not currently support interface rules. See #8686 for details.
OpenVPN¶
When assigned as an interface, OpenVPN instances fully support per-tunnel rules. See Assigning OpenVPN Interfaces.
WireGuard¶
When assigned as an interface, WireGuard instances fully support per-tunnel rules. See Assign a WireGuard Interface and WireGuard and Rules / NAT.
Automatic Mobile Client Configuration¶
Depending on the deployment, mobile (Remote Access) clients can receive automatic configuration in certain cases.
IPsec¶
In IKEv2 mode, clients can automatically receive an IP address allocated from a pool, along with DNS configuration.
In IKEv1 mode with Xauth, in addition to the above, clients can also receive a list of networks to route across the VPN.
OpenVPN¶
OpenVPN clients can automatically receive an IP address allocated from a pool, and numerous additional options can be pushed to clients to control their behavior from the server side (routing, DNS, and many others).
WireGuard¶
WireGuard mobile clients must be configured statically. On the server side, a client tunnel address must be setup in the Allowed IPs for a peer. The same address must be configured on the client. This varies by OS/Platform, some read it from the configuration and other require it to be configured on interfaces via CLI commands. Networks to route must likewise be manually added on the client configuration Allowed IPs list and, depending on the client, may also need to be added to its operating system routing table.
Routing Support¶
IPsec¶
IPsec in Tunnel mode uses policies, not routes, and thus does not respect the operating system routing table.
IPsec in VTI mode supports static and dynamic routing (e.g. BGP, OSPF) and works with the operating system routing table.
OpenVPN¶
In SSL/TLS tun mode with multiple clients, OpenVPN uses its internal routing on client-specific configurations to determine which clients receive traffic for specific subnets. In this type of configuration, dynamic routing is not possible.
In SSL/TLS tun mode with a /30 subnet (one client per server) or in Shared Key mode, dynamic routing is possible using OSPF or BGP. This is due to the fact that in these cases, OpenVPN does not need to track internal routing and can rely on the operating system routing table alone.
In tap mode, dynamic routing is possible as packets can be handed off using L2/ARP information rather than relying on internal routing in OpenVPN.
WireGuard¶
WireGuard routes specific subnets to peers based on the Allowed IPs list, but also requires operating system routing table entries for traffic to enter a WireGuard tunnel.
When a WireGuard tunnel has more than one peer, the Allowed IPs list lets WireGuard determine internally which clients receive traffic for specific subnets. Due to this internal routing, dynamic routing is not possible in a configuration where WireGuard has multiple peers per tunnel.
For a WireGuard tunnel with a single peer, WireGuard can forward arbitrary networks to the peer without having them all listed in Allowed IPs. Thus, in this situation, it can take advantage of dynamic routing using BGP. OSPF is also possible but requires additional configuration.
See WireGuard Routing for more information.
Recap¶
Table Features and Characteristics by VPN Type shows an overview of the considerations provided in this section. | https://docs.netgate.com/pfsense/en/latest/vpn/selection.html | 2021-07-23T19:36:52 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.netgate.com |
Automated test suite / nightly builds¶
The ZTK builds on the automated test suites (unit and functional tests) from the individual projects it keeps track of. We use automated build systems, like travis-ci, to run various combinations of differing Python versions, operating systems and packages and ensure everything works as expected.
The automated test suite¶
The ZTK’s automated test suite builds on the individual packages’ unit and functional tests and creates a combined test runner that runs each packages’ test suite in isolation but ensures that the dependencies are satisfied using the ZTK versions under test.
The combined test runner is created using z3c.recipe.compattest – check the documentation for details.
If you take a ZTK checkout, you can run the tests yourself like this:
$ git clone [email protected]:zopefoundation/zopetoolkit.git $ python bootstrap.py $ bin/buildout $ bin/test-ztk
If you work on a ZTK package and want to ensure that your changes are compatible with all other ZTK libraries, you can use a checkout of the individual package inside the zopetoolkit checkout:
$ bin/develop co zope.component $ bin/develop rb $ bin/test-ztk
The develop commands get a git checkout of the specified package and puts it into the develop/ folder, so you make your changes there. | https://zopetoolkit.readthedocs.io/en/latest/process/buildbots.html | 2021-07-23T19:08:43 | CC-MAIN-2021-31 | 1627046150000.59 | [] | zopetoolkit.readthedocs.io |
Cameo Systems Modeler 18.5 Documentation
Released on: August 18, 2017
Key Issues Fixed in this Service Pack
Modeling related issues
- Fixed the issue in the Activity Diagram where several Output Pins to call a Behavior are created instead of one.
- You can select properties of a custom type element to display as columns of the Generic table.
- Users are warned about the limitation of reversing the flow direction if the flow source and target are the same elements.
- In tables, you can simply select a cell when pasting a group of words, instead of pasting a cursor in it.
- The size of Requirement shapes no longer changes to default if you align the Requirement diagram in the Grid Layout type and set the Make Preferred Size option to false.
- Eliminated he ability to add Entry/Exit Point to State or Transition. To meet UML specification, you can add an Entry/Exit Point to the Composite or Orthogonal State.
- Fixed the issue related to losing the Sync Element property value after converting an Input Pin to a Value Pin in the Activity Diagram.
- In Object Diagram, you can assign several classifier types to one Instance Specification.
DSL related issues
- Evaluating expressions in JRuby language: a parameter THIS is always available.
- Derived properties defined as an Integer type: the Long type value returned by the JRuby script of the derived property expression is converted to Integer.
- Automatic numbering of custom elements: the continuity of unique IDs for every new instance of the custom element is correct in all numbering schemes.
Collaboration related issues
- Enhanced performance of updating Used Projects by optimizing diagram-related operations.
- Connectors can no longer be locked without their classifiers while mapping port types. This prevents automatically locking irrelevant connectors.
- Fixed displaying dates of the Norwegian, Bokmal (Norway) format in server project dialogs.
- Updated the locking mechanism to prevent lock-stealing related issues after saving a model containing locked elements as an offline project.
- Fixed the issue related to opening a server project after it was saved and edited offline and thus treating it as outdated.
- Added the progress bar to show indexing progress when importing a server project with a deep hierarchical structure to the DataHub.
- Fixed issue with potentially failing commits after changing package permissions.
Model validation issues
- Updated the No Type for Parameter UML completeness validation rule.
- Fixed the spell check issue for validating tagged values.
- Enhanced validation rules to detect issues related to invalid item flow direction.
- The validation doesn't fail for the Connector between properties typed by Component or Class because nested property path is now calculated for them.
Other issues
- Full element names (without ellipsis) are visible in the column names while printing matrices in PDF. The issue appeared depending on the screen resolution. Now it is solved.
- You can generate an html report from thousands of files with the Web Publisher 2.0 template from the command line.
- Z ordering of dialog windows issues are fixed in the native full screen mode in Mac OS X.
- The SysMLProfile class in OpenAPI now matches the SysMLprofile.mdzip URI.
You can check the list of publicly available issues or your own reported issues fixed in Cameo Systems Modeler 18.5 SP2.
Note: You will be required to login. Use the same username and password as for.
Fixes in servers and plugins
- Teamwork Cloud 18.5 SP2
- Teamwork Server 18.5 SP2
- Cameo Simulation Toolkit 18.5 SP2
- Methodology Wizard 18.5 SP2
Plugins updated due to compatibility purposes
CSM Documentation
News of Earlier Versions
Other Resources
Overview
Content Tools
Apps | https://docs.nomagic.com/display/CSM185/What+is+New+in+Cameo+Systems+Modeler+18.5+SP2 | 2021-07-23T19:06:17 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.nomagic.com |
Raw Velodyne data block.
Each block contains data from either the upper or lower laser bank. The device returns three times as many upper bank blocks.
use stdint.h types, so things work with both 64 and 32-bit machines
Raw Velodyne packet.
revolution is described in the device manual as incrementing (mod 65536) for each physical turn of the device. Our device seems to alternate between two different values every third packet. One value increases, the other decreases.
status has either a temperature encoding or the microcode level
(DISTANCE_MAX / DISTANCE_RESOLUTION + 1.0)
Definition at line 56 of file rawdata.h. | https://docs.ros.org/en/hydro/api/velodyne_pointcloud/html/namespacevelodyne__rawdata.html | 2021-07-23T18:52:09 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.ros.org |
Introduction to the rpm.versions System
Last modified: June 21, 2021
Overview may refer to any of the following items:
- The packaged
.rpmfile (for example,
MySQL56-client-5.6.14-1.cp1142.i386.rpm).
- The software that the package contains (for example, MySQL® version 5.6).
- The package manager itself.
For more information about RPMs, visit the RPM website.
What is an SRPM?
Source RPMs (SRPMs) contain the source code for each RPM on your system. Unlike RPMs, SRPMs are not compiled. For more information, read our The rpm.versions File documentation..
For more information, read our The rpm.versions File, RPM Targets, and How to Override the rpm.versions System documentation.
/usr/local/cpanel/etc/rpm.versionsfile for any reason.. | https://docs.cpanel.net/knowledge-base/rpm-versions/introduction-to-the-rpm-versions-system/ | 2021-07-23T19:57:31 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cpanel.net |
Adding Background Videos
You can easily add a responsive background video to a section/container of your page. It will auto scale to fill the available space and looks nice on any device.
First, select the container you want to add background video to:
Then click the Make Background Video button:
Click the little folder icon to browse to your video and select it:
You can see the background video added to our page header. The video will auto play and loop:
You can preview it directly in Wappler design view: | https://docs.wappler.io/t/adding-background-videos/10209 | 2021-07-23T19:07:32 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wappler.io |
mGet #
Gets multiple documents.
Arguments #
Future<Map<String, dynamic>> mGet( String index, String collection, List<String> ids, )
Return #
A
Map<String, dynamic> which has a
successes and
errors:
Each created document is an object of the
successes array with the following properties:
The
errors array contains the IDs of not found documents.
Usage #
final result = await kuzzle .document .mGet('nyc-open-data', 'yellow-taxi', [ 'some-id', 'some-id2' ]); | https://docs-v2.kuzzle.io/sdk/dart/2/controllers/document/m-get/ | 2021-07-23T18:13:24 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs-v2.kuzzle.io |
Navigating Nurses’ Pages in PowerSchool
The following is quick reference guide of where all of the nurses tools can be found.
Functions Specific to a Single Student:
- Select the student.
- Click Health from the menu on the left under the Information Heading.
- Click on one of the following tabs in the center of the page:
- Immunizations: Record and update dates each dose of a vaccine was given.
- Office Visits: Record, update, or delete office visit records.
- Screenings: Record, update, or delete health & vision screening records.
Reports & Functions for Multiple Students:
- From the PowerSchool start page, click Extended Reports under the Functions heading on the left.
- Click on the Health tab.
- The Health reports are broken down under the following headings:
- Immunizations:
- Locate students that are non-compliant for vaccines
- Print PDF versions of students’ immunization histories.
- Office Visits:
- Run graphs of office visit types and dispositions
- Search through all student office visit records
- Locate today’s office visit records that are missing a time-out or disposition.
- Determine the amount of time being spent in office visits for different types of health conditions.
- Screenings:
- Search students’ screening status. | https://docs.glenbard.org/index.php/ps-2/admin-ps/health/navigating-nurses-pages-in-powerschool/ | 2021-07-23T18:12:56 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.glenbard.org |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Update-WAFXssMatchSet-ChangeToken <String>-Update <XssMatchSetUpdate[]>-XssMatchSetId <String>-Force <SwitchParameter>
XssMatchTupleobject, you specify the following values:
Action: Whether to insert the object into or delete the object from the array. To change a
Xss cross-site scripting attacks.
XssMatchSetobjects to specify which CloudFront requests you want to allow, block, or count. For example, if you're receiving requests that contain cross-site scripting attacks in the request body and you want to block the requests, you can create an
XssMatchSetwith the applicable settings, and then configure AWS WAF to block the requests. To create and configure an
XssMatchSet, perform the following steps:
ChangeTokenparameter of an UpdateIPSet request.
UpdateXssMatchSetrequest to specify the parts of web requests that you want AWS WAF to inspect for cross-site scripting attacks.
XssMatchSetUpdateobjects that you want to insert into or delete from a XssMatchSet. For more information, see the applicable data types:
Actionand
XssMatchTuple
FieldToMatchand
TextTransformation
Dataand
Type
XssMatchSetIdof the
XssMatchSetthat you want to update.
XssMatchSetIdis returned by CreateXssMatchSet and by ListXssMatchSets. | https://docs.aws.amazon.com/powershell/latest/reference/items/Update-WAFXssMatchSet.html | 2018-08-14T16:30:29 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.aws.amazon.com |
The first annual eCrime Researchers Sync-Up, organised in conjunction with University College Dublin's Centre for Cybercrime Investigation, on March 15th and 16th, 2011 is a two-day exchange of presentations and discussions related to eCrime research in progress - and for networking of researchers within the disciplines that are defining the eCrime research field today.
The eCrime Researchers Sync-Up has been established as an extension of the annual APWG eCrime Research Summit held every year in the fall in the United States, an event that and government to discuss their research, find axes of common interest and establish Foy Shiver for details via email at foy [at] apwg.org. January 31, 2011.
The APWG eCR Sync-Up conferences investigate the modern phenomena of electronic crime from the point of view of the responder who must
undertake forensic exercises to investigate the crimes and the
managers who must protect consumers and enterprise computer and
network users every day. eCR Sync-Up presenters discuss all aspects of
electronic crime and ways to combat it.
Topics of interest include (but are not limited to):
We have tried to keep registration for this event as low as possiable to help cover cost. The rates listed below offer a discounted "early bird" registration rate prior to Feruary 21st.
General Registration
APWG Members
$ 50.00
$ 75.00
Students & University Faculty
Law Enforcement & Government Employees
All Others
$ 100.00
Venue:
Our first Stillorgan Park Hotel for the special discounted rate of 89 Euro per night single and 99 Euro per night double occupancy. These rates are good for the week prior to the event until March 17. To access this rate please call the hotel directly and provide the event code "ECRS 92655".
Stillorgan Park Hotel
Stillorgan Road,
Dublin 18
Ireland
Tel: +353 (0)1 200 1800
Travel from Dublin Airport:
Travelling from Dublin Airport to the hotel the Aircoach bus service leaves the airport every 15 minutes and stops directly outside the hotel. This service costs €8.00 for a single journey and €14.00 for a return journey. The Aircoach departs from outside the arrivals hall in Dublin Airport (turn left on exiting the terminal). Alternatively you can get a taxi from Dublin Airport to the hotel. Taxis cost €35 - €50 depending on time of day. To get a taxi turn right on exiting the arrivals hall. Journey times will take 30 – 60 minutes depending on time of day.
Travel to UCD (Conference Venue):
If coming direct from the airport, the Aircoach stops outside the gates of UCD. For those travelling from the hotel, there will be a shuttle bus available at 8:30am each morning to take delegates to the venue. This shuttle bus will drop people back to the hotel at 5:30pm each day.. | http://docs.apwg.org/ecrimeresearch/2011syncup/cfp.html | 2018-08-14T16:03:13 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.apwg.org |
Basic Push Notifications Sample App for PhoneGap/Cordova
Overview
This repository contains a basic sample app that can receive push notifications sent from its Telerik Platform backend. It is a hybrid app built using Telerik AppBuilder and Cordova.
The sample app utilizes the following Telerik products and SDKs:
- Telerik Backend Services JavaScript SDK—to connect the app to Telerik Platform
- Settings tab.
- Take note of your App ID.
- Go to the Code tab.
- Open the
.
- Finally, got to the Notifications > Push Notifications tab and set up push notifications as explained in Enabling.
Push notifications are not supported when running the app on device simulators/emulators.
Ensure that the device that you are using has Internet connectivity when running the sample. | http://docs.telerik.com/platform/samples/backend-services-push-hybrid/ | 2018-08-14T16:12:17 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.telerik.com |
Manage Parse Server
It is easier than ever to keep up to date with the latest Parse Server version and so ensure full compatibility between app and server version. All it takes is a single click on the Change Version option to upgrade or downgrade server versions.
In the last case showed above,you see that the last version was made effective,so if you want to downgrade to the version 2.2.14, you need to click on the circle
on the other hand,if you want to upgrade to the last version,you need to click on the circle
If you chose the version that you think is better, click on the green box
and click on OK
| https://docs.back4app.com/docs/manage-parse-server/ | 2018-08-14T15:14:44 | CC-MAIN-2018-34 | 1534221209165.16 | [array(['https://blog.back4app.com/wp-content/uploads/2016/11/545x629xparse_server_version.png.pagespeed.ic.AyLI6fkTpM.png',
'parse_server_version'], dtype=object)
array(['https://docs.back4app.com/wp-content/uploads/2017/01/Captura-de-Tela-2017-01-10-às-18.00.00.png',
'captura-de-tela-2017-01-10-as-18-00-00'], dtype=object)
array(['https://docs.back4app.com/wp-content/uploads/2017/01/Captura-de-Tela-2017-01-10-às-18.03.08.png',
'captura-de-tela-2017-01-10-as-18-03-08'], dtype=object)
array(['https://docs.back4app.com/wp-content/uploads/2017/01/Captura-de-Tela-2017-01-10-às-17.06.48.png',
'captura-de-tela-2017-01-10-as-17-06-48'], dtype=object)
array(['https://docs.back4app.com/wp-content/uploads/2017/01/Captura-de-Tela-2017-01-10-às-18.06.41-300x69.png',
'captura-de-tela-2017-01-10-as-18-06-41'], dtype=object) ] | docs.back4app.com |
What's New in DNS in Windows Server 2008
Applies To: Windows Server 2008.
Windows Server® 2008 provides a number of enhancements to the DNS Server service that improve how DNS performs. For details about these changes, see DNS Server Role.
Overview of the Improvements in DNS. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753143(v=ws.10) | 2018-02-17T23:56:24 | CC-MAIN-2018-09 | 1518891808539.63 | [] | docs.microsoft.com |
.
Installation
To use the Java agent:
- Make sure your system meets the Java agent's compatibility requirements.
- Install the Java agent by following the standard installation procedures. Depending on your tools and frameworks, refer to additional installation procedures to install or configure the Java agent.
- To view your app's performance in the New Relic UI: Go to rpm.newrelic.com/apm > (select an app) > Monitoring > Overview. Insights..
- Collect custom attributes: Collect custom attributes via API or XML file.
- Java agent API: Use the API to control, customize, or extend the functionality of the Java agent.
- Browser instrumentation: Integrate the Java agent with New Relic Browser.
Troubleshooting procedures
If you encounter problems with the Java agent, see the troubleshooting documentation. | https://docs.newrelic.com/docs/agents/java-agent/getting-started/introduction-new-relic-java | 2018-02-17T23:01:04 | CC-MAIN-2018-09 | 1518891808539.63 | [] | docs.newrelic.com |
CONCOCT - Clustering cONtigs with COverage and ComposiTion¶
In this excercise you will learn how to use a new software for automatic and unsupervised binning of metagenomic contigs, called CONCOCT. CONCOCT uses a statistical model called a Gaussian Mixture Model to cluster sequences based on their tetranucleotide frequencies and their average coverage over multiple samples.
The theory behind using the coverage pattern is that sequences having similar coverage pattern over multiple samples are likely to belong to the same species. Species having a similar abundance pattern in the samples can hopefully be separated by the tetranucleotide frequencies.
We will be working with an assembly made using only the reads from this single sample, but since CONCOCT is constructed to be ran using the coverage profile over multiple samples, we’ll be investigating how the performance is affected if we add several other samples. This is done by mapping the reads from the other samples to the contigs resulting from this single sample assembly.
Getting to know the test data set¶
Today we’ll be working on a metagenomic data set from the baltic sea. The sample we’ll be using is part of a time series study, where the same location have been sampled twice weekly during 2013. This specific sample was taken march 22.
Start by copying the contigs to your working directory:
mkdir -p ~/binning-workshop mkdir -p ~/binning-workshop/data cd ~/binning-workshop/ cp /proj/g2014113/nobackup/concoct-workshop/Contigs_gt1000.fa data/ cp /proj/g2014113/nobackup/concoct-workshop/120322_coverage_nolen.tsv data/
You should now have one fasta file containing all contigs, in this case only contigs longer than 1000 bases is included to save space, and one comma separated file containing the coverage profiles for each contig. Let’s have a look at the coverage profiles:
less -S ~/binning-workshop/data/120322_coverage_nolen.tsv
Try to find the column corresponding to March 22 and compare this column to the other ones. Can you draw any conclusions from this comparison?
We’d like to first run concoct using only one sample, so we remove all other columns in the coverage table to create this new coverage file:
cut -f1,3 ~/binning-workshop/data/120322_coverage_nolen.tsv > ~/binning-workshop/data/120322_coverage_one_sample.tsv
Running CONCOCT¶
CONCOCT takes a number of parameters that you got a glimpse of earlier, running:
concoct -h
The contigs will be input as the composition file and the coverage file obviously as the coverage file. The output path is given by as the -b (–basename) parameter, where it is important to include a trailing slash if we want to create an output directory containing all result files. Last but not least we will set the length threshold to 3000 to speed up the clustering (the less contigs we use, the shorter the runtime):
mkdir -p ~/binning-workshop/concoct_output concoct --coverage_file ~/binning-workshop/data/120322_coverage_one_sample.tsv --composition_file ~/binning-workshop/data/Contigs_gt1000.fa -l 3000 -b ~/binning-workshop/concoct_output/3000_one_sample/
This command will normally take a couple of minutes to finish. When it is done, check the output directory and try to figure out what the different files contain. Especially, have a look at the main output file:
less ~/binning-workshop/concoct_output/3000_one_sample/clustering_gt3000.csv
This file gives you the cluster id for each contig that was included in the clustering, in this case all contigs longer than 3000 bases.
For the comparison we will now run concoct again, using the coverage profile over all samples in the time series:
concoct --coverage_file ~/binning-workshop/data/120322_coverage_nolen.tsv --composition_file ~/binning-workshop/data/Contigs_gt1000.fa -l 3000 -b ~/binning-workshop/concoct_output/3000_all_samples/
Have a look at the output from this clustering as well, do you notice anything different?
Evaluating Clustering Results¶
One way of evaluating the resulting clusters are to look at the distribution of so called Single Copy Genes (SCG:s), genes that are present in all bacteria and archea in only one copy. With this background, a complete and correct bin should have exactly one copy of each gene present, while missing genes indicate an inclomplete bin and several copies of the same gene indicate a chimeric cluster. To predict genes in prokaryotes, we use Prodigal that we then use as the query sequences for an RPS-BLAST search against the Clusters of Orthologous Groups (COG) database. This RPS-BLAST search takes about an hour and a half for our dataset so we’re going to use a precomputed result file. Copy this result file along with two files necessary for the COG counting scripts:
cp /proj/g2014113/nobackup/concoct-workshop/Contigs_gt1000_blast.out ~/binning-workshop/data/
cp /proj/g2014113/nobackup/concoct-workshop/scg_cogs_min0.97_max1.03_unique_genera.txt ~/binning-workshop/data/
cp /proj/g2014113/nobackup/concoct-workshop/cdd_to_cog.tsv ~/binning-workshop/data/
Before moving on, we need to install some R packages, please run these commands line by line:
R
install.packages("ggplot2")
install.packages("reshape")
install.packages("getopt")
q()
With the CONCOCT distribution come scripts for parsing this output and creating a plot where each COG present in the data is grouped according to the clustering results, namely COG_table.py and COGPlot.R. These scripts are added to the virtual environment; try checking out their usage:
COG_table.py -h
COGPlot.R -h
Let’s first create a plot for the single sample run:
COG_table.py -b ~/binning-workshop/data/Contigs_gt1000_blast.out -c ~/binning-workshop/concoct_output/3000_one_sample/clustering_gt3000.csv -m ~/binning-workshop/data/scg_cogs_min0.97_max1.03_unique_genera.txt --cdd_cog_file ~/binning-workshop/data/cdd_to_cog.tsv > ~/binning-workshop/cog_table_3000_single_sample.tsv
COGPlot.R -s ~/binning-workshop/cog_table_3000_single_sample.tsv -o ~/binning-workshop/cog_plot_3000_single_sample.pdf
This command might not work for some R-related reason. If you have spent more time trying to get it to work than you would like, just copy the results from the workshop directory:
cp /proj/g2014113/nobackup/concoct-workshop/cogplots/* ~/binning-workshop/
This command should have created a pdf file with your plot. In order to look at it, you can download it to your personal computer with scp. OBS! You need to run this in a separate terminal window where you are not logged in to Uppmax:
scp [email protected]:~/binning-workshop/cog_plot_3000_single_sample.pdf ~/Desktop/
Have a look at the plot and try to figure out if the clustering was successful or not. Which clusters are good? Which clusters are bad? Are all clusters present in the plot? Now, let’s do the same thing for the multiple samples run:
COG_table.py -b ~/binning-workshop/data/Contigs_gt1000_blast.out -c ~/binning-workshop/concoct_output/3000_all_samples/clustering_gt3000.csv -m ~/binning-workshop/data/scg_cogs_min0.97_max1.03_unique_genera.txt --cdd_cog_file ~/binning-workshop/data/cdd_to_cog.tsv > ~/binning-workshop/cog_table_3000_all_samples.tsv
COGPlot.R -s ~/binning-workshop/cog_table_3000_all_samples.tsv -o ~/binning-workshop/cog_plot_3000_all_samples.pdf
And download again from your separate terminal window:
scp [email protected]:~/binning-workshop/cog_plot_3000_all_samples.pdf ~/Desktop
What differences can you observe for these plots? Think about how we were able to use samples not included in the assembly in order to create a different clustering result. Can this be done with any samples? | http://2014-5-metagenomics-workshop.readthedocs.io/en/latest/binning/concoct.html | 2018-02-17T23:18:49 | CC-MAIN-2018-09 | 1518891808539.63 | [] | 2014-5-metagenomics-workshop.readthedocs.io |
User Guide TTY support
This page represents the current plan; for discussion please check the tracker link above.
Description
Currently, GeoTools is tuned for use by a very small number of concurrent users. When deploying this subsystem in a highly threaded environment with thousands of users some shortcomings are revealed:
- Allowing Multiple Users - currently multiple threads are only supported by virtue of being serviced by a cache hit, only one thread is allowed to work on a cache hit at a time.
- Cache Handling - we need to ensure the two caching techniques used (pool and findPool) are able to function in the face of multiple threads (both reading and writing).
- Connection Issues - this is a related issue only of interest to CRSAuthority implementations making use of an EPSG Database; we need to ensure that supporting multiple threads does not result in Connections being leaked from the java.sql.DataSource provided.
Status
This proposal has been accepted and implemented, and is awaiting the attention of the Module Maintainer for referencing. We are currently running two implementations of the various base classes.
Resources
Tasks
2.4-M4:
Jody Garnett Rename CRSAuthorityFactory classes for clarity
Cory Horner Isolate ObjectCache
Martin Desruisseaux Review ObjectCache API
Jody Garnett Review and document ObjectCache
Cory Horner Create DefaultObject Cache and stress test
Jody Garnett Break BufferedAuthorityFactory into three parts; one for each responsibility
2.4-RC0:
Jody Garnett Create WeakObject and FixedObjectCache for stress testing
Cory Horner AbstractCachedAuthorityFactory - inject ObjectCache into worker
Cory Horner AbstractCachedAuthorityMediator - provide Hints to control Object Pool
Jody Garnett Set up OracleDialectEpsgFactory lifecycle for Connection use
Jody Garnett Using HsqlDialectEpsgMediator - confirm ObjectPool can manage a single worker
Jody Garnett Finish renaming the EpsgFactory classes for clarity
Martin Desruisseaux Review and update Naming in response to Implementation
Cory Horner Provided a test to Break ObjectCache memory limit
Cory Horner Use WeakReferences to keep within memory limit
2.4-RC1:
Martin Desruisseaux Review HSQL implementation and deprecate old approach
Jody Garnett Complete documentation
Jody Garnett Stress test epsg-oracle module
Jody Garnett Bring oracle-epsg up to supported status
API Changes
Public API change
This results in no change to client code - all client code should be making use of one of the AuthorityFactory sub-interfaces or the CRS facade.
Internal API change
So what is this change about then? This change request is for the internals of the referencing module, and the relationship between the super classes defined therein and the plug in modules providing implementations.
Requests:
- Stop calling everything Factory
- Try and communicate the difference between an authority "using oracle to host the EPSG database" and "using the Oracle SRID wkt definition"
- Keep in mind that most authority factories are about CoordinateReferenceSystem, CoordinateSystem, Datum, CoordinateOperation, etc.
- Sort order of the names matters for people reading javadocs - they should be able to see alternatives sorted together
- (note from Martin: I agree about grouping the related alternatives, but I don't think that we should do that through alphabetical order if it produces confusing names. Grouping is Javadoc's job. Current javadoc tool can do that at the packages level. Future javadoc tool will be able to do that at the classes and methods level. So lets not produce bad names for working around a temporary javadoc limitation)
- Please don’t hold up your vote over naming - if you see something you want, edit the page and vote +1
Design Changes
We are documenting this as a refactoring with BEFORE and AFTER pictures. For design alternatives please review the comments of GEOT-1286
BEFORE
For background reading on the design of the GeoTools referencing system:
Allowing Multiple Users
FactoryOnOracle (ie a BufferedAuthorityFactory) allows multiple threads, making use of an internal pool as a cache for objects already constructed. In the event of a cache miss the backingStore is used to create the required object.
FactoryUsingOracleSQL (ie a DirectAuthorityFactory acting as a "backingStore") has synchronized each and every public method call (internally it makes use of a Thread lock check to ensure that subclasses do not confuse matters).
When creating compound objects it will make a recursive call to its parent buffered FactoryOnOracle. This recursive relationship is captured in the sequence diagram above.
A Timer is used to dispose of the backingStore when no longer in use.
Cache Handling
FactoryOnOracle (ie a BufferedAuthorityFactory) makes use of a pool (a HashMap of strong and weak references) in order to track referencing objects created for use by client code. By default, the 20 most recently used objects are held by strong references, and the remaining ones are held by weak references. A second cache, findPool, makes use of a HashMap of WeakReferences in order to keep temporary referencing objects created during the use of the find method.
The garbage collector is used to clean out weak references as needed.
Connection Issues
A single connection is opened by FactoryOnOracle, and handed over to the backingStore (ie FactoryUsingSQL) on construction. This connection is closed after a 20 minute idle period (at which point the entire backingStore is shut down). This work is performed by a timer task in DeferredAuthorityFactory, not to be confused with the thread shutdown in DefaultFactory, which is a shutdown hook used to ensure the connection is closed at JVM shutdown time.
AFTER
The referencing module functions as normal, classes have been renamed according to function:
Allowing Multiple Users
EPSGOracleThreadedAuthority allows multiple threads, making use of ReferencingObjectCache in order to return objects previously constructed and an ObjectPool of workers to create new content in the event of a cache miss.
To build compound objects the workers will need to share the cache with the parent.
The following sequence diagram shows the behaviour of EPSGOracleThreadedAuthority when responding to a createDatum request. Initially the requested datum is not in the cache, so a worker is retrieved from the ObjectPool and used to perform the work. Of interest is the use of the shared ReferencingObjectCache to block subsequent workers from duplicating this activity.
Cache Handling
The cache has been isolated into a single class - ReferencingObjectCache. This class is responsible for storing strong references to objects already created and released to code outside of the referencing module.
The ReferencingObjectCache stores an internal Map<Obj,Reference> whose behaviour is described below.
- During a Cache Hit: The entry is looked up in the internal map, if the value is found it is returned. If a soft reference is found the value is extracted if needed (the soft reference may be changed into a strong reference if needed).
- During a Cache Miss: A worker is produced to construct the appropriate object; the worker will reserve the spot in the cache with a placeholder reference, produce the required object and place it into the cache. The placeholder will then be removed and all threads waiting for the value released.
- During a Cache Conflict: More than one worker is released to construct the appropriate object; the first worker will reserve using a placeholder object - and the second will block. When the placeholder is released the second worker will consider itself as having undergone a cache hit and behave as normal.
As noted above, the ReferencingObjectCache class is thread safe.
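For illustration only, the following self-contained sketch shows the same hit / miss / conflict behaviour using plain JDK classes. It is not the GeoTools ReferencingObjectCache itself, just a model of the contract described above (a per-key lock plays the role of the placeholder reference):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

/** Minimal illustration of the cache-miss "placeholder" behaviour described above. */
class SimpleObjectCache<K, V> {
    private final Map<K, V> values = new ConcurrentHashMap<>();
    private final Map<K, ReentrantLock> locks = new ConcurrentHashMap<>();

    V getOrCreate(K key, Function<K, V> worker) {
        V value = values.get(key);
        if (value != null) {
            return value;                      // cache hit
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();                           // a second worker blocks here (cache conflict)
        try {
            value = values.get(key);           // re-check: the first worker may have finished
            if (value == null) {
                value = worker.apply(key);     // expensive construction (e.g. an EPSG lookup)
                values.put(key, value);
            }
            return value;
        } finally {
            lock.unlock();                     // releases any workers waiting on this key
        }
    }
}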
The find method makes use of fully created referencing objects (like Datum and CoordinateReferenceSystem) in order to make comparisons using all available metadata. This workflow involves creating (and throwing away) lots of objects, and falls outside of our normal usage patterns.
To facilitate this work flow:
- A separate softCache is maintained - configured to only use weak references. This softCache uses the real cache as its parent.
- ReferencingObjectFactory uses an internal CanonicalSet to prevent more than one referencing object with the same definition being in circulation. The internal pool makes use of weak references.
Connection Issues
EPSGOracleThreadedAuthority is the keeper of a DataSource which is provided to EPSGOracleDirectAuthority workers on construction. The EPSGOracleDirectAuthority workers use their dataSource to create a connection as needed; they will also keep a cache of PreparedStatements opened against that connection.
The ObjectPool lifecycle methods are implemented, allowing EPSGOracleDirectAuthority objects to be notified when no longer in active use. At this point their PreparedStatements and Connection can be closed - and reclaimed by the DataSource.
We will need to make use of a single worker (and use it to satisfy multiple definitions) when implementing the find method.
By providing hints to tune the ObjectPool we can allow an application to:
- Ensure that fewer workers are in play than the number of Connections managed by the DataSource (so other oracle modules do not starve)
- Emulate the current 20 min timeout behavior
- Arrive at a compromise for J2EE applications (where a worker can free its connection the moment it is no longer in constant use)
Documentation Changes
Update Module matrix pages
Update User Guide:
- ObjectCache - done
- Update Referencing Developers Guide
Issue Tracker:
- check related issues to see if problems are affected
The SOAP stack space has gotten more crowded recently. This chart is to help you decide which stack to use.
If you have any corrections/additions please direct them to the mailing list.
While feature matrices can be helpful, we think you should keep some other points in mind which are equally important.
- Performance - XFire is one of the fastest SOAP stacks available. We'll have some benchmarks coming soon, but a rough guide is that we're 2-5x faster than Axis 1.
- Robustness - XFire is now at 1.2, it has been in development for over 2.5 years, and it has deployed in many large organizations around the world.
- Ease of Use - XFire is significantly easier to use than a lot of SOAP stacks.
- Embeddability - The SOAP stacks below have various degrees of embeddability. This may not be a factor in your application design, but here are our thoughts on the issue:
- Axis 1's big embeddability flaw is that its API was never meant to be used by an end user. Also, it uses static references to the AxisEngine everywhere in the code, making it impossible to run two completely separate instances side by side.
- Axis 2 seems to be a bit more embeddable, although the API seems kind of ugly and there isn't any documentation on the subject. (Dan Diephouse: It also makes the mistake in my opinion of trying to be the equivalent of a J2EE container, which means it mucks with classloaders, has its own deployment model, etc. That is what Spring, JBI, other containers are for, so XFire doesn't feel the need to replicate that.)
- Celtix - seems embeddable.
- Glue - definitely embeddable.
- JBossWS - We haven't had time to play with this one yet.
- XFire - Check out our sample.
Last Date of Comparison: 9/1/2006 | http://docs.codehaus.org/display/XFIRE/Stack+Comparison?showComments=true&showCommentArea=true | 2013-05-18T10:15:03 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.codehaus.org |
API15:JCacheOutput
This page documents an API of a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.

JCacheOutput is a concrete cache handler. It caches and returns output.

Defined in

libraries/joomla/cache/handler/output.php

Methods

Importing

jimport( 'joomla.cache.handler.output' );

Examples
JoomlaCode FAQs
Latest revision as of 17:14, 27 July 2008
[edit] What is JoomlaCode?
Joomlacode.org is the repository for the Joomla! source code as well as many open source Joomla! extensions. It requires separate registration from the the other joomla.org sites.
[edit] Where can I find information to checkout Joomla! using SVN?
See this link;
[edit] I registered an account on JoomlaCode.org, but I never received an activation email. What do I do?
Please ensure that the activation email is not in your spam folder. If you still can not find it, then email a JoomlaCode.org team member at and ask him/her to email the activation link to you directly.
[edit] Why did my project get rejected?.
[edit]:.
[edit] When I created my account, I typed in the wrong UNIX Name. Can you change it for me?
Unfortunately, this is something that we can not do. The only option is to request that we delete your project, so that you can resubmit it for approval.
[edit] Why I get a "Permission Denied" message when trying to view a project?
The project was migrated and still needs to be activated by the admin. As soon as we get a request from the admin, we will assign the project to them. It is the admin's responsibility to make a project public.
[edit] My project got rejected and I was asked to re-register it. Now I cannot re-submit it, I get a message saying that the project already exists. What do I do?
Re-register your project using a different "Unix Project Name". (This is a known issue that is being worked on as we speak, apologies for the inconvenience ) | http://docs.joomla.org/index.php?title=JoomlaCode_FAQs&diff=9566&oldid=881 | 2013-05-18T10:45:19 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.joomla.org |
:mod:`HTMLParser` --- Simple HTML and XHTML parser ================================================== .. module:: HTMLParser :synopsis: A simple parser that can handle HTML and XHTML. .. note:: The :mod:`HTMLParser` module has been renamed to :mod:`html.parser` in Python 3.0. The :term:`2to3` tool will automatically adapt imports when converting your sources to 3.0. .. versionadded:: 2.2 .. index:: single: HTML single: XHTML This module defines a class :class:`HTMLParser` which serves as the basis for parsing text files formatted in HTML (HyperText Mark-up Language) and XHTML. Unlike the parser in :mod:`htmllib`, this parser is not based on the SGML parser in :mod:`sgmllib`. ... Unlike the parser in :mod:`htmllib`,:: :class:`HTMLParser` base class method :meth:`close`. .. method:: HTMLParser.getpos() Return current line number and offset. .. method::.). .. method::', '')])``. .. versionchanged:: 2.6 All entity references from :mod:`htmlentitydefs` are now: | http://docs.python.org/release/2.6/_sources/library/htmlparser.txt | 2013-05-18T10:41:11 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.python.org |
Object
  DelegateFeatureIterator<F>
public class DelegateFeatureIterator<F extends Feature>
A feature iterator that completely delegates to a normal Iterator, simply allowing Java 1.4 code to escape the caste (sic) system.
This implementation is not suitable for use with collections that make use of system resources. As an alternative please see ResourceFeatureIterator.
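A typical usage sketch follows; the collection and features variables are assumed to already exist (an in-memory SimpleFeatureCollection and its backing List), so treat this as illustrative rather than taken from the GeoTools test suite:

FeatureIterator<SimpleFeature> iterator =
        new DelegateFeatureIterator<SimpleFeature>(collection, features.iterator());
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        // work with the feature
    }
} finally {
    iterator.close(); // a no-op for in-memory data, but keeps the code consistent with resource-backed iterators
}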
public DelegateFeatureIterator(FeatureCollection<? extends FeatureType,F> collection, Iterator<F> iterator)
iterator - Iterator to be used as a delegate.
public boolean hasNext()
Description copied from interface: FeatureIterator
Returns true if the iteration has more elements. (In other words, returns true if next would return an element rather than throwing an exception.)
Specified by: hasNext in interface FeatureIterator<F extends Feature>
public F next() throws NoSuchElementException
Description copied from interface: FeatureIterator
Specified by: next in interface FeatureIterator<F extends Feature>
Throws: NoSuchElementException - If no more Features exist.
public void close()
Description copied from interface: FeatureIterator
Specified by: close in interface FeatureIterator<F extends Feature>
Enabling Search Engine Friendly (SEF) URLs on Apache.
- Enable 'System - SEF' plugin
- This plugin adds SEF support to links in your Joomla articles. It operates directly on the HTML and does not require a special tag.
See also Why does your site get messed up when you turn on SEF (Search Engine Friendly URLs)?
If No Avail please see How to check if mod rewrite is enabled on your server for these Admin is responsible for ensuring alias names are unique for each page of your site. If there are two pages with the same alias, browsing to either historical search value for that page unless you take more technical measures. | http://docs.joomla.org/index.php?title=Enabling_Search_Engine_Friendly_(SEF)_URLs_on_Apache&oldid=58822 | 2013-05-18T10:13:14 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.joomla.org |
Errors/1001
Message: Failed to create data load adapter "%s"
Causes
This can happen when the data load system is trying to load an adapter that doesn't exist or is unavailable on the filesystem, or if the file exists but the appropriate class name does not.
Solutions
Data loader adapters are stored in the "/libraries/joomla/database/loader" folder, so if the file is not located there either an invalid adapter has been requested (via JDataLoad::getInstance) or the file exists however an invalid class name has been specified.
If the file doesn't exist, review the code that triggered the issue in question and validate that it is requesting an appropriate adapter. If it is requesting the appropriate adapter then ensure that the adapter has been installed onto the target site.
If the file does exist, open the file up and validate that it is using the correct naming convention. Data load adapters use the prefix "JDataLoader" followed by the name of the file.
Occurrences
- /libraries/joomla/backup/adapters/sql.php | http://docs.joomla.org/index.php?title=Errors/1001&oldid=12907 | 2013-05-18T10:12:37 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.joomla.org |
If any component is an absolute path, all previous components (on Windows, including the previous drive letter, if there was one) are thrown away, and joining continues. The return value is the concatenation of path1, and optionally path2, etc., with exactly one directory separator (os.sep) following each non-empty part except the last. (This means that an empty last part will result in a path that ends with a separator.) Note that on Windows, drive letters are treated specially. Raise a TypeError if the type of path is not str or bytes.
The Profile class of module profile was written so that derived classes could be developed to extend the profiler. The details are not described here, as doing this successfully requires an expert understanding of how the Profile class works internally. Study the source code of module profile().
The function.
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://docs.python.org/release/2.2.1/lib/node289.html | 2013-05-18T11:03:11 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.python.org |
Appendix A: Configuration Templates Structure
A set of configuration template files is structured as follows, assuming that the root folder is default/ or custom/. Editing these files changes the Apache configuration.
Scorecard mobile interface

The scorecard mobile interface enables you to interact with scorecards. You can perform many of the same actions on a scorecard in the mobile interface as in the standard web interface. For example, you can apply aggregates and breakdowns, view the score at specific dates, and view target and gap information.

The scorecard mobile interface is divided into three main sections.

The top section shows the indicator details such as the indicator name, score, the selected aggregate, and target information if targets are defined for the indicator. You can change the aggregation by tapping on the current aggregate in the top-right corner, such as Daily. Tap on the information icon to view metadata about the indicator, such as the formula for formula indicators.

The center section shows all collected scores as a graph. You can pinch to zoom in and out, or select a specific date by tapping on the graph. Selecting a specific date causes the top section to display details for the selected date instead of for the most recent score.

The bottom section displays breakdown information. You can select a breakdown by tapping on the breakdown name, such as Priority. Available breakdown elements and the score for each element appear below the breakdown. Tap on a breakdown element to filter the scorecard by that breakdown and element. The breakdown section does not appear if you have already selected both first and second level breakdowns.
Configuring Bareos to store backups on Gluster
This description assumes that you already have a Gluster environment ready and
configured. The examples use
storage.example.org as a Round Robin DNS name
that can be used to contact any of the available GlusterD processes. The
Gluster Volume that is used, is called
backups. Client systems would be able
to access the volume by mounting it with FUSE like this:
# mount -t glusterfs storage.example.org:/backups /mnt
Bareos contains a plugin for the Storage Daemon that uses
libgfapi. This makes it possible for Bareos to access the Gluster Volumes
without the need to have a FUSE mount available.
Here we will use one server that is dedicated for doing backups. This system is
called
backup.example.org. The Bareos Director is running on this host,
together with the Bareos Storage Daemon. In the example, there is a File Daemon
running on the same server. This makes it possible to backup the Bareos
Director, which is useful as a backup of the Bareos database and configuration
is kept that way.
Bareos Installation
An absolute minimal Bareos installation needs a Bareos Director and a Storage Daemon. In order to backup a filesystem, a File Daemon needs to be available too. For the description in this document, CentOS-7 was used, with the following packages and versions:
- glusterfs-3.7.4
- bareos-14.2 with bareos-storage-glusterfs
The Gluster Storage Servers do not need to have any Bareos packages installed.
It is often better to keep applications (Bareos) and storage servers on
different systems. So, when the Bareos repository has been configured, install
the packages on the
backup.example.org server:
# yum install bareos-director bareos-database-sqlite3 \ bareos-storage-glusterfs bareos-filedaemon \ bareos-bconsole
To keep things as simple as possible, SQLite is used. For production deployments either MySQL or PostgreSQL is advised. It is needed to create the initial database:
# sqlite3 /var/lib/bareos/bareos.db < /usr/lib/bareos/scripts/ddl/creates/sqlite3.sql # chown bareos:bareos /var/lib/bareos/bareos.db # chmod 0660 /var/lib/bareos/bareos.db
The
bareos-bconsole package is optional.
bconsole is a terminal application
that can be used to initiate backups, check the status of different Bareos
components and the like. Testing the configuration with
bconsole is
relatively simple.
Once the packages are installed, you will need to start and enable the daemons:
# systemctl start bareos-sd
# systemctl start bareos-fd
# systemctl start bareos-dir
# systemctl enable bareos-sd
# systemctl enable bareos-fd
# systemctl enable bareos-dir
Gluster Volume preparation
There are a few steps needed to allow Bareos to access the Gluster Volume. By default Gluster does not allow clients to connect from an unprivileged port. Because the Bareos Storage Daemon does not run as root, permissions to connect need to be opened up.
There are two processes involved when a client accesses a Gluster Volume. For the initial phase, GlusterD is contacted, when the client received the layout of the volume, the client will connect to the bricks directly. The changes to allow unprivileged processes to connect, are therefore twofold:
- In /etc/glusterfs/glusterd.vol the option rpc-auth-allow-insecure on needs to be added on all storage servers. After the modification of the configuration file, the GlusterD process needs to be restarted with systemctl restart glusterd.
- The brick processes for the volume are configured through a volume option. By executing gluster volume set backups server.allow-insecure on the needed option gets set. Some versions of Gluster require a volume stop/start before the option is taken into account; for these versions you will need to execute gluster volume stop backups and gluster volume start backups (see the combined commands below).
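Putting the two steps above together: on every storage server, add the line option rpc-auth-allow-insecure on to /etc/glusterfs/glusterd.vol (inside the management volume block), then restart GlusterD and set the volume option:

# systemctl restart glusterd
# gluster volume set backups server.allow-insecure on

On Gluster versions that require a stop/start for the option to take effect:

# gluster volume stop backups
# gluster volume start backups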
Except for the network permissions, the Bareos Storage Daemon needs to be allowed to write to the filesystem provided by the Gluster Volume. This is achieved by setting normal UNIX permissions/ownership so that the right user/group can write to the volume:
# mount -t glusterfs storage.example.org:/backups /mnt # mkdir /mnt/bareos # chown bareos:bareos /mnt/bareos # chmod ug=rwx /mnt/bareos # umount /mnt
Depending on how users/groups are maintained in the environment, the
bareos
user and group may not be available on the storage servers. If that is the
case, the
chown command above can be adapted to use the
uid and
gid of
the
bareos user and group from
backup.example.org. On the Bareos server,
the output would look similar to:
# id bareos uid=998(bareos) gid=997(bareos) groups=997(bareos),6(disk),30(tape)
And that makes the
chown command look like this:
# chown 998:997 /mnt/bareos
Bareos Configuration
When
bareos-storage-glusterfs got installed, an example configuration file
has been added too. The
/etc/bareos/bareos-sd.d/device-gluster.conf contains
the
Archive Device directive, which is a URL for the Gluster Volume and path
where the backups should get stored. In our example, the entry should get set
to:
Device { Name = GlusterStorage Archive Device = gluster://storage.example.org/backups/bareos Device Type = gfapi Media Type = GlusterFile ... }
The default configuration of the Bareos provided jobs is to write backups to
/var/lib/bareos/storage. In order to write all the backups to the Gluster
Volume instead, the configuration for the Bareos Director needs to be modified.
In the
/etc/bareos/bareos-dir.conf configuration, the defaults for all jobs
can be changed to use the
GlusterFile storage:
JobDefs { Name = "DefaultJob" ... # Storage = File Storage = GlusterFile ... }
After changing the configuration files, the Bareos daemons need to apply them.
The easiest way to inform the processes of the changed configuration files is by
instructing them to
reload their configuration:
# bconsole Connecting to Director backup:9101 1000 OK: backup-dir Version: 14.2.2 (12 December 2014) Enter a period to cancel a command. *reload
With
bconsole it is also possible to check if the configuration has been
applied. The
status command can be used to show the URL of the storage that
is configured. When all is set up correctly, the result looks like this:
*status storage=GlusterFile Connecting to Storage daemon GlusterFile at backup:9103 ... Device "GlusterStorage" (gluster://storage.example.org/backups/bareos) is not open. ...
Create your first backup
There are several default jobs configured in the Bareos Director. One of them
is the
DefaultJob which was modified in an earlier step. This job uses the
SelfTest FileSet, which backs up
/usr/sbin. Running this job will verify if
the configuration is working correctly. Additional jobs, other FileSets and
more File Daemons (clients that get backed up) can be added later.
*run A job name must be specified. The defined Job resources are: 1: BackupClient1 2: BackupCatalog 3: RestoreFiles Select Job resource (1-3): 1 Run Backup job JobName: BackupClient1 Level: Incremental Client: backup-fd ... OK to run? (yes/mod/no): yes Job queued. JobId=1
The job will need a few seconds to complete; the
status command can be used
to show the progress. Once done, the
messages command will display the
result:
*messages ... JobId: 1 Job: BackupClient1.2015-09-30_21.17.56_12 ... Termination: Backup OK
The archive that contains the backup will be located on the Gluster Volume. To check if the file is available, mount the volume on a storage server:
# mount -t glusterfs storage.example.org:/backups /mnt # ls /mnt/bareos
Further Reading
This document intends to provide a quick start of configuring Bareos to use Gluster as a storage backend. Bareos can be configured to create backups of different clients (which run a File Daemon), run jobs at scheduled time and intervals and much more. The excellent Bareos documentation can be consulted to find out how to create backups in a much more useful way than can get expressed on this page. | https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Bareos/ | 2018-05-20T13:45:31 | CC-MAIN-2018-22 | 1526794863570.21 | [] | gluster.readthedocs.io |
Purpose¶
This extension loads a tex file whenever a notebook is loaded, then re-runs mathjax. It’s useful if you have several notebooks that use a common set of latex macros, so you don’t have to copy the macros to each notebook.
Usage¶
Simply put your latex macros in a file named latexdefs.tex, in the same directory as your notebook. | https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/load_tex_macros/readme.html | 2018-05-20T13:36:23 | CC-MAIN-2018-22 | 1526794863570.21 | [] | jupyter-contrib-nbextensions.readthedocs.io |
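For example, a minimal latexdefs.tex could contain a few macros like the following (the macro names are just examples; MathJax understands \newcommand definitions):

\newcommand{\R}{\mathbb{R}}
\newcommand{\E}[1]{\mathbb{E}\left[#1\right]}
\newcommand{\norm}[1]{\left\lVert #1 \right\rVert}

After the notebook loads, any markdown cell can then use \R, \E{X} or \norm{x} in math mode.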
Activate sub.
List of plugins (Kingston) - Plugins available in the Kingston release.
ServiceNow plugins - Plugins are software components that provide specific features and functionalities within a ServiceNow instance.
Related Topics: List of plugins (Kingston)
Set up Multi-Provider SSO

You must perform several steps to set up Multi-Provider SSO, including configuring properties, creating identity providers (IdPs), and configuring users to use SSO.

Configure Multi-Provider SSO properties - Configure SSO properties and also add a property to the System Properties table to configure an IdP white list.
Create a SAML 2.0 configuration using Multi-Provider SSO - You can create or update a SAML 2.0 SSO configuration from the Multi-Provider SSO feature.
Create and update identity providers - After you have configured the multi-provider SSO properties, you can update or create new SAML 2.0 Update 1 or digest token identity providers.
Configure users for Multi-Provider SSO - Administrators can configure Multi-Provider SSO for individual users or for all users who belong to a company. You cannot configure Multi-Provider SSO for groups.
Testing IdP connections - Testing the connection to an IdP validates the settings before enabling external authentication.
Log in using Multi-Provider SSO - The recommended and most efficient method for users to log in using Multi-Provider SSO is to use a specifically configured URL.
Allow users to choose the identity provider for login - SSO federation support allows users to choose which IdP to log into.
Use ESS pages with Multi-Provider SSO - You can redirect ESS users to an employee self-service page by adding a system property.
Use Multi-Provider SSO to set up an SSO approval for a SAML 2.0 authentication - An SSO approval with e-signature requires configuration on the SAML IdP and the ServiceNow instance.
Variable Inspector¶
Description and main features¶
The Variable Inspector extension, which currently supports python and R kernels, enables to collect all defined variables and display them in a floating window. The window not only display the name of variables but also their type, size in memory and content. The columns are sortable. The window is draggable, resizable, collapsable. The list of displayed variables is automatically updated at each cell execution. Variables can be deleted from workspace by clicking a link. Position and state (displayed/collapsed) are stored in the notebook’s metadata and restored at startup.
The extension supports multiple kernels. To add support for a new kernel, one has to
- provide a library which loads required modules and define a function which lists all variables, together with their name, type, size and content. The output of this function must be a JSON representation of a list of objects (one for each variable) with keys ‘varName’,’varType’, ‘varSize’, ‘varContent’,
- provide the command for deleting a variable, as delete_cmd_prefix and delete_cmd_postfix, e.g. for rm(variable), specify rm( and ).
- give the command to refresh the list of variables (usually this is a call to the function defined in the library above). This information can be provided either in the source file or in the yaml config file.
In any case, contributions to support further kernels will be very welcome!
Configuration¶
The initial configuration can be given using the IPython-contrib nbextensions facility. It includes:
- varInspector.window_display - Display at startup or not (default: false)
- varInspector.cols.lenName: (and .lenType, .lenVar) - Width of columns (actually the max number of character to display in each column)
- varInspector.kernels_config - json object defining the kernels specific code and commands.
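As a sketch, a kernels_config entry might look like the following. The delete_cmd_prefix/delete_cmd_postfix values for R come directly from the description above, while the remaining key names and file names are illustrative assumptions rather than the exact schema:

{
  "kernels_config": {
    "python": {
      "library": "var_list.py",
      "delete_cmd_prefix": "del ",
      "delete_cmd_postfix": "",
      "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
      "library": "var_list.r",
      "delete_cmd_prefix": "rm(",
      "delete_cmd_postfix": ") ",
      "varRefreshCmd": "cat(var_dic_list())"
    }
  }
}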
Notes¶
- The displayed size of variables uses the getsizeof() Python method. This method doesn’t work for all types, so the reported size is to be considered with some caution. The extension includes some code to correctly return the size of numpy arrays, pandas Series and DataFrame but the size for some other types may be incorrect.
- The extension builds on some code provided here (essentially the _fill method)
- The extension uses Christian Bach’s table sorter jquery plugin. License file is included. | http://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/varInspector/README.html | 2018-05-20T13:43:36 | CC-MAIN-2018-22 | 1526794863570.21 | [] | jupyter-contrib-nbextensions.readthedocs.io |
Configuring contacts
Contacts represent website visitors and store marketing-related information about them. In the Kentico CMS edition, contacts cover only subscribed visitors (identified by email address). In the Kentico EMS edition, contacts cover both anonymous visitors (identified by an HTTP cookie) and registered users or customers (identified by email address). The system automatically gathers data about contacts based on the actions and input of the associated visitors.
To be able to work with contacts in the Contact management application, you need to have the permissions for the On-line marketing and Contact management modules.
Enabling on-line marketing in Kentico EMS
To be able to track visitors on the live site as contacts in the Kentico EMS edition, you need to enable the on-line marketing functionality in the Settings application.
- Open the Settings application
- Navigate to On-line marketing.
- Select the Enable on-line marketing check box.
- Click Save.
Configuring contacts in Kentico EMS
In the Kentico EMS edition, you can make the following adjustments to enable marketers to work with contacts more efficiently.
To make the system automatically collect and update the data of each contact based on the information provided by the corresponding user, you need to:
To store and organize data about contacts that is not collected by the default fields, you need to:
To keep track of the contacts who are currently visiting your websites and to monitor how many visitors a site has at any given time, you need to:
To achieve better performance with Kentico EMS websites by reducing the volume of contact management data, you need to:
Device Imaging Troubleshooting
5/4/2012
This section provides instructions for resolving issues with device imaging.
Device Imaging Reports Are Outdated
If you rename a device imaging request, two status reports will be listed with the same GUID. The status report with the original name will not be updated. Make sure to check the status report with the new name.
Device Imaging Deployments Might Re-image Indefinitely or Reports Are Incorrect.
Devices Re-Image Indefinitely If Device Imaging Deployments Don't Exclude Devices After Re-Imaging.
Schedule Home Page Summarization Does Not Work Correctly
Clicking on the Device Imaging node of the Configuration Manager will refresh the summary information for all device imaging requests, regardless of the time set in the Schedule Home Page Summarization. There is no workaround for this issue at this time.
The Device Imaging Home Page Lists More Devices Than the Device Imaging Reports.
Previously Created Device Imaging Deployments No Longer Display.
Note
For more information about the Application Event Log, see How to Check the Application Event Log for Errors in Configuration Manager Help.
To stop the Device Imaging service and repair the Osdjobs.xml file
To stop the Device Imaging service, open a Command Prompt window and enter the following Services Snap-in command:
net stop “EDM Device Imaging Service”
Note
For more information about how to use the Services Snap-in commands, see Start, stop, pause, resume, or restart a service on TechNet.
Note
If the problem continues with that deployment, run a deployment that has completed successfully in the past. Browse to %programdata%\Microsoft\EDM\LocalStore\OSD and compare the successful deployment’s Osdjobs.xml file to the Osdjobs.xml_old file.
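After you repair or replace the Osdjobs.xml file, restart the service with the corresponding Services Snap-in command:

net start "EDM Device Imaging Service"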
The All Windows Embedded Devices Collection and Subcollections Behave Unexpectedly After Device Imaging. | https://docs.microsoft.com/en-us/previous-versions/windows/embedded/gg749260(v=technet.10) | 2018-05-20T14:26:38 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
sys.databases (Transact-SQL)

SQL Database

Remarks
Switch security provides stateless Layer 2 and Layer 3 security by checking the ingress traffic to the logical switch and dropping unauthorized packets sent from VMs by matching the IP address, MAC address, and protocols to a set of allowed addresses and protocols. You can use switch security to protect the logical switch integrity by filtering out malicious attacks from the VMs in the network.
You can configure the Bridge Protocol Data Unit (BPDU) filter, DHCP Snooping, DHCP server block, and rate limiting options to customize the switch security switching profile on a logical switch. | https://docs.vmware.com/en/VMware-NSX-T/2.0/com.vmware.nsxt.admin.doc/GUID-7ED7CEC6-A8D0-4EA5-B7C6-05412A2A1322.html | 2018-05-20T14:02:54 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.vmware.com |
Preparing a release¶
Things to do for releasing:
announce intent to release on gitter
check for open issues / pull requests that really should be in the release
come back when these are done
… or ignore them and do another release next week
check for deprecations “long enough ago” (two months or two releases, whichever is longer)
remove affected code
Do the actual release changeset
bump version number
increment as per Semantic Versioning rules
remove +dev tag from version number
Run towncrier
review history change
git rm changes
commit
push to your personal repository
create pull request to python-trio/trio's "master" branch
verify that all checks succeeded
tag with vVERSION, push tag on python-trio/trio (not on your personal repository)
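For the tagging step, a typical invocation looks like this (the version number and the name of the python-trio/trio remote are placeholders):

git tag -a v0.X.0 -m "trio v0.X.0"
git push upstream v0.X.0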
push to PyPI:
git clean -xdf   # maybe run 'git clean -xdn' first to see what it will delete
python3 setup.py sdist bdist_wheel
twine upload dist/*
update version number in the same pull request
add +dev tag to the end
merge the release pull request
announce on gitter | https://trio.readthedocs.io/en/latest/releasing.html | 2021-01-16T00:17:39 | CC-MAIN-2021-04 | 1610703497681.4 | [] | trio.readthedocs.io |
@ConsumerType
public interface ResolverHook

Resolver hook instances are obtained from a ResolverHookFactory service. The framework calls the ResolverHookFactory.begin(Collection) method to inform the hooks about a resolve process beginning and to obtain a ResolverHook instance that will be used for the duration of the resolve process.
During the resolve process the framework then follows these steps:
- Determine the collection of bundle revisions that may be resolved and place them in a shrinkable collection Resolvable. For each resolver hook call the filterResolvable(Collection) method with the shrinkable collection Resolvable.
- For a singleton bundle revision B, determine the capabilities that have the namespace osgi.identity, are singletons, and have the same symbolic name as the singleton bundle revision B, and place each of the matching capabilities into a shrinkable collection Collisions.
- Remove the osgi.identity capability provided by bundle revision B from shrinkable collection Collisions. A singleton bundle cannot collide with itself.
- For each resolver hook call filterSingletonCollisions(BundleCapability, Collection) with the osgi.identity capability provided by bundle revision B and the shrinkable collection Collisions.
- The shrinkable collection Collisions now contains all singleton osgi.identity capabilities that can influence the ability of bundle revision B to resolve.
- If the bundle revision B is already resolved then any resolvable bundle revision contained in the collection Collisions is not allowed to resolve.
- For each bundle revision B left in the shrinkable collection Resolvable which the framework attempts to resolve the following steps must be followed:
  - For each requirement R specified by bundle revision B determine the collection of capabilities that satisfy (or match) the requirement and place each matching capability into a shrinkable collection Candidates. A capability is considered to match a particular requirement if its attributes satisfy a specified requirement and the requirer bundle has permission to access the capability.
  - For each resolver hook call the filterMatches(BundleRequirement, Collection) method with the requirement R and the shrinkable collection Candidates.
  - The shrinkable collection Candidates now contains all the capabilities that may be used to satisfy the requirement R. Any other capabilities that got removed from the shrinkable collection Candidates must not be used to satisfy requirement R.
- For each resolver hook call the end() method to inform the hooks about a resolve process ending.
A resolver hook must not cause a new resolve process to begin, for example by calling Bundle.start() or FrameworkWiring.resolveBundles(Collection). The framework must detect this and throw an IllegalStateException.
See Also: ResolverHookFactory
void filterResolvable(java.util.Collection<BundleRevision> candidates)
candidates - the collection of resolvable candidates available during a resolve process.
void filterSingletonCollisions(BundleCapability singleton, java.util.Collection<BundleCapability> collisionCandidates)
singleton - the singleton involved in a resolve process
collisionCandidates - a collection of singleton collision candidates
void filterMatches(BundleRequirement requirement, java.util.Collection<BundleCapability> candidates)
All of the candidates will have the same namespace and will match the specified requirement.
If the Java Runtime Environment supports permissions then the collection of candidates will only contain candidates for which the requirer has permission to access.
requirement - the requirement to filter candidates for
candidates - a collection of candidates that match the requirement
void end()
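As an illustration (not part of the specification text), a minimal no-op hook and its registration could look like this; the class and method bodies are a sketch of where filtering logic would go:

import java.util.Collection;
import org.osgi.framework.BundleContext;
import org.osgi.framework.hooks.resolver.ResolverHook;
import org.osgi.framework.hooks.resolver.ResolverHookFactory;
import org.osgi.framework.wiring.BundleCapability;
import org.osgi.framework.wiring.BundleRequirement;
import org.osgi.framework.wiring.BundleRevision;

/** Minimal example: a hook that lets every candidate through unchanged. */
public final class NoOpResolverHookFactory implements ResolverHookFactory {

    @Override
    public ResolverHook begin(Collection<BundleRevision> triggers) {
        return new ResolverHook() {
            @Override
            public void filterResolvable(Collection<BundleRevision> candidates) {
                // remove entries here to prevent specific revisions from resolving
            }

            @Override
            public void filterSingletonCollisions(BundleCapability singleton,
                    Collection<BundleCapability> collisionCandidates) {
                // remove entries here to allow otherwise colliding singletons
            }

            @Override
            public void filterMatches(BundleRequirement requirement,
                    Collection<BundleCapability> candidates) {
                // remove entries here to hide capabilities from the requirement
            }

            @Override
            public void end() {
                // release any state held for this resolve process
            }
        };
    }

    // Registration, typically done from a BundleActivator:
    static void register(BundleContext context) {
        context.registerService(ResolverHookFactory.class, new NoOpResolverHookFactory(), null);
    }
}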
Copyright © OSGi Alliance (2000, 2020). All Rights Reserved. Licensed under the OSGi Specification License, Version 2.0 | http://docs.osgi.org/javadoc/osgi.core/8.0.0/org/osgi/framework/hooks/resolver/ResolverHook.html | 2021-01-16T00:24:18 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.osgi.org |
Appspace cards are HTML-5 based templates available in the Library that you can use to create engaging, interactive content. Versatile and easy to create using the available themes in the Appspace console, cards support a wide variety of content, including images, text, video, data, and more. Cards are an easy but powerful medium to create the desired messaging ideal for any type of information needed for digital signage and workplace initiatives.
All cards are officially created by Appspace. Most of our cards receive periodic updates, new templates, and enhanced features from Appspace, and are known as Appspace supported cards. We also have cards that are developed in collaboration with vendors and cards that will be customized and supported by any third-party community developer using Appspace card APIs – these are known as community cards.
In this article, we have categorized cards based on the different experiences that they are ideal for, as follows:
- Cards are only supported on devices compatible with the Appspace App. Legacy devices are not supported.
- All cards are Appspace supported cards unless specified otherwise.
- LG: It is recommended that animations be turned off as LG devices have performance issues with transitions.
- BrightSign:
-.
- While BrightSign HD devices support cards, animation performance may be affected.
- Refer to Compatible Content & Device Capabilities to view other device capabilities.
Messaging
The cards ideal for messaging that we currently have are:
- Alert card – to display important or critical messages immediately, or at a scheduled time.
- Announcement card – ideal for displaying corporate messages, announcements, wallpapers, and signage, with interactivity and transitions.
- Countdown card – ideal to display a count-down or count-up timer with a message.
- Milestone card – to display multiple occasions or milestones in advance, all with its own messaging, event type, and display dates.
- Existing Appspace Announcement card templates (prior to September 22 2018) will be disabled by default. Administrators may enable it if needed. However, user created themes will still remain in the Library.
- 1 Video backgrounds not supported on Announcement card for BrightSign devices, as these devices do not support layering on top of videos.
- 2 Video backgrounds not supported on Countdown card for Crestron devices.
Data
These room scheduling cards are ideal for displaying room schedules with real-time booking capabilities:
- Data Visualization card – choose from a myriad of chart types to display such as leaderboard, line, bar, donut, progress bar, and progress donut.
- Table card – display data in a table, either by manually adding the data into the table, or importing it from an Excel spreadsheet.
- 1 Only supported on Crestron TSS.
Feeds
We currently have these cards to display charts, tables, or spreadsheets:
- Facebook Card – to display posts and hashtags from the Facebook platform.
- Google Sheets card – display a Google Sheets spreadsheet directly on your card using a link to your file.
- Google Slides card – display a Google Slides presentation directly on your card using a link to your file.
- Instagram Card – to display posts and hashtags from the Instagram platform.
- RSS card – ideal for displaying one or multiple RSS feeds.
- Twitter Card – to display tweets and hashtags from the Twitter platform.
- Weather card – to display local weather conditions.
- Web View card – display the contents from an external website on the card with multiple embedding options available.
- Webex Recording card – displays recorded Cisco Webex Meetings.
- World Clock card – display the time of up to eight cities simultaneously.
- YouTube card – display YouTube videos with customizable video resolutions.
- Zoom Recording card – displays recorded Zoom Meetings.
- 1 Only supported on Crestron TSS.
- 1 Supported RSS formats: RSS 1.0, RSS 2.0, mRSS, Atom 1.0
Services
Cards in this category include:
- Room Schedule card – ideal for single-room scheduling, with devices places outside conference rooms for instant touch-screen bookings.
- Schedule Board card – ideal for displaying the schedules of multiple rooms in an organization on large TVs.
- 1 Interactivity is not supported on BrightSign, LG, and MediaVue devices, thus only the meeting room information is displayed, while bookings must be done via the calendar.
- 2 Only supported on Crestron TSS.
Community Cards
We currently have the following community card available:
- Social Media by Seenspire card – a card to display real-time social media feeds available through a Seenspire account. | https://docs.appspace.com/latest/how-to/appspace-supported-cards/ | 2021-01-16T00:34:43 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.appspace.com |
Clone & Commit via HTTP
Clone, edit, commit, push and pull can be performed using Git directly from the command line, by using a Git client, or via the web interface. The first option is shown below and in the section Clone & Commit via SSH. The last option is detailed in the section Clone & Commit via Web.
Clone

Clone the repository using the Git command clone followed by the repo URL.
~$ git clone
Cloning into 'foobar'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), 214 bytes | 1024 bytes/s, done.
Edit
Modify an existing file:
~$ cd foobar
~/foobar$ vim README.md
Commit
A commit is a record of the changes to the repository. This is like a snapshot of your edits. A commit requires a commit message. For the example below, the message is "test". Keep in mind that "test" is not a very informative message, though. In the real world, make sure your commit message is informative, for you, your collaborators and anyone who might be interested in your work. Some advice on how to write a good commit message can be found on countless websites and blogs!
Command lines:
~/foobar$ git commit -am 'test'
[master 10074d7] test
1 file changed, 2 insertions(+), 1 deletion(-)
Push
The last step is to synchronize (push) the modifications (commit) from the local repository to the remote one on Codeberg.
~/foobar$ git push
Username for '': knut
Pull
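Pulling works the same way from inside the repository: the command below fetches and merges changes that were made on Codeberg (for example through the web interface) into your local copy.

~/foobar$ git pull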
x-docdb¶
Syntax¶
x-docdb: docdb-01: Properties: {} Settings: {} Services: [] Lookup: {} MacroParameters: {}
Tip
For production workloads, to avoid any CFN deadlock situations, I recommend you generate the CFN templates for docdb, and deploy the stacks separately. Using Lookup you can use existing DocDB clusters with your new services.
Properties¶
DocDB Cluster is rather very simple in its configuration. There aren’t 200+ combinations of EngineName and Engine Version as for RDS, make life very easy.
Howerver you can copy-paste all the properties you would find in the DocDB Cluster properties, some properties will be ignored in order to keep the automation going:
- MasterUsername and MasterUserPassword
These two will be auto generated and stored in secrets manager. The services linked to it will be granted GetSecretValue to it.
- VpcSecurityGroupIds
The security group will be generated for the DB specifically and allow services listed only.
- AvailabilityZones
Under trial, but not sure given that we give a Subnet Group why one would also define the AZs and it might conflict.
- DBClusterIdentifier
As usual, named resources make for a nightmare to rename etc. Instead, there will be a Name tag associated with your Cluster.
- DBSubnetGroupName
Equally gets created only. For now.
- SnapshotIdentifier
Untested - 2020-11-13 - will support it later.
Services¶
The syntax for listing the services remains the same as the other x- resources.
Services: - name: <service/family name> access: <str>
MacroParameters¶
These parameters will allow you to define extra parameters to define your cluster successfully. In the future you should be able to define your DocDB Cluster Parameters there.
Instances¶
List of DocDB instances. The aspiration is to follow the same syntax as the DocDB Instance.
Note
Not all Properties are respected, instead, they follow logically the attachment to the DocDB Cluster.
Instances:
  - DBInstanceClass: <db instance type>
    PreferredMaintenanceWindow: <window definition>
    AutoMinorVersionUpgrade: bool
Hint
If you do not define an instance, ECS ComposeX automatically creates a new one with a single node of type db.t3.medium
Settings¶
The only setting for DocumentDB is EnvNames, as for every other resource.
Hint
Given that the DB Secret attachment populates host, port etc., we expose as env vars the Secret associated to the DB, not the DB itself.
Lookup¶
Lookup for Document DB is available!
Warning
For some reason the group resource tag API returns two different clusters even though they are the same one. Make sure to specify the Name along with Tags until we figure an alternative solution. Sorry for the inconvenience.
Credentials¶
The credentials structure remains the same as for RDS SQL versions.
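For illustration only, the generated secret typically carries the connection details that the attachment exposes; the exact key names below are an assumption based on the host/port attachment described above, not a guarantee from the ECS ComposeX documentation:

{
  "username": "<auto-generated master username>",
  "password": "<auto-generated master password>",
  "host": "<cluster endpoint>",
  "port": 27017
}

Port 27017 is the DocumentDB default; services read these values thanks to the GetSecretValue permission mentioned earlier.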
Examples¶
---
# DOCDB Simple use-case. Creating new DBs
x-docdb:
  docdbA:
    Properties: {}
    Settings:
      EnvNames:
        - DOCDB_A
    Services:
      - name: app03
        access: RW
  docdbB:
    Properties: {}
    Settings:
      EnvNames:
        - DOCDB_A
    Services:
      - name: app03
        access: RW
    MacroParameters:
      Instances:
        - DBInstanceClass: db.r5.large
        - DBInstanceClass: db.r5.xlarge
          AutoMinorVersionUpgrade: True
---
# DOCDB Simple use-case. Creating new DBs
x-docdb:
  docdbA:
    Properties: {}
    Settings:
      EnvNames:
        - DOCDB_A
    Services:
      - name: app03
        access: RW
  docdbB:
    Settings:
      EnvNames:
        - DOCDB_A
    Services:
      - name: app03
        access: RW
    Lookup:
      cluster:
        Name: docdbb-purmjgtgvyqr
        Tags:
          - CreatedByComposeX: "true"
          - Name: docdb.docdbB
      secret:
        Tags:
          - aws:cloudformation:logical-id: docdbBSecret
| https://docs.ecs-composex.lambda-my-aws.io/syntax/composex/docdb.html | 2021-01-15T23:18:26 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.ecs-composex.lambda-my-aws.io
Check to ensure that your process flow looks similar to the image at the beginning of this step.
In this step, you'll add the logic for the process flow you created in the previous step. You'll create the logic that will randomly generate phone calls and then instruct the clerk to answer those calls.
To build this logic, configure the process flow activities with the values given in the tutorial, such as minutes(5) and minutes(1) for the timing fields, the token and token.Patient label references, and the value 200.00.
Save, then reset and run the simulation model.
As you watch, notice that when a token is created in the Phone Calls flow, an RN or other available staff is diverted to answer the phone.
Leave the Time Table properties window open and continue to the next step.
At this point, the model is practically done. You can stop here or if you'd like to learn one more useful trick continue on to the last task where you'll learn how to make a custom location object Tutorial Task 1.6 - Add Custom Location. | https://docs.flexsim.com/en/20.0/Tutorials/FlexSimHC/1-5CreateIndependentTasks/1-5CreateIndependentTasks.html | 2021-01-16T00:52:04 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.flexsim.com |
How to use the CIM data model reference tables
Each topic in this section contains a use case for the data model, a breakdown of the required tags for the event datasets or search datasets in that model, and a listing of all extracted and calculated fields included in the model.
A dataset is a component of a data model. In versions of the Splunk platform prior to version 6.5.0, these were referred to as data model objects.
The tags tables communicate which tags you must apply to your events in order to make them CIM-compliant. These tags act as constraints to identify your events as relevant to this data model, so that this data is included in Pivot reports, searches, and dashboards based on this model.
There might be additional constraints outside the scope of these tables. Refer to the data model itself using its editor view in Splunk Web for required fields, field=value combinations, or base searches that the model depends on.
Apply tags to your events to ensure your data is populated in the correct dashboards, searches, and Pivot reports.
- Identify the CIM data model relevant to your events.
- Identify the dataset within that model that is relevant to your events.
- Observe which tags are required for that dataset.
- Observe which tags are required for any parent datasets.
- Observe any other constraints relevant to the dataset or its parents.
- Apply those tags and other constraints to your events using event types.
- Repeat for any additional relevant CIM datasets.
For a detailed walkthrough of these steps, see Use the CIM to normalize data at search time.
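As a minimal sketch of step 6, an event type plus a tag can be declared in a pair of .conf stanzas; the event type name, sourcetype, and search string below are illustrative assumptions, not values from this manual:

# eventtypes.conf -- match the events that belong to the chosen data model
[sample_vendor_authentication]
search = sourcetype=sample:vendor:auth action=*

# tags.conf -- apply the tag that the data model dataset requires
[eventtype=sample_vendor_authentication]
authentication = enabled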
How to read the fields tables
The fields tables list the extracted fields and calculated fields for the event and search datasets in the model and provide descriptions and expected values (if relevant) for these fields.
How to find a field
The table presents the fields in alphabetical order, starting with the fields for the root datasets in the model, then proceeding to any unique fields for child datasets. The table does not repeat any fields that a child dataset inherits from a parent dataset, so refer to the parent dataset to see the description and expected values for that field.
Because the fields tables exclude inherited fields, many child datasets have no fields listed in the table at all. Those child datasets include only inherited fields from one or more of their parent datasets, so there are no unique extracted or calculated fields to display. All data models inherit the fields _time, host, source, and sourcetype, so those fields are always available to you for use in developing Pivot reports, searches, and dashboards.
How to interpret the expected values
For some fields, the tables include one or more expected values for that field. These expected values include:
- values that are used in knowledge objects in downstream applications such as Splunk Enterprise Security (in the table as "ES expects")
- values that are used in the CIM model as constraints for a dataset (in the table as "Other")
In some cases, the expected values also include additional values that Splunk suggests as the normalized standards for a field. The expected values are provided to help you make normalization decisions when developing add-ons. They are not exhaustive or exclusive.
Use the tables to apply the Common Information Model to your data
The tables in this section of documentation are intended to be supplemental reference for the data models themselves. Use the documentation and the data model editor in Splunk Web together. You can also access all of the information about a data model's dataset hierarchy, fields, field descriptions, and expected values in the JSON file of the model. You can browse the JSON in the $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/default/data/models directory.
Prerequisite
You need Write access to a data model in order to browse it in its editor view. If you do not have this access, request it from your Splunk administrator.
Steps
- In Splunk Web, go to Settings > Data Models to open the Data Models page.
- Click a data model to view it in an editor view. There, you can see the full dataset hierarchy, a complete listing of constraints for each dataset, and full listing of all inherited, extracted, and calculated fields for each dataset.
- Compare this information with the reference tables in the documentation for descriptions and expected values of the fields in each datasets.
How to access information directly from the JSON files
As shown in the table in the previous section, each data model's JSON file contains all the information about the model structure and its fields, so you can access this information programmatically. Several parameters formerly available only in the documentation are now available in the JSON's comment field. The format for this field is {"description": "Description of the field.", "expected_values": ["val 1", "val 2"], "ta_relevant": true|false}.
This documentation applies to the following versions of Splunk® Common Information Model Add-on: 4.8.0, 4.9.0, 4.9.1, 4.10.0, 4.11.0, 4.12.0, 4.13.0, 4.14.0, 4.15.0, 4.16.0, 4.17.0, 4.18.0
| https://docs.splunk.com/Documentation/CIM/4.14.0/User/Howtousethesereferencetables | 2021-01-16T00:40:08 | CC-MAIN-2021-04 | 1610703497681.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
<CSP:QUERY>
Define and execute a predefined class query
Synopsis
<CSP:QUERY>
Attributes
General Attributes
Description
The CSP:QUERY tag creates a %ResultSet object based on a query defined within a Caché class. This tag defines a server-side variable, whose name is specified by the tag's name attribute, that refers to a %ResultSet object. The %ResultSet object is automatically executed (by calling its Execute method using the parameter values given by the various Pn attributes of the tag) and is ready for use within the page. The %ResultSet object is automatically closed at the end of the generated OnPage method.
For example, the following opens a %ResultSet object (named query) based on the ByName query within the Sample.Person class and then displays the results of the query in an HTML unordered list (<UL>):
<CSP:QUERY NAME="query" CLASSNAME="Sample.Person" QUERYNAME="ByName">
<UL>
<CSP:WHILE CONDITION="query.Next()">
<LI>#(query.Get("Name"))#
</CSP:WHILE>
</UL>
| https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCSP_CSP_QUERY | 2021-01-15T23:54:39 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.intersystems.com
Index Keywords
This reference describes the keywords that apply to an index, which you can define in persistent classes. These keywords (also known as class attributes) generally affect the compiler.
For general information on index definitions, see “Index Definitions.”
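As a brief, hypothetical illustration (the class and property names are invented), several of the keywords described below can be combined in an index definition of a persistent class:

Class Sample.Person Extends %Persistent
{
Property Name As %String;

Property SSN As %String;

Property State As %String;

/// A unique index on SSN, exposed to SQL under an alias
Index SSNKey On SSN [ Unique, SqlName = SSN_IDX ];

/// A bitmap index on a low-cardinality property (Type keyword)
Index StateIDX On State [ Type = bitmap ];
}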
- Condition – Defines a conditional index and specifies the condition that must be met for a record to be included in the index.
- SqlName – Specifies an SQL alias for the index.
- Type – Specifies the type of index.
- Unique – Specifies whether the index should enforce uniqueness. | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ROBJ_INDEX | 2021-01-15T23:41:38 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.intersystems.com |
The REV Control System is an affordable robotics control platform providing the interfaces required for building robots. These devices are most commonly used within the FIRST Tech Challenge (FTC), FIRST Global Challenge (FGC), and in the classroom for educational purposes.
This documentation is intended as the place to answer any questions related to the REV Robotics Control System, used in the FIRST Tech Challenge and FIRST Global Challenge.
Looking to get an idea of how to use the system before your Control Hub arrives? Reading through each section will help, but we specifically recommend the guides on getting started with the Control Hub and the programming language options section.
Have a specific question? Feel free to head straight to it using the navigation bar to the left. Each section is grouped with other topics that are similar.
Having trouble finding what you are looking for? Try the search bar in the upper right or read the section descriptions below to find the best fit.
Getting started building robots can be an intimidating process. The following documentation is here to make getting started a bit easier. There are a number of examples to get started with the Control System and we are committed to adding content to make it more accessible for people to use REV. If there is a question that is not answered by this space, send our support team an email; [email protected]. We are happy to help point you in the right direction.
This section contains information regarding all of the major mechanical specifications of the REV Control Hub and Expansion Hub. These sections include port pinout information, protection features, and the types of cables used with the devices.
Take the Control Hub or Expansion Hub from out of the box through generating the first configuration file. This includes the process for changing your Control Hub's Name and Password as well as connecting to your Driver Station. Also includes information on ways to add additional motors to the Control System through adding a SPARKmini Motor Controller or an Expansion Hub.
This section covers how the information needed to keep your Control System up to date and how to troubleshoot your Control System if issues arise.
From just getting started by writing your first Op Mode to working with closed loop control, this section covers the information needed to start programming.
Sensors are often vital for robots to gather information about the world around them. Use this section to find how to use REV sensors and information on the different sensor types. | https://docs.revrobotics.com/rev-control-system/ | 2021-06-12T17:48:36 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.revrobotics.com |
charAt
Summary
Extracts the character from a given position in the string.
Usage
public char+ charAt(int index)
Returns
The character at the given position. If the position is invalid, undefined will be returned.
Parameters
index
The position in the string to get the character.
Description
Extracts the character from a given position in the string. If the position is invalid (e.g. the provided position is a negative number or goes beyond the end of the string), undefined will be returned.
Performance
This method uses existent types for safety, which requires runtime bounds checking. In browser environments, where performance is critical, consider casting the string to external before calling charAt on it. In other environments, casting to external may not provide a discernable performance improvement, and it may even decrease performance.
Examples
Basic Usage
import System;

string abc = "abc";
Console.log(abc.charAt(0)); // `a`
Console.log(abc.charAt(1)); // `b`
Console.log(abc.charAt(2)); // `c`
| https://docs.onux.com/en-US/Developers/JavaScript-PP/Standard-Library/System/String/charAt | 2021-06-12T17:12:17 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.onux.com
The Website Settings allow you to configure how your journal’s website looks and operates. It consists of 3 main tabs for Appearance, Setup, and Plugins.
The theme determines the overall design or layout of your site. Several different theme options are available and you can try them out without affecting your site’s content or configuration.
First you can ensure that all available themes have been enabled on your site.
You can also look for additional themes in the Plugin Gallery and install and enable those.
Now that you have all available themes, return to the Appearance tab to try out different themes.
If you would like to make minor changes to your site’s design and layout, you can upload a Journal Stylesheet.
Hit Save to record your changes.
Use these fields to modify the text in the For Readers, For Authors, For Librarians pages on the journal website.
Remember to hit Save to record any changes.
To remove these fields and their contents from displaying publicly on the website’s user interface, deselect the Information Block in Website Settings > Appearance > Sidebar Management.
OJS is multilingual, which means that the interface, emails, and published content can be available in multiple languages and authors can make submissions in one or more languages on a single site or journal. When you install OJS, you can select one or more languages for your site.
Under Website Settings > Languages you can see a list of languages or locales installed on your site and configure how the languages are used in your journal. Consider carefully how you want to configure and use languages in your journal because significant problems can occur if you change the settings later.
Primary Locale: One language must be set as the primary locale, which means the language the journal appears in by default.
UI: If you want the journal’s interface to be available in other languages, select them here.
Additional languages can be installed on your site by an Administrator – see Chapter 4 for details.
If enabling multiple languages to appear in the UI, make sure that in Website Settings > Appearance > Sidebar Management the Language Toggle Block is selected to make that feature available to users.
This section allows you to configure your navigation menus, such as including new links.
Some menu item types will only be displayed under certain conditions. For example, the Login menu item type will link to your login page, but it will only appear in the menu when your website visitor is logged out. Similarly, the Logout menu item type will only appear when a website visitor is logged in.
When you assign a Menu Item with display conditions to a Menu, you will see an icon of an eye with a slash through it. You can click that icon to learn more about when it will be displayed or hidden.
This section allows you to create and display news announcements on the journal’s website.
Once the Announcements setting is enabled, click “Save.” An “Announcements” menu item now appears in the main navigation on the left hand side. Click on this menu item and select “Add Announcement.” Here you can include the title of the announcement, a short description and / or full text of the announcement, and an (optional) expiry date. If you wish to send an email notification to all users (who have not opted out of email notifications), select “Send notification email to all registered users.” The announcement should now appear on an “Announcements” tab on the public-facing journal site.
Limit the number of items (for example, submissions, users, or editing assignments) to show in a list before showing subsequent items on another page. Also, limit the number of links to display to subsequent pages of the list.
Enter the privacy statement you want to appear on your site.
This option allows for the configuration of different formats for dates and times for each journal and locale, which could previously only be set up in the ‘config.inc.php’ file. Note that the config.inc.php file can still be used to set the time and format across multiple journals, and the settings for the primary locale will be the default for other locales, unless otherwise configured. A custom format can be entered using the special format characters.
Use this page to see all of the installed plugins and find new plugins.
All of the plugins listed here are available in your OJS installation. Check the Enable link to use them.
You will notice that some plugins are required for the system and cannot be disabled.
Click the blue arrow next to the plugin name to reveal links to Delete, Upgrade, or Configure settings for the plugin.
The Plugin Gallery provides access to externally-created plugins, that may not be included in your OJS installation, but are available for download and activation. Only an Administrator user can install a new plugin.
Selecting the plugin title will provide additional details, including the author, status, description, and compatibility.
Sometimes new plugins or plugins that are developed by folks outside of PKP will not appear in the Plugin Gallery and you need to install them separately.
If upload fails you may get an error message that says, “The uploaded plugin archive does not contain a folder that corresponds to the plugin name.” Usually this means you have to change the name of the plugin folder inside the zipped folder to a more simple name. For example, change “translator-ojs-3_0_0-0” to “translator.”: | https://docs.pkp.sfu.ca/learning-ojs/en/settings-website | 2021-06-12T17:19:21 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.pkp.sfu.ca |
10. Vector Spatial Analysis (Buffers)¶
10.1. Overview¶
Spatial analysis uses spatial information to extract new and additional meaning from GIS data. Usually spatial analysis is carried out using a GIS Application. GIS Applications normally have spatial analysis tools for feature statistics (e.g. how many vertices make up this polyline?) or geoprocessing such as feature buffering. The types of spatial analysis that are used vary according to subject areas. People working in water management and research (hydrology) will most likely be interested in analysing terrain and modelling water as it moves across it. In wildlife management users are interested in analytical functions that deal with wildlife point locations and their relationship to the environment. In this topic we will discuss buffering as an example of a useful spatial analysis that can be carried out with vector data.
10.2. Buffering in detail¶
Buffering usually creates two areas: one area that is within a specified distance to selected real world features and the other area that is beyond. The area that is within the specified distance is called the buffer zone.
A buffer zone is any area that serves the purpose of keeping real world features distant from one another. Buffer zones are often set up to protect the environment, protect residential and commercial zones from industrial accidents or natural disasters, or to prevent violence. Common types of buffer zones may be greenbelts between residential and commercial areas, border zones between countries (see
There are several variations in buffering. The buffer distance or buffer size can vary according to numerical values provided in the vector layer attribute table for each feature. The numerical values have to be defined in map units according to the Coordinate Reference System (CRS) used with the data. For example, the width of a buffer zone along the banks of a river can vary depending on the intensity of the adjacent land use. For intensive cultivation the buffer distance may be bigger than for organic farming (see Figure Fig. 10.10 and Table table_buffer_attributes).
Table Buffer Attributes 1: Attribute table with different buffer distances to rivers based on information about the adjacent land use.
Buffers around polyline features, such as rivers or roads, do not have to be on both sides of the lines. They can be on either the left side or the right side of the line feature. In these cases the left or right side is determined by the direction from the starting point to the end point of line during digitising.
Buffer zones often have dissolved boundaries so that there are no overlapping areas between the buffer zones. In some cases though, it may also be useful for boundaries of buffer zones to remain intact, so that each buffer zone is a separate polygon and you can identify the overlapping areas (see Figure Fig. 10.12).
10.3.3. Buffering outward and inward¶
Buffer zones around polygon features are usually extended outward from a polygon boundary but it is also possible to create a buffer zone inward from a polygon boundary. Say, for example, the Department of Tourism wants to plan a new road around Robben Island and environmental laws require that the road is at least 200 meters inward from the coast line. They could use an inward buffer to find the 200 m line inland and then plan their road not to go beyond that line.
10.4. Common problems / things to be aware of¶
Most GIS Applications offer buffer creation as an analysis tool, but the options for creating buffers can vary. For example, not all GIS Applications allow you to buffer on either the left side or the right side of a line feature, to dissolve the boundaries of buffer zones or to buffer inward from a polygon boundary.
A buffer distance always has to be defined as a whole number (integer) or a decimal number (floating point value). This value is defined in map units (meters, feet, decimal degrees) according to the Coordinate Reference System (CRS) of the vector layer.
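In QGIS, for example, a buffer can be generated from the Python console with the native:buffer processing algorithm. The sketch below is only an illustration: the input layer and output path are hypothetical, and the distance of 200 is interpreted in the map units of the layer's CRS.

import processing  # available inside the QGIS Python environment

# Buffer every feature by 200 map units and dissolve overlapping buffer zones
result = processing.run(
    "native:buffer",
    {
        "INPUT": "rivers.shp",            # hypothetical input vector layer
        "DISTANCE": 200,                  # in the layer CRS's map units
        "SEGMENTS": 10,                   # smoothness of the buffer boundary
        "DISSOLVE": True,                 # merge overlapping buffer zones
        "OUTPUT": "buffered_rivers.gpkg"  # hypothetical output file
    },
)

Using a negative DISTANCE shrinks polygon features, which corresponds to the inward buffer described earlier.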
10.5. More spatial analysis tools¶
Buffering is an important and often used spatial analysis tool, but there are many others that can be used in a GIS and explored by the user.
Spatial overlay is a process that allows you to identify the relationships between two polygon features that share all or part of the same area. The output vector layer is a combination of the input features information (see. | https://docs.qgis.org/testing/en/docs/gentle_gis_introduction/vector_spatial_analysis_buffers.html | 2021-06-12T18:23:11 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['../../_images/buffer_zone.png', '../../_images/buffer_zone.png'],
dtype=object)
array(['../../_images/overlay_operations.png',
'../../_images/overlay_operations.png'], dtype=object)] | docs.qgis.org |
Configuration
Align Business Structure and Business Requirements
In some businesses, the way you define Departments and Processes in iWD will directly align with how the business views distribution and reporting.
In other cases, consider aligning Departments and Processes with your reporting requirements and use Genesys skills to align with distribution. This is the recommended approach because the Departments and Processes can then be used as input in the Data Mart plug-ins—that is, the pre-defined attributes of Department and Process can be used to support the reporting metrics and dimensions. This makes it easier to provide statistics from a business point of view.
Consider Using Multiple iWD Tenants
Consider configuring more than one iWD managed tenant, where each tenant aligns to a different business unit. This allows you to configure dedicated custom attributes in iWD Data Mart for each business unit. It also reduces the amount of data iWD Data Mart has to process from the Interaction Server Event Log database. This means you will need to set up multiple iWD Data Mart instances, but this configuration is more scalable.
Load Balance GRE in High Volume Deployments
If your iWD solution has particularly high volumes or uses frequent reprioritization, it might be useful to set up a cluster of Genesys Rules Engines (GRE) in a load-balanced configuration. Consider updating the out-of-the-box IWDBP business process to add a subroutine with this type of load balancing, with multiple runtime nodes within the solution. You can make the number of retry attempts configurable as a strategy variable or within a List Object so the value can be modified without changing the strategy itself. | https://docs.genesys.com/Documentation/IWD/latest/Dep/BestPracConfiguration | 2021-06-12T17:10:58 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.genesys.com |
Handling versions which change over time
There are many situations when you want to use the latest version of a particular module, or when the module you request can change over time even for the same version, a so-called changing version.
An example of this type of changing module is a Maven SNAPSHOT module, which always points at the latest artifact published.
In other words, a standard Maven snapshot is a module that is continually evolving; it is a "changing module".
Declaring a dynamic version
Projects might adopt a more aggressive approach for consuming dependencies to modules. For example you might want to always integrate the latest version of a dependency to consume cutting edge features at any given time. A dynamic version allows for resolving the latest version or the latest version of a version range for a given module.
plugins {
    id 'java-library'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework:spring-web:5.+'
}
plugins {
    `java-library`
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework:spring-web:5.+")
}
A build scan can effectively visualize dynamic dependency versions and their respective, selected versions.
By default, Gradle caches dynamic versions of dependencies for 24 hours. Within this time frame, Gradle does not try to resolve newer versions from the declared repositories. The threshold can be configured as needed for example if you want to resolve new versions earlier.
Declaring a changing version
A team might decide to implement a series of features before releasing a new version of the application or library. A common strategy to allow consumers to integrate an unfinished version of their artifacts early and often is to release a module with a so-called changing version. A changing version indicates that the feature set is still under active development and hasn’t released a stable version for general availability yet.
In Maven repositories, changing versions are commonly referred to as snapshot versions.
Snapshot versions contain the suffix -SNAPSHOT.
The following example demonstrates how to declare a snapshot version on the Spring dependency.
plugins {
    id 'java-library'
}

repositories {
    mavenCentral()
    maven {
        url 'https://repo.spring.io/snapshot/'
    }
}

dependencies {
    implementation 'org.springframework:spring-web:5.0.3.BUILD-SNAPSHOT'
}
plugins {
    `java-library`
}

repositories {
    mavenCentral()
    maven {
        url = uri("https://repo.spring.io/snapshot/")
    }
}

dependencies {
    implementation("org.springframework:spring-web:5.0.3.BUILD-SNAPSHOT")
}
By default, Gradle caches changing versions of dependencies for 24 hours. Within this time frame, Gradle does not try to resolve newer versions from the declared repositories. The threshold can be configured as needed for example if you want to resolve new snapshot versions earlier.
Gradle is flexible enough to treat any version as a changing version, e.g. if you wanted to model snapshot behavior for an Ivy module. All you need to do is to set the property ExternalModuleDependency.setChanging(boolean) to true.
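A minimal sketch of this in the Groovy DSL, with hypothetical module coordinates, looks as follows:

dependencies {
    implementation('org.mycompany:mylib:1.0') {
        // Tell Gradle to treat this module as changing, even though its version string is fixed
        changing = true
    }
}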
Controlling dynamic version caching
By default, Gradle caches dynamic versions and changing modules for 24 hours. During that time frame Gradle does not contact any of the declared, remote repositories for new versions. If you want Gradle to check the remote repository more frequently or with every execution of your build, then you will need to change the time to live (TTL) threshold.
You can override the default cache modes using command line options. You can also change the cache expiry times in your build programmatically using the resolution strategy.
Controlling dependency caching programmatically
You can fine-tune certain aspects of caching programmatically using the ResolutionStrategy for a configuration. The programmatic approach is useful if you would like to change the settings permanently.
By default, Gradle caches dynamic versions for 24 hours. To change how long Gradle will cache the resolved version for a dynamic version, use:
configurations.all { resolutionStrategy.cacheDynamicVersionsFor 10, 'minutes' }
configurations.all { resolutionStrategy.cacheDynamicVersionsFor(10, "minutes") }
By default, Gradle caches changing modules for 24 hours. To change how long Gradle will cache the meta-data and artifacts for a changing module, use:
configurations.all { resolutionStrategy.cacheChangingModulesFor 4, 'hours' }
configurations.all { resolutionStrategy.cacheChangingModulesFor(4, "hours") }
Controlling dependency caching from the command line
Avoiding network access with offline mode.
Refreshing dependencies
You can control the behavior of dependency caching for a distinct build invocation from the command line. Command line options are helpful for making a selective, ad-hoc choice for a single execution of the build.
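For example, assuming the Gradle wrapper is used, the two most common switches look like this:

# Ignore the cache TTLs for dynamic and changing versions on this one invocation
$ ./gradlew build --refresh-dependencies

# Resolve everything from the dependency cache and never touch the network (offline mode)
$ ./gradlew build --offline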
At times, the Gradle Dependency Cache can become out of sync with the actual state of the configured repositories, and you may want Gradle to check the remote repositories again for:
new versions of dynamic dependencies
new versions of changing modules (modules which use the same version string but can have different contents). | https://docs.gradle.org/current/userguide/dynamic_versions.html | 2021-06-12T18:38:44 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['img/dependency-management-dynamic-dependency-build-scan.png',
'dependency management dynamic dependency build scan'],
dtype=object) ] | docs.gradle.org |
Ready to go live with your Legal Monster setup? This checklist provides you with an overview of what items must be ready before you go live.
Most importantly, your agreements must be complete and published. For example, users need to be able to read your cookie policy and privacy policy, when they consent to your cookies.
Make sure you have:
A Privacy Policy
A Cookie Policy
A Terms of Service Policy, if you use our widgets for consent and signup
Each widget must be correctly configured to be legally compliant. Make sure that your widgets are configured as intended:
Make sure that your audience is correctly set - business or consumer
Your widget is connected to the correct agreements.
All cookies are categorised for your cookie widget
These are legally binding, so make sure that your agreements are paired correctly with your widgets for signup, marketing and/or cookies. For instance, if you have a "Signup for a service"-widget, you'll need to include both a privacy policy and your terms of service.
The widgets need to be installed correctly. We highly recommend adding some UI tests to your test suite to make sure that the widgets are shown and working.
Follow our technical documentation to install the widgets properly.
Cookie widget is shown on the first visit to your site
For your Legal monster integration to be compliant, you must block any non-necessary cookies until the user has given consent. Make sure that no cookies are set on your page when users first visit and have not yet given consent. We have written a small guide on how to block third-party cookies in our technical documentation.
Our widgets support different languages and jurisdictions. You should select the option that is relevant for your website.
If you do not specify a language, or if the language you set is not supported by us in the visitor's jurisdiction, English will be used.
You can see all the supported languages in this table here.
If you are on one of our paid plans, you have the option to customise the styling of our cookie widget. This includes changing the button colours and using a link instead of a shield to access cookie settings. You can find all the options, and how to customise in this guide.
Ready, set, go! You're now ready to become a platin data citizen. Forget worrying about legal compliance, and know that our privacy mascot Sven is taking care of everything for you. | https://docs.legalmonster.com/getting-started-1/checklist | 2021-06-12T17:39:16 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.legalmonster.com
logPurchaseEvents()
This method logs all application purchase events in the Hurree platform, which is then used to produce analytics.
To add the method
- Open your project.
- You can add the method anywhere, but we recommend that you open the controllers.js file in the js folder.
- Add the method code wherever you have your payment gateway code:
window.plugins.hurreSDK.logPurchaseEvents( successCallback, errorCallback, {Purchase_Event_Payload} )
Arguments
These are the arguments for the method.
- successCallback - The Success Callback function is called following a successful log and includes a success message
- errorCallback - The Error Callback function is called in case of an error, and includes an error message
- Purchase_Event_Payload - The {Purchase_Event_Payload}'s keys are listed below
Object Keys
Sample Code
window.plugins.hurreSDK.logPurchaseEvents(
    function(data){
        //success callback function
    },
    function(err){
        //error callback function
    },
    {
        productIdentifier: "PRODUCT CODE WHICH IS PURCHASED",
        currencyCode: "USD",
        price: "1.99",
        quantity: "1"
    }
);
| https://docs.hurree.co/phonegap_sdk/methods_and_arguments/logpurchaseevents.html | 2017-08-16T21:46:20 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.hurree.co
About 6 months ago I posted about the ‘end of the medical conference‘ after a fairly humdrum experience at Rural Medicine Australia 2012 in Perth. This negativity was picked up by others, notably EM-IM Doc
SMACC2013 has changed all that.
No doubt Minh over at prehospitalmed.com and the lifeinthefastlane.com crew will feed out snippets, so I won't give an exhaustive breakdown.
Suffice it to say, SMACC2013 was noticeable for
– incredible collegiality between ED, ICU, prehospital and rural clinicians, be they student, paramedic, nurse or doctor
– opportunity for meritocracy-type interaction between colleagues from both Australia and overseas
– memorable for Gerard Fennessy delivering part of his anaphylaxis talk in song (ya’ muppet)
– moving and inspiring talks from the likes of Weingart, Lex and Reid
– SCAT paramedics abseiling in from the rafters as GSA-HEMS (led by Karel Habig) made a surprise last minute showing in SimWars
– excellent organisation and venue. Thanks to the organising committee for all their hard work.
SMACC2014 will be in Brisbane 17-19 March 2014. Book now. Rural doctors – you NEED to embrace FOAMed. Casey "not the porn star" Parker and I have been banging on about this now for some time – we need to raise the bar and bring "quality care, out there"
So, my mission for 2013-14
(i) to try and persuade ACRRM and RDAA that we need to embrace FOAMed as rural doctors
(ii) to try and implement an Australian version of BASICS – to close the current gap in rural prehospital care (rural docs with airway/resus skills are well placed to provide appropriate interventions before retrieval arrive – but need to be equipped, trained and have formal call out criteria)
(iii) to ensure that EVERY small rural ED (and better still , every ED, ICU in Australia) is familiar with and engaged in processes such as
– Toby Fogg’s excellent airway registry
– RSI kit dump and checklists
– adequate difficult airway kit and knowing how to use it
– checklists for crisis management and ‘logistics not strategy‘
– regular sim and resus room management training
– NO-DESAT apnoeic diffusion oxygenation and DASH-1a for all RSIs, whether in ED, OT, ICU or prehospital
None of the above ideas (and more) will be unfamiliar to FOAMites – indeed, they’ve been bounced around for a few years now..but still they are not out there where they are most needed. This must change.
Concomitant with that will be more FOAMed relevant to rural docs – not just ‘airway’ and ‘shock’ but of relevance to critical care in the bush
Might seem a big ask..but I can dream.
After all, critical illness does not respect geography!
Fantastic
Loved the conference – sad we did not get to catch up such was the volume of people present.
Agreed that this has changed the way I feel about medical conferences and can’t wait for the next one.
Your agenda over the next year is very ambitious, but I am sure the FOAMed community will help in any way they can to make it all possible
Thanks for your great reporting of SMACC
Mike
Pingback: The LITFL Review 099 - Life in the Fast Lane medical education blog | https://ki-docs.com/2013/03/13/smacc2013-critical-illness-does-not-respect-geography/ | 2017-08-16T21:30:46 | CC-MAIN-2017-34 | 1502886102663.36 | [] | ki-docs.com |
Use the Library view to reuse your artwork and animation in other scenes or to build props and puppets.
A library is a folder where you store your templates and symbols. You can access these folders from different projects. Using the library is easy, just drag the content into the library to store your artwork and then drag it into your Timeline or Camera view when you want to reuse it.
Organize your library using subfolders. You can keep several different library folders on your hard drive or network.
A symbol is a container or construction block used to build your props, puppets and looping clips. You can use symbols to contain artwork and animation, to manipulate them as a single object, or as a case holder where you will put one. If you drag a symbol into your scene several times in the Timeline view, they will all be linked to the original one. If you modify one, they will all be modified.
A symbol is local to the project and cannot be accessed directly from other scenes. To reuse a symbol's content into another scene, you must create a template out of it.
When a symbol is exposed in the Timeline view, the symbol's cells are represented as a movie strip.
A template is an individual copy of the artwork stored in the library. This package can be reused in different scenes. Once a template is stored in the library it can be accessed from any project.
Dragging a template into your scene copies the content in your Timeline and does not link it to the original. This individual copy can be modified at will.
Related Topics | http://docs.toonboom.com/help/harmony/Content/HAR/Stage/010_Library/001_H1_Understanding_the_Library_Concept.html | 2017-08-16T21:35:48 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.toonboom.com |
Elasticsearch Mesos Framework
- Roadmap
- Getting Started
- Users Guide
- Developers Guide
- Support
- Sponsors
- License
Roadmap
Features
- Deployment ✓
- Durable cluster topology (via ZooKeeper) ✓
- Web UI on scheduler port 31100 ✓
- Support deploying multiple Elasticsearch clusters to single Mesos cluster ✓
- Fault tolerance ✓
- Customised ES configuration ✓
- Configurable data directory ✓
- Scale cluster horizontally ✓
- Snapshot and restore ✓
[Future]
- High availability (master, indexer, replica)
- Upgrading configuration
- Scale cluster vertically
- Upgrade
- Rollback
Blocked features
Developer Tools
- Local environment (Docker-machine) ✓
- Rapid code + test (Mini Mesos) ✓
- Build automation (Gradle and Jenkins) ✓
User tools
- One JSON post to marathon install ✓
Getting Started
We recommend that users install via marathon, using a docker container.
This framework requires:
- A running Mesos cluster on version 0.25.0
- The use of Marathon is strongly recommended to provide resiliency against scheduler failover.
- That the slaves have routable IP addresses. The ES ports are exposed on the slaves, so that the ES cluster can discover each other. Please double check that your slaves are routable.
Users Guide
Mesos version support
The framework currently supports Mesos 0.25.0. It may work with newer or older versions of Mesos, but the tests are only performed on this version.
How to install on Marathon
Create a Marathon file like the one below and fill in the IP addresses and other configuration. This is the minimum viable command; there are many more options.
{ "id": "elasticsearch", "container": { "docker": { "image": "mesos/elasticsearch-scheduler", "network": "HOST" } }, "args": ["--zookeeperMesosUrl", "zk://ZOOKEEPER_IP_ADDRESS:2181/mesos"], "cpus": 0.2, "mem": 512.0, "env": { "JAVA_OPTS": "-Xms128m -Xmx256m" }, "instances": 1 }
Then post to marathon to instantiate the scheduler:
curl -k -XPOST -d @marathon.json -H "Content-Type: application/json" http://MARATHON_IP_ADDRESS:8080/v2/apps
Note: the JAVA_OPTS line is required. If this is not set, then the Java heap space will be incorrectly set.
Other command line options include:
Usage: (Options preceded by an asterisk are required) [options] Options: --dataDir The host data directory used by Docker volumes in the executors. [DOCKER MODE ONLY] Default: /var/lib/mesos/slave/elasticsearch --elasticsearchBinaryUrl The elasticsearch binary to use (Must be tar.gz format). E.g. '' [JAR MODE ONLY] Default: <empty string> --elasticsearchClusterName Name of the elasticsearch cluster Default: mesos-ha --elasticsearchCpu The amount of CPU resource to allocate to the elasticsearch instance. Default: 1.0 --elasticsearchDisk The amount of Disk resource to allocate to the elasticsearch instance (MB). Default: 1024.0 --elasticsearchDockerImage The elasticsearch docker image to use. E.g. 'elasticsearch:latest' [DOCKER MODE ONLY] Default: elasticsearch:latest --elasticsearchNodes Number of elasticsearch instances. Default: 3 --elasticsearchPorts Override Mesos provided ES HTTP and transport ports. Format `HTTP_PORT,TRANSPORT_PORT` (comma delimited, both required). Default: <empty string> --elasticsearchRam The amount of ram resource to allocate to the elasticsearch instance (MB). Default: 256.0 --elasticsearchSettingsLocation Path or URL to ES yml settings file. Path example: '/var/lib/mesos/config/elasticsearch.yml'. URL example: ''. In Docker mode a volume will be created from /tmp/config in the container to the directory that contains elasticsearch.yml. Default: <empty string> --executorForcePullImage Option to force pull the executor image. [DOCKER MODE ONLY] Default: false --executorName The name given to the executor task. Default: elasticsearch-executor --externalVolumeDriver Use external volume storage driver. By default, nodes will use volumes on host. Default: <empty string> --externalVolumeOptions External volume driver options. Default: <empty string> --frameworkFailoverTimeout The time before Mesos kills a scheduler and tasks if it has not recovered (ms). Default: 2592000.0 --frameworkName The name given to the framework. Default: elasticsearch --frameworkPrincipal The principal to use when registering the framework (username). Default: <empty string> --frameworkRole Used to group frameworks for allocation decisions, depending on the allocation policy being used. Default: * --frameworkSecretPath The path to the file which contains the secret for the principal (password). Password in file must not have a newline. Default: <empty string> --frameworkUseDocker The framework will use docker if true, or jar files if false. If false, the user must ensure that the scheduler jar is available to all slaves. Default: true --javaHome When starting in jar mode, if java is not on the path, you can specify the path here. [JAR MODE ONLY] Default: <empty string> --useIpAddress If true, the framework will resolve the local ip address. If false, it uses the hostname. Default: false --webUiPort TCP port for web ui interface. Default: 31100 --zookeeperMesosTimeout The timeout for connecting to zookeeper for Mesos (ms). Default: 20000 * --zookeeperMesosUrl Zookeeper urls for Mesos in the format zk://IP:PORT,IP:PORT,...) Default: zk://mesos.master:2181
Framework Authorization
To use framework Auth, and if you are using docker, you must mount a docker volume that contains your secret file. You can achieve this by passing volume options to marathon. For example, if you wanted to use the file located at
/etc/mesos/frameworkpasswd, then could use the following:
... "docker": { "image": "mesos/elasticsearch-scheduler", "network": "HOST" }, "volumes": [ { "containerPath": "/etc/mesos/frameworkpasswd", "hostPath": "/etc/mesos/frameworkpasswd", "mode": "RO" } ] }, ...
Please note that the framework password file must only contain the password (no username) and must not have a newline at the end of the file. (Marathon bugs)
Using JAR files instead of docker images
It is strongly recommended that you use the containerized version of Mesos Elasticsearch. This ensures that all dependencies are met. Limited support is available for the jar version, since many issues are due to OS configuration. However, if you can't or don't want to use containers, use the raw JAR files in the following way:
1. Requirements: Java 8, Apache Mesos.
2. Set the CLI parameter frameworkUseDocker to false. Set the javaHome CLI parameter if necessary.
3. Run the jar file manually, or use marathon. Normal command line parameters apply. For example:
{ "id": "elasticsearch", "cpus": 0.2, "mem": 512, "instances": 1, "cmd": "java -jar scheduler-1.0.0.jar --frameworkUseDocker false --zookeeperMesosUrl zk://10.0.0.254:2181/mesos --frameworkName elasticsearch --elasticsearchClusterName mesos-elasticsearch --elasticsearchCpu 1 --elasticsearchRam 1024 --elasticsearchDisk 1024 --elasticsearchNodes 3 --elasticsearchSettingsLocation /home/ubuntu/elasticsearch.yml", "uris": [ "" ], "env": { "JAVA_OPTS": "-Xms256m -Xmx512m" }, "ports": [31100], "requirePorts": true, "healthChecks": [ { "gracePeriodSeconds": 120, "intervalSeconds": 10, "maxConsecutiveFailures": 6, "path": "/", "portIndex": 0, "protocol": "HTTP", "timeoutSeconds": 5 } ] }
Jars are available under the releases section of GitHub.
External volumes
The elasticsearch database can be given a third layer of data resiliency (in addition to sharding and replication) by using external volumes. External volumes are storage devices that are mounted externally to the application, for example AWS's EBS volumes. To enable this feature, simply specify the docker volume plugin that you wish to use, for example --externalVolumeDriver rexray. This will create volumes prefixed with the framework name and a numeric ID of the node, e.g. elasticsearch0data. Volume options can be passed using the --externalVolumeOptions parameter.
The most difficult part is setting up a docker volume plugin. The next few sections will describe how to setup the "rexray" docker volume plugin.
Docker mode installation of the rexray docker volume plugin
In docker mode, the applications rexray and dvdcli must be installed on each agent.
Below is a script that will install these applications for you on AWS. Ensure the following AWS credentials are exported on your host: $TF_VAR_access_key and $TF_VAR_secret_key. To use, simply run the script with an argument pointing to an agent. E.g.
./installRexray.sh url.or.ip.to.agent.
Then to use external volumes, simply pass the required argument. Below is an example marathon json:
{ "id": "es-rexray", "cpus": 1.0, "mem": 512, "instances": 1, "args": [ "--zookeeperMesosUrl", "zk://$MASTER:2181/mesos", "--elasticsearchCpu", "0.5", "--elasticsearchRam", "1024", "--elasticsearchDisk", "1024", "--externalVolumeDriver", "rexray" ], "env": { "JAVA_OPTS": "-Xms32m -Xmx256m" }, "container": { "type": "DOCKER", "docker": { "image": "mesos/elasticsearch-scheduler:latest", "network": "HOST", "forcePullImage": true } }, "ports": [31100], "requirePorts": true }
Jar mode installation of the rexray docker volume plugin
It is possible to use external volumes in jar mode by using the mesos-module-dvdi project, or the mesos-flocker project. Testing has been performed with the mesos-module-dvdi project.
The following script (in addition to the previous docker script) will install the required software. To use, simply run the script with an argument pointing to an agent. E.g.
./installRexrayLib.sh url.or.ip.to.agent.
Data directory
The ES node data can be written to a specific directory. If in docker mode, use the --dataDir option. If in jar mode, set the path.data option in your custom ES settings file.
The cluster name and slaveID will be appended to the end of the data directory option. This allows users with a shared network drive to write node specific data to their own separate location. For example, if the user specifies a data directory of /var/lib/data, then the data for the agent with a Slave ID of S1 will be written to /var/lib/data/mesos-ha/S1.
User Interface
The web based user interface is available on port 31100 of the scheduler by default. It displays real time information about the tasks running in the cluster and a basic configuration overview of the cluster.
The user interface uses REST API of the Elasticsearch Mesos Framework. You can find the API documentation here: docs.elasticsearchmesos.apiary.io.
Cluster Overview
Cluster page shows on the top the number of Elasticsearch nodes in the cluster, the overall amount of RAM and disk space allocated by the cluster. State of individual nodes is displayed in a bar, one color representing each state and the percentage of nodes being in this state.
Below you can see Performance Overview with the following metrics over time: number of indices, number of shards, number of documents in the cluster and the cluster data size.
Scaling
This simple interface allows you to specify a number of nodes to scale to.
Tasks List
Tasks list displays detailed information about all tasks in the cluster, not only those currently running, but also tasks being staged, finished or failed. Click through individual tasks to get access to Elasticsearch REST API.
Configuration
This is a read-only interface displaying an overview of the framework configuration.
Query Browser
Query Browser allows you to examine data stored on individual Elasticsearch nodes. In this example we searched for the word "Love" on the slave1 node. You can toggle between tabular view and raw results view mode, which displays the raw data returned from the Elasticsearch /_search API endpoint.
Known issues
Developers Guide
For developers, we have provided a range of tools for testing and running the project. Check out the minimesos project for an in-memory Mesos cluster for integration testing.
Quickstart
You can run Mesos-Elasticsearch using minimesos, a containerized Mesos cluster for testing frameworks.
How to run on Linux
Requirements
- Docker
$ ./gradlew build buildDockerImage system-test:main
How to run on Mac
Requirements
- Docker Machine
$ docker-machine create -d virtualbox --virtualbox-memory 4096 --virtualbox-cpu-count 2 mesos-es
$ eval $(docker-machine env mesos-es)
$ sudo route delete 172.17.0.0/16; sudo route -n add 172.17.0.0/16 $(docker-machine ip mesos-es)
$ ./gradlew build buildDockerImage system-test:main
System test
The project contains a system-test module which tests if the framework interacts correctly with Mesos, using minimesos. We currently test Zookeeper discovery and the Scheduler's API by calling endpoints and verifying the results. As the framework grows we will add more system tests.
How to run system tests on Linux
Requirements
- Docker
Run all system tests
$ ./gradlew build buildDockerImage system-test:systemTest
Run a single system test
$ ./gradlew -DsystemTest.single=DiscoverySystemTest system-test:systemTest
How to release
- First update the CHANGELOG.md by listing fixed issues and bugs
- Update the version number in the Configuration.class so that the Web UI shows the correct version number.
- Push changes
- Verify that the Continuous Build Pipeline completes successfully.
- Run the Release Build and pick a release type: patch, minor or major.
- Done!
Support
Get in touch with the Elasticsearch Mesos framework developers via [email protected]
Sponsors
This project is sponsored by Cisco Cloud Services
License
Apache License 2.0 | http://mesos-elasticsearch.readthedocs.io/en/latest/ | 2017-08-16T21:24:19 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['./screenshot-cluster.png', 'Cluster Overview'], dtype=object)
array(['./screenshot-scaling.png', 'Scaling'], dtype=object)
array(['./screenshot-tasks.png', 'Tasks List'], dtype=object)
array(['./screenshot-configuration.png', 'Configuration'], dtype=object)
array(['./screenshot-query-browser.png', 'Query Browser'], dtype=object)] | mesos-elasticsearch.readthedocs.io |