content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Exporting Snapshot Difference Results to .csv Download CSV on New Snapshot Difference - To download differences in a csv file, create a Snapshot Differences record and click on Calculate Differences: - Once all differences are calculated, you will find at the top and at the bottom of the results grid the Download CSV button to download results as a .csv file: - Click on Download CSV to download the SnapshotDifferenceCSV.csv file. You can now resolve differences more easily. Download CSV on Existing Snapshot Differences When you click on an existing Snapshot Differences record, you can see Download CSV: If you click on Calculate Differences you will recalculate differences so you can get a new difference .csv file to download.
https://docs.copado.com/article/hmyqktkf8q-exporting-snapshot-difference-results-to-csv
2021-10-16T11:19:44
CC-MAIN-2021-43
1634323584567.81
[array(['https://files.helpdocs.io/U8pXPShac2/articles/hmyqktkf8q/1575391219874/calculate-differences.png', None], dtype=object) ]
docs.copado.com
Depending on your requirement, you can increase the size of an online LUN. Online resizing of the LUN is enabled only if all hosts in the cluster are upgraded to vSAN 6.7 Update 3 or later. Procedure - In the vSphere Client, navigate to the vSAN cluster. - Click the Configure tab. - Under vSAN, click iSCSI Target Service. - Click the iSCSI Targets tab and select a target. - In the vSAN iSCSI LUNs section, select a LUN and click Edit. The Edit LUN dialog box is displayed. - Increase the size of the LUN depending on your requirement. - Click OK.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.virtualsan.doc/GUID-C14EF9B3-7458-44DA-9B94-A9623EF0810B.html
2021-10-16T12:36:12
CC-MAIN-2021-43
1634323584567.81
[]
docs.vmware.com
Prerequisites You must complete the following prerequisites to configure an application with CloudWatch Application Insights: Amazon SSM enablement. You must install Systems Manager Agent (SSM Agent), and your instances must be SSM enabled. For steps on how to install the SSM Agent, see Setting Up Amazon SSM. EC2 instance role. You must attach the AmazonSSMManagedInstanceCorerole to enable Systems Manager (see Using Identity-based Policies (IAM Policies) for AAmazon SSM) and the CloudWatchAgentServerPolicyto enable instance metrics and logs to be emitted through CloudWatch. See Create IAM Roles and Users for Use With CloudWatch Agent for more information. Amazon resource groups. To onboard your applications to CloudWatch Application Insights, you must create a resource group that includes all associated Amazon resources used by your application stack. This includes application load balancers, EC2 instances running IIS and web front‐end, .NET worker tiers, and your SQL Server database. CloudWatch Application Insights automatically includes Auto Scaling groups using the same tags or CloudFormation stacks as your resource group, because Auto Scaling groups are not currently supported by resource groups. For more information, see Getting Started with Amazon Resource Groups. IAM permissions: For non‐admin users, you must create an Amazon Identity and Access Management (IAM) policy that allows Application Insights to create a service-linked role, and attach it to your user identity. For steps on attaching the policy, see IAM policy. Service-linked role: CloudWatch Application Insights uses Amazon Identity and Access Management (IAM) service‐linked roles. A service‐linked role is created for you when you create your first CloudWatch Application Insights application in the Amazon Management Console. For more information, see Using service-linked roles for CloudWatch Application Insights. Performance Counter metrics support for EC2 Windows instances: To monitor Performance Counter metrics on your EC2 Windows instances, Performance Counters must be installed on the instances. For Performance Counter metrics and corresponding Performance Counter set names, see Performance Counter metrics. For more information about Performance Counters, see Performance Counters .
https://docs.amazonaws.cn/en_us/AmazonCloudWatch/latest/monitoring/appinsights-prereqs.html
2021-10-16T12:55:55
CC-MAIN-2021-43
1634323584567.81
[]
docs.amazonaws.cn
Changelog in Kissflow This is the log of all the changes in Kissflow, including scheduled changes and planned deprecations. Effective from March 31, 2020 Webhook payload changes When you send Kissflow process data to a webhook, the payload format for some widgets are changed. Now, we have matched the webhook’s payload format with the API’s payload format. Field validations changes Kissflow will start to audit the field validation when submitting a process item through integrations or APIs. For example, if your form has required fields and field validations, they will now be verified when submitting an item via a Kissflow Integration, API, or Zapier. Step triggers changes When you use step triggers, only the fields visible for that particular step will be displayed for mapping. - When an item crosses it's deadline at this step - When an item is reassigned at this step Expect these, all the other step triggers will display the fields visible for that particular step.When you use process triggers, all the form fields will be displayed for mapping irrespective of the fields hidden under permission. Deprecation of step triggers We are deprecating these step triggers: - When an item is rejected at this step - When an item is sent back at this step - When an item is withdrawn at this step - Item is submitted at this step We suggest you to use the trigger ‘When an item exits this step’ which fires in these scenarios. - advances to the next step - is sent back to a previous step - is rejected, OR - is withdrawn You can find out which of these scenarios caused the trigger. This will be available as part of the payload information as shown in the example: { "_last_action_performed_by": [ {“_id”: "test", "Name": "Test"} ], "_last_action_performed_at": "2020-02-28T08:40:56Z", "_last_action": "Reject", | “Submit”, | “Sent Back”, |“Withdraw”, | “Reassign” "_last_action_note": "Reason for the Rejection" "_flow_id": "ProcessId" }
https://docs.kissflow.com/article/ffgc14a0w0-changelog-in-kissflow
2021-10-16T12:09:54
CC-MAIN-2021-43
1634323584567.81
[]
docs.kissflow.com
Class passed to an error. More... #include <opencv2/core.hpp> Class passed to an error. This class encapsulates all or almost all necessary information about the error happened in the program. The exception is usually constructed and thrown implicitly via CV_Error and CV_Error_ macros. Default constructor Full constructor. Normally the constructor is not called explicitly. Instead, the macros CV_Error(), CV_Error_() and CV_Assert() are used. error description source file name where the error has occurred function name. Available only when the compiler supports getting it line number in the source file where the error has occurred the formatted error message
https://docs.opencv.org/3.4.13/d1/dee/classcv_1_1Exception.html
2021-10-16T11:41:37
CC-MAIN-2021-43
1634323584567.81
[]
docs.opencv.org
Experiment Saving One of the main uses of aitoolbox.experiment package is tracking experiments and saving models and result as the training progresses. This way the executed is well documented and its parameters and details can easily be determined even a long time after the original training. The components in this package basically help prevent training the model, then doing some other experiments and coming back to the original experiment one month later and having no clue what was actually the experiment setting, what was the performance and how exactly were the results produced. Experiment Saver AIToolbox experiment savers at their core handle the creation of the experiment folder into which all the experiment results and model are automatically saved in a structured way which helps the experiment traceability. Experiment savers represent the high-level experiment tracking API and generally fall into two main categories: cloud experiment savers and local-only experiment savers. Local-only experiment savers in aitoolbox.experiment.local_experiment_saver are simpler and only save the the experiment results onto the local drive. Cloud-enabled experiment savers in aitoolbox.experiment.experiment_saver are an extension in sense that they in addition to tracking the experiment on the local drive also automatically take care of uploading all the produced results and models to the cloud storage. This is especially useful when shutting automatically down the GPU instance after the training is finished. By using cloud experiment saver, all the experiment results are safely and automatically persisted in the cloud storage even after all the locally produced results are deleted when the instance is terminated. Cloud enabled experiment savers: aitoolbox.experiment.experiment_saver.FullPyTorchExperimentS3Saver aitoolbox.experiment.experiment_saver.FullKerasExperimentS3Saver aitoolbox.experiment.experiment_saver.FullPyTorchExperimentGoogleStorageSaver aitoolbox.experiment.experiment_saver.FullKerasExperimentGoogleStorageSaver Local-only experiment savers: aitoolbox.experiment.local_experiment_saver.FullPyTorchExperimentLocalSaver aitoolbox.experiment.local_experiment_saver.FullKerasExperimentLocalSaver A very convenient property all the experiment savers have is that they all implement the same user facing API which makes them ideal for easy use as part of the larger system. Due to the unified API different experiment saver types can be easily dynamically exchanged according to desired training scenarios without any need to modify the surrounding code. The core API function that is common to all the experiment savers used to initiate the experiment snapshot saving is aitoolbox.experiment.experiment_saver.AbstractExperimentSaver.save_experiment(). Local Save While experiment savers described above serve as the high-level experiment tracking API, the local model and results savers are the low-level components on top of which the experiment saver API is built. Most users will probably very often just use the experiment savers, however in certain use cases the use of more low level components could still be desired. The local experiment data saving low-level components can be found in the aitoolbox.experiment.local_save subpackage. They handle all the experiment tracking tasks ranging from experiment folder structuring, neural model weights & optimizer state saving and all the way to tracked results packaging before finally saving to the local drive. Local Model Save Implementations of model saving logic to the local drive. Currently available model savers: Local Results Save Implementation of training results saving logic to the local drive available in aitoolbox.experiment.local_save.local_results_save.LocalResultsSaver. This class offers two main options to save produced experiment results: saving all results into the single (potentially large) file via aitoolbox.experiment.local_save.local_results_save.LocalResultsSaver.save_experiment_results saving results into the single multiple separate files via aitoolbox.experiment.local_save.local_results_save.LocalResultsSaver.save_experiment_results_separate_files
https://aitoolbox.readthedocs.io/en/latest/experiment/experiment_save.html
2021-10-16T11:15:24
CC-MAIN-2021-43
1634323584567.81
[]
aitoolbox.readthedocs.io
community.general.zfs_delegate_admin – Manage ZFS delegated administration (user admin privileges) Note This plugin is part of the community.general collection (version 3.7.0). To install it use: ansible-galaxy collection install community.general. To use it in a playbook, specify: community.general.zfs_delegate_admin. Synopsis Manages ZFS file system delegated administration permissions, which allow unprivileged users to perform ZFS operations normally restricted to the superuser. See the zfs allowsection of zfs(1M) for detailed explanations of options. This module attempts to adhere to the behavior of the command line tool as much as possible. Requirements The below requirements are needed on the host that executes this module. A ZFS/OpenZFS implementation that supports delegation with zfs allow, including: Solaris >= 10, illumos (all versions), FreeBSD >= 8.0R, ZFS on Linux >= 0.7.0. Examples - name: Grant `zfs allow` and `unallow` permission to the `adm` user with the default local+descendents scope community.general.zfs_delegate_admin: name: rpool/myfs users: adm permissions: allow,unallow - name: Grant `zfs send` to everyone, plus the group `backup` community.general.zfs_delegate_admin: name: rpool/myvol groups: backup everyone: yes permissions: send - name: Grant `zfs send,receive` to users `foo` and `bar` with local scope only community.general.zfs_delegate_admin: name: rpool/myfs users: foo,bar permissions: send,receive local: yes - name: Revoke all permissions from everyone (permissions specifically assigned to users and groups remain) community.general.zfs_delegate_admin: name: rpool/myfs everyone: yes state: absent
https://docs.ansible.com/ansible/latest/collections/community/general/zfs_delegate_admin_module.html
2021-10-16T12:38:07
CC-MAIN-2021-43
1634323584567.81
[]
docs.ansible.com
Date: Sat, 14 Sep 1996 10:25:16 -0500 (CDT) From: "S(pork)" <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: Package install 2.1.5, proc: table is full Message-ID: <[email protected]> In-Reply-To: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help Yeah, it's pretty interesting, when I go to the other console and do a ps -ax (ps -aux segmentation faults) I see about 80 zombie tar processes... What else in the install hasn't been completed, and how safe do you think it is to just reboot right now? Also, is there a log that notes the packages and the order they were installed upt to this point? Thanks, Charles On Fri, 13 Sep 1996, Doug White wrote: > On Fri, 13 Sep 1996, S(pork) wrote: > > > I swear I've seen the answer to this before, but I just checked the FAQ > > and searched the archive and I can't find it... Basically about 2/3 of > > the way through the package install, it fails, and I see the above message > > (proc: table is full) in the debug screen. > > > > I'd call it a bug and install the packages once the system is installed. > (use pkg_add to install packages.) > > Doug White | University of Oregon > Internet: [email protected] | Residence Networking Assistant > | Computer Science Major > Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=992214+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19960908.freebsd-questions
2021-10-16T13:07:16
CC-MAIN-2021-43
1634323584567.81
[]
docs.freebsd.org
Opt-in Controller Development Do you ever feel that its a little silly to create a controller method to render even the simplest page? Opt-in Controller Development speeds up development by allowing you to create your pages without using controllers. Variables can be passed to view files similar to how CI's routes work or can be set in the page itself and will cascade to all subviews of the page. You can even load in a library, model and helper. This method also allows for deeper website structures without the need for special routing to controllers. How it Works The CodeIgniter Router is modified to route all requests that don't map to a controller method to a default controller named page_router. This controller will do one of two things, depending on the fuel_mode value set in the FUEL config file: - render pages saved in the fuel database (not what we are doing here) - find a corresponding view file to the uri requested and map variables to the view file found the views/_variables folder What is a Variables File? A variables file is an array of variables that get passed to pages. The files are located in the views/_variables folder There are three levels of variables: - global - variables loaded for every controller-less page. - controller - variables loaded at what would normally be a controller level. A request of would load in the about controller variables. This level is good for setting all template variables for a section of your website (e.g. about). - page - variables loaded for just a specific page Global Variables Global variables can be set in the views/_variables/global.php file using the $vars array variable. Often times you will set default page variables in this file. By default, the following global variables are provided: $vars['layout'] = 'main'; $vars['page_title'] = ''; $vars['meta_keywords'] = ''; $vars['meta_description'] = ''; $vars['js'] = ''; $vars['css'] = ''; $vars['body_class'] = uri_segment(1).' '.uri_segment(2); Controller Variables Controller variables are applied to what would normally be the controller level of a website. For example, If you have an about section of your website at, you can create a views/_variables/about.php a variables file at views/_variables/about.php can be used. Variables in this file will overwrite any global variable with the same key. The variables will be extended automatically to all sub pages of the about section (e.g.) . Variables at the controller level use the same $vars variable as global variables. $vars['layout'] = 'level2'; $vars['page_title'] = 'About : My Website'; $vars['body_class'] = 'about'; Page Variables Page variables are applied to a specific URI path. They will overwrite any global or controller variable set with the same key. As an example, you may want to overwrite the page_title of About : My Website to Contact : My Website.. To do this, you can create a $pages array variable in your global or controller variables file (e.g. views/_variables/about.php) with the key being the URI of the page or a regular expression to match pages (just like routing files). The value is an array of variables to pass to the page. // controller level variables $vars['layout'] = 'level2'; $vars['page_title'] = 'About : My Website'; $vars['body_class'] = 'about'; // page specific $pages['about/contact'] = array('page_title' => 'Contact : My Website'); $pages['about/contact$|about/careers$'] = array('layout' => 'special'); Adding Variables from Within the View An alternative to creating $pages variables is to simply declare the variable in the view file using the fuel helper function fuel_set_var. By setting the variable this way, you will ensure that the variable will also get passed to the layout file containing the view file. <?php fuel_set_var('page_title', 'About : My Website'); ?> <h1>About My Website</h1> <p>A long, long, time ago, in a galaxy far, far away...</p> Using Variable Files Within a Controller If perhaps you need to use the variables in a controller (e.g. you have a form located at about/contact and an about controller variables file), you can load it like so: class About extends CI_Controller { function __construct() { parent::__construct(); } function contact() { // set your variables $vars = array('page_title' => 'Contact : My Website'); //... form code goes here // use Fuel_page to render so it will grab all opt-in variables and do any necessary parsing $this->fuel->pages->render('about/contact', $vars); } } Special Variables The following are special variable keys that can be used in the variables files: - helpers - a string or array of helpers to load. - libraries - a string or array of library classes to load. - models - a string or array of models to load. - layout - the path to the layout view. The $body variable is used within the layout to load the view file contents. - view - a specific view file to use for the page. By default it will look for a view file that matches the uri path. If it doesn't find one, then it will search the uri path for a corresponding view. - parse_view - determines whether the view file should be parsed. - allow_empty_content - determines whether to allow empty view content without showing the 404 page. - CI - the CodeIgniter super object variable
https://docs.getfuelcms.com/general/opt-in-controllers
2021-10-16T12:03:46
CC-MAIN-2021-43
1634323584567.81
[]
docs.getfuelcms.com
Use a calendar search to create an upcoming event listing This tutorial takes you through the steps to create an upcoming event listing using a Calendar Events Search Page asset. In this tutorial, you configure the page to look for events under a certain root node and list them in order of the closest starting date. Events with start dates in the past are not listed. Before you start Create calendar events with start dates in the future. These events display in the calendar search listing once you complete the configuration. Create the calendar events search page Create a Calendar Events Search Page asset named Upcoming Events. Go to the Details screen and click Edit to begin editing. Scroll down to the Root Nodes section and set the location of the root node under which you want to store calendar events. Optionally choose which statuses to include in the listing under the Asset Statuses to List field. The events are not yet published, so select all statuses in the list except for Archived. Click Save to save your configuration settings. You have now specified where to store calendar event assets. The next step is to configure the search parameters. Configure the search settings Configure the Calendar Events Search Page to act as an upcoming event listing. Go to the Search Fields screen of the Upcoming Events asset. Within the Events Date Filter Configuration section, add a new events date filter named listing-typeinside the Events Filter Fields fields. Click Save to load more configuration settings. The event date filter you created displays in a table. To the right of the listing, a drop-down is available for choosing the Filter type. Select Fuzzy Date from the drop-down. Click Save to load more configuration settings. You now have a selection of Fuzzy Date options available. From this list, select only the Upcoming Events option. Click Save to load more configuration settings. The Event Horizon setting lets you restrict how many events display in the Event Listing. You may not want to list every single upcoming event if there are events that are not scheduled to happen for a very long time. In this tutorial, you configure events to occur in the distant future. Set the Event Horizon value to 999. Click Save to finalize your changes to the Search settings. Configure the stored search settings Now that you have configured how the search fields display calendar events, you can set the Search page to behave like a listing page. You do this by telling the search page to show the results by default. When users land on this page, the page does an instant search for upcoming events using the listing-type setting. Go back to the Details screen of the search page. Scroll down to the Stored Query Location table in the Stored Search section. Select the following values: - Parameter Date Filter: listing-type - Source Set Value Click Save. You now have a text box under the Source column to enter a value. In this tutorial, you want to search for upcoming events, so type upcoming_eventsinto this box. Set Show the Results page to Yes to show the search results straight away. Click Save. Preview the Upcoming Events page. Notice how the calendar events you created (or imported) are listed using the default listing format, similar to this example:
https://docs.squiz.net/matrix/version/latest/tutorials/listings/create-an-upcoming-events-listing-using-a-calendar-search-page.html
2021-10-16T12:54:55
CC-MAIN-2021-43
1634323584567.81
[array(['../_images/calendar-search-page-upcoming-events-preview.png', 'calendar search page upcoming events preview'], dtype=object)]
docs.squiz.net
Closing chrisdiazart.com The Close Gap tool allows you<< - Select View > Show >Show Strokes or press K to see a preview of the result. - In the Tools toolbar, select the Close Gaps tool. - In the Camera orDrawing.
https://docs.toonboom.com/help/harmony-20/premium/colour/close-gaps.html
2021-10-16T11:20:16
CC-MAIN-2021-43
1634323584567.81
[array(['../Resources/Images/HAR/Stage/Colours/an_autoclosegap.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Colours/Steps/011_closegap.png', None], dtype=object) ]
docs.toonboom.com
Customizing Toolbars T-HFND-003-007 Some toolbars have buttons that are hidden by default to prevent clutter. You can customize which buttons are displayed in these toolbars, as well as the order in which the buttons are displayed. - Right-click on the toolbar that you want to customize. In the context menu, select Customize. The Toolbar Manager dialog appears.NOTES - If the Customize option is greyed out, this toolbar cannot be customized. - The Tools toolbar can only be customized in Flat Tools Toolbar mode—see Enabling the Flat Toolbar. To add a new toolbar button, select the button you want to add from the Available Tools list, then click on the Add the selected tool into the toolbar button. The button will be removed from the Available Tools list and added to the Toolbar list. To remove a button from the toolbar, select it in the Toolbar list, then click on the Remove the selected tool from the toolbar button. The button will be removed from the Toolbar list and added to the Available Tools list. - To reorder the buttons in the toolbar, select a button in the Toolbar list, then use the Move the tool up and Move the tool down buttons to change the button’s order. The order of the tools in the toolbar from left to right is the same as the order of the buttons in the Toolbar list from top to bottom. - Click OK when you are done customizing the toolbar.
https://docs.toonboom.com/help/harmony-20/premium/user-interface/customize-view-toolbar.html
2021-10-16T11:51:08
CC-MAIN-2021-43
1634323584567.81
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Interface/HAR11/HAR11_RemoveButtonFromToolbar.png', 'How to Remove a Button from a Toolbar in Toon Boom Harmony How to Remove a Button from a Toolbar in Toon Boom Harmony'], dtype=object) array(['../Resources/Images/HAR/Stage/Interface/HAR11/HAR11_Reorder_Buttons.png', 'How to reorder buttons in a toolbar in Toon Boom Harmony How to reorder buttons in a toolbar in Toon Boom Harmony'], dtype=object) ]
docs.toonboom.com
BtoB Magazine Names Blue Iceberg as a Top Direct Agency: Top Ranking Achieved for Second Consecutive Year Blue Iceberg was named a Top Agency by BtoB Magazine for the second year in a row. This recognizes Blue Iceberg's success helping clients embrace the internet as a strategic marketing medium. NEW YORK, NY - Blue Iceberg Interactive, an independent New York-based interactive agency, announced today that BtoB Magazine has named Blue Iceberg as one of the top agencies in the United States for its second consecutive year. Each year BtoB Magazine, a leading national publication for business-to-business marketers, honors the achievements of the top US business-to-business agencies. To be recognized as a Top Agency, honorees must offer full-service interactive marketing communications services and were judged on exceptional work performed in 2010, the percentage of their business that is b-to-b, new client wins, and the demonstrated growth of their business. "Interactive agencies grew last year as the economy picked up and clients increased online spending," said Kate Maddox, Executive Editor of BtoB Magazine. Blue Iceberg Interactive grew its business with existing customers and won new customers such as Borga Steel and Assured Guaranty. "Being recognized by BtoB Magazine for a second year underscores the success that Blue Iceberg has achieved as we have continued to help our clients embrace the internet as a strategic marketing medium, even during this difficult economic downturn," said Blue Iceberg's Cofounder Richard Cacciato. "We are pleased and honored that BtoB has selected Blue Iceberg once more, recognizing the excellent work that we perform for our clients.".
http://docs.erbtest.org/news/view.php?id=65
2018-07-16T00:40:23
CC-MAIN-2018-30
1531676589029.26
[]
docs.erbtest.org
When Amanda Enterprise is configured and licensed for Hyper-V Virtual Machine backup, Amanda uses the Microsoft Windows Volume Shadow Copy Service (VSS) to back up the Hyper-V virtual machines. To backup Hyper-V virtual machine, Zmanda Windows client will have to be installed on the Hyper-V hypervisor (Hyper-V core installation is also supported). You need a Hyper-V license for each Windows Hypervisor. The windows client installed on the hypervisor can be backup virtual machines residing on the hypervisor. Hyper-V virtual machine backups are always Full backups. Differential or Incremental backups are not supported. The backups are performed using the Hyper-V VSS Writer. Hyper-V uses one of two mechanisms to back up each Virtual Machine. The default backup mechanism is called the Saved State method, where the VM is put into a saved state during the processing of the snapshot event, snapshots are taken of the appropriate volumes, and the VM is returned to previous state as part of post-snapshot event. The other backup mechanism is called the Child VM Snapshot method, which uses the VSS writers inside the Hyper-V child virtual machine to participate in the backup. The Child VM is the virtual machine being backed up. For the Child VM Snapshot method to be used, following conditions must be met: The backup mechanism is selected for each virtual machine (child VM) in Hyper-V management interface. In ZMC Backup What page, you can select Hyper-V from the Applications drop down menu. The figure below shows the Hyper-V virtual machine configuration as a backup object in the Backup set. Host Name is the IP address or the name of Windows Hyper-V hypervisor. Click Discover button to discover all the virtual machines on the hypervisor. Zmanda Windows Client must be installed on the hypervisor to discover virtual machines. Data Source : Users may select individual virtual machines with the Hyper-V hypervisor backup. This will backup the virtual hard disk files (.vhd or .vhdx) as well as the snapshot files (.avhd or .avhdx) of the virtual machine(s) selected. The Initial Store file is also backed up. The InitialStore.xml file, which contains the Hyper-V Authorization Store is backed up. Encryption and Compression : Select these options as desired. They are described in more detail here. After clicking Add button, the backup object is added to the table as shown below. You can edit the entry by clicking on the table entry. For all Microsoft SQL Server, Microsoft Exchange, Microsoft Sharepoint, and Hyper-V backups, the name of the new machine must exactly match the name of the old machine. Hyper-V virtual machines can be restored to the original Hyper-V hypervisor. The Name of hypervisor has to match the name of machine from where backup was performed (even if it is a different physical machine). The actual process of restoration is identical to all other applications. Viewing Details:
http://docs.zmanda.com/Project:Amanda_Enterprise_3.3/Zmanda_App_modules/Microsoft_HyperV
2018-07-16T01:03:14
CC-MAIN-2018-30
1531676589029.26
[]
docs.zmanda.com
Frequently...¶ ...asked questions¶ How can I contribute?¶ Okay, this one hasn’t been asked “frequently” in the strict meaning of the word, but anyway. Glad you’re interested! Your help is welcome! Please check the contribute section. ...encountered problems¶ After updating, you might get: KeyError at /en/stuff/ 'SomethingPlugin' This means a plugin was removed but is still in the database. Just run: python source/manage.py cms delete_orphaned_plugins --noinput if you were using the plugin that was removed, then those use cases will be gone. The alternative is reverting the update. ElasticSearch / Haystack can’t connect¶ You can test that elasticsearch is running using a http request on port 9200, like so: curl -X GET If it isn’t, there could be a number of reasons. In my case, I had to set START_DAEMON=true in /etc/default/elasticsearch (source) You might also have the wrong version of the Python binding, see here
http://svsite.readthedocs.io/en/latest/faq/
2018-07-16T00:40:50
CC-MAIN-2018-30
1531676589029.26
[]
svsite.readthedocs.io
The PDO Event Store is an implementation of prooph/event-store that supports MySQL and MariaDB as well as PostgreSQL. For a better understanding, we recommend to read the event-store docs, first. The PostgresEventStore has a better performance (at least with default database configuration) and implements the TransactionalEventStore interface. If you need maximum performance or transaction support, we recommend to use the PostgresEventStore instead of the MariaDb-/MySqlEventStore. All known event streams are stored in an event stream table, so with a simple table lookup, you can find out what streams are available in your store. Same goes for the projections, all known projections are stored in a single table, so you can see what projections are available, and what their current state / stream positition / status is. When reading from an event streams with multiple aggregates (especially when using projections), you could end of with millions of events loaded in memory. Therefor the pdo-event-store will load events only in batches of 10000 by default. You can change to value to something higher to achieve even more performance with higher memory usage, or decrease it to reduce memory usage even more, with the drawback of having a not as good performance. It is important to use the same database for event-store and projection manager, you could use distinct pdo connections if you want to, but they should be both connected to the same database. Otherwise you will run into issues, because the projection manager needs to query the underlying database table of the event-store for its querying API. It's recommended to just use the same pdo connection instance for both. This component ships with 9 default persistance strategies: All persistance strategies have the following in common: The generated table name for a given stream is: '_' . sha1($streamName->toString() so a sha1 hash of the stream name, prefixed with an underscore is the used table name. You can query the event_streams table to get real stream name to stream name mapping. You can implement your own persistance strategy by implementing the Prooph\EventStore\Pdo\PersistenceStrategy interface. This stream strategy should be used together with event-sourcing, if you use one stream per aggregate. For example, you have 2 instances of two different aggregates named user-123, user-234, todo-345 and todo-456, you would have 4 different event streams, one for each aggregate. This stream strategy is the most performant of all (with downsides, see notes), but it will create a lot of database tables, which is something not everyone likes (especially DB admins). All needed database tables will be created automatically for you. Note: For event-store projections the aggregate stream strategy is not that performant anymore, consider using CategoryStreamProjectionRunner from the [standard-projections](() repository. But even than, the projections would be slow, because the projector needs to check all the streams one-by-one for any new events. Because of this speed of finding and projecting any new events depends on the number of streams which means it would rapidly decrease as you add more data to your event store. You could however drastically improve the projections, if you would add a category stream projection as event-store-plugin. (This doesn't exist, yet) This stream strategy should be used together with event-sourcing, if you want to store all events of an aggregate type into a single stream, for example user-123 and user-234 should both be stored into a stream called user. You can also store all stream of all aggregate types into a single stream, for example your aggregates user-123, user-234, todo-345 and todo-456 can all be stored into a stream called event_stream. This stream strategy is slightly less performant then the aggregate stream strategy. You need to setup the database table yourself when using this strategy. An example script to do that can be found here. This stream strategy is not meant to be used for event-sourcing. It will create simple event streams without any constraints at all, so having two events of the same aggregate with the same version will not rise any error. This is very useful for projections, where you copy events from one stream to another (the resulting stream may need to use the simple stream strategy) or when you want to use the event-store outside the scope of event-sourcing. You need to setup the database table yourself when using this strategy. An example script to do that can be found here. When you query the event streams a lot, it might be a good idea to create your own stream strategy, so you can add custom indexes to your database tables. When using with the MetadataMatcher, take care that you add the metadata matches in the right order, so they can match your indexes. You can configure the event store to disable transaction handling completely. In order to do this, set the last parameter in the constructor to true (or configure your interop config factory accordingly, key is disable_transaction_handling). Enabling this feature will disable all transaction handling and you have to take care yourself to start, commit and rollback transactions. Note: This could lead to problems using the event store, if you did not manage to handle the transaction handling accordingly. This is your problem and we will not provide any support for problems you encounter while doing so.
http://docs.getprooph.org/event-store/implementations/pdo_event_store/variants.html
2018-07-16T00:26:42
CC-MAIN-2018-30
1531676589029.26
[]
docs.getprooph.org
Django 1.10.1 release notes¶ September 1, 2016 Django 1.10.1 fixes several bugs in 1.10. Bugfixes¶ - Fixed a crash in MySQL connections where SELECT @@SQL_AUTO_IS_NULLdoesn’t return a result (#26991). - Allowed User.is_authenticatedand User.is_anonymousproperties to be compared using ==, !=, and |(#26988, #27154). - Removed the broken BaseCommand.usage()method which was for optparsesupport (#27000). - Fixed a checks framework crash with an empty Meta.default_permissions(#26997). - Fixed a regression in the number of queries when using RadioSelectwith a ModelChoiceFieldform field (#27001). - Fixed a crash if request.META['CONTENT_LENGTH']is an empty string (#27005). - Fixed the isnulllookup on a ForeignKeywith its to_fieldpointing to a CharFieldor pointing to a CharFielddefined with primary_key=True(#26983). - Prevented the migratecommand from raising InconsistentMigrationHistoryin the presence of unapplied squashed migrations (#27004). - Fixed a regression in Client.force_login()which required specifying a backendrather than automatically using the first one if multiple backends are configured (#27027). - Made QuerySet.bulk_create()properly initialize model instances on backends, such as PostgreSQL, that support returning the IDs of the created records so that many-to-many relationships can be used on the new objects (#27026). - Fixed crash of django.views.static.serve()with show_indexesenabled (#26973). - Fixed ClearableFileInputto avoid the requiredHTML attribute when initial data exists (#27037). - Fixed annotations with database functions when combined with lookups on PostGIS (#27014). - Reallowed the {% for %}tag to unpack any iterable (#27058). - Made makemigrationsskip inconsistent history checks on non-default databases if database routers aren’t in use or if no apps can be migrated to the database (#27054, #27110, #27142). - Removed duplicated managers in Model._meta.managers(#27073). - Fixed contrib.admindocscrash when a view is in a class, such as some of the admin views (#27018). - Reverted a few admin checks that checked field.many_to_manyback to isinstance(field, models.ManyToManyField)since it turned out the checks weren’t suitable to be generalized like that (#26998). - Added the database alias to the InconsistentMigrationHistorymessage raised by makemigrationsand migrate(#27089). - Fixed the creation of ContentTypeand Permissionobjects for models of applications without migrations when calling the migratecommand with no migrations to apply (#27044). - Included the already applied migration state changes in the Appsinstance provided to the pre_migratesignal receivers to allow ContentTyperenaming to be performed on model rename (#27100). - Reallowed subclassing UserCreationFormwithout USERNAME_FIELDin Meta.fields(#27111). - Fixed a regression in model forms where model fields with a defaultthat didn’t appear in POST data no longer used the default(#27039).
https://docs.djangoproject.com/en/1.11/releases/1.10.1/
2018-07-16T01:03:34
CC-MAIN-2018-30
1531676589029.26
[]
docs.djangoproject.com
The Linux PC or laptop on which you install Horizon Client, and the peripherals it uses, must meet certain system requirements. These system requirements pertain to the Horizon Client for Linux that VMware makes available. In addition, several VMware partners offer thin and zero client devices for View. Starting with version 7.0, View Agent is renamed Horizon Agent. VMware Blast, the display protocol that is available starting with Horizon Client 4.0 and Horizon Agent 7.0, is also known as VMware Blast Extreme. Architecture i386, x86_64, ARM Memory At least 2GB of RAM Operating system - OpenSSL requirement Horizon Client requires a specific version of OpenSSL. The correct version is automatically downloaded and installed. View Connection Server, Security Server, and View Agent or Horizon Agent Latest maintenance release of View 6.2.x and later releases If client systems connect from outside the corporate firewall, VMware recommends that you use a security server. With a security server, client systems will not require a VPN connection. Remote (hosted) applications are available only on Horizon 6.0 (or later) View servers. Display protocol VMware Blast (requires Horizon Agent 7.0 or later) PCoIP RDP Screen resolution on the client system Minimum: 1024 X 768 pixels Hardware requirements for VMware Blast and PCoIP x86- or x64- or x64-based processor with SSE2 extensions, with a 800MHz or higher processor speed. 128MB RAM. Software requirements for Microsoft RDP Use the latest rdesktop version available. Software requirements for FreeRDP If you plan to use an RDP connection to View desktops and you would prefer to use a FreeRDP client for the connection, you must install the correct version of FreeRDP and any applicable patches. See Install and Configure FreeRDP. Other software requirements Horizon Client also has certain other software requirements, depending on the Linux distribution you use. Be sure to allow the Horizon Client installation wizard to scan your system for library compatibilities and dependencies. The following list of requirements pertains only to Ubuntu distributions. libudev0.so.0Note: Beginning with Horizon Client 4.2, libudev0 is required to launch Horizon Client. By default, libudev0 is not installed in Ubuntu 14.04. To support idle session timeouts: libXsso.so.1. To support Flash URL redirection: libexpat.so.1. (The libexpat.so.0 file is no longer required.) To improve performance when using multiple monitors, enable Xinerama.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/4.6/linux-client-installation/GUID-DF3FBF68-3C78-45AA-9503-202BD683408F.html
2018-07-16T01:20:29
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
You can disable a host to prevent vApps from starting up on the host. Virtual machines that are already running on the host are not affected. About this task To perform maintenance on a host, migrate all vApps off of the host or stop all vApps and then disable the host. Procedure - Click the Manage & Monitor tab and click Hosts in the left pane. - Right-click the host name and select Enable Host or Disable Host. Results vCloud Director enables or disables the host for all provider virtual datacenters that use its resources.
https://docs.vmware.com/en/vCloud-Director/8.10/com.vmware.vcloud.admin.doc_810/GUID-044C08EF-9796-4CFE-8297-FCDF251C254E.html
2018-07-16T00:49:23
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
You can clear filtering and search results to view the list of all log events. About this task After you perform a search on the events list, the search results remain on the screen until you clear all queries. Prerequisites Verify that you are logged in to the vRealize Log Insight Web user interface. The URL format is, where log_insight-host is the IP address or host name of the vRealize Log Insight virtual appliance. Procedure - On the Interactive Analytics tab, remove all filters. - If text appears in the search text box, delete it. - Click the Search button.
https://docs.vmware.com/en/vRealize-Log-Insight/4.5/com.vmware.log-insight.user.doc/GUID-18D9FB11-C841-4E0E-8703-72401D3EB75C.html
2018-07-16T01:11:04
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
Gw basic 11 - giftwares home page Baskets 35 #82250 triangle vine basket fits 3 - 6" pots 12 pcs/cs $8.98 each sewn-in liner whitewashedbulb pan baskets #2867ww 9" opening x 5.5" tall fits... Bamboo basket | chinese restaurant | southbank | portside A small part of shanghai brought to australia, bamboo basket specialises in southern and northern chinese cuisine, located at southbank and portside wharf. Echter's 2016 landscape plant list Baptisia decadence cherry jub: baptisia lactea. barberry - see berberis: barberry, dwarf golden. berberis thunbergii 'aurea nana ' basket of gold. basket of gold gold... Creative home furnishings, inc Dakotah line list - pillows page 2 of 14 613138172145 bamboo pleat plw fringe 12x17 chestnut 613138177270 bamboo pleat plw knife edge 12x17 coral
https://www.docs-archive.net/Bamboo-Basket.pdf
2018-07-16T00:46:30
CC-MAIN-2018-30
1531676589029.26
[]
www.docs-archive.net
17. The Theory Behind OpenMM: Introduction¶ 17.1. Overview¶ This guide describes the mathematical theory behind OpenMM. For each computational class, it describes what computations the class performs and how it should be used. This serves two purposes. If you are using OpenMM within an application, this guide teaches you how to use it correctly. If you are implementing the OpenMM API for a new Platform, it teaches you how to correctly implement the required kernels. On the other hand, many details are intentionally left unspecified. Any behavior that is not specified either in this guide or in the API documentation is left up to the Platform, and may be implemented in different ways by different Platforms. For example, an Integrator is required to produce a trajectory that satisfies constraints to within the user-specified tolerance, but the algorithm used to enforce those constraints is left up to the Platform. Similarly, this guide provides the functional form of each Force, but does not specify what level of numerical precision it must be calculated to. This is an essential feature of the design of OpenMM, because it allows the API to be implemented efficiently on a wide variety of hardware and software platforms, using whatever methods are most appropriate for each platform. On the other hand, it means that a single program may produce meaningfully different results depending on which Platform it uses. For example, different constraint algorithms may have different regions of convergence, and thus a time step that is stable on one platform may be unstable on a different one. It is essential that you validate your simulation methodology on each Platform you intend to use, and do not assume that good results on one Platform will guarantee good results on another Platform when using identical parameters. 17.2. Units¶ There are several different sets of units widely used in molecular simulations. For example, energies may be measured in kcal/mol or kJ/mol, distances may be in Angstroms or nm, and angles may be in degrees or radians. OpenMM uses the following units everywhere. These units have the important feature that they form an internally consistent set. For example, a force always has the same units (kJ/mol/nm) whether it is calculated as the gradient of an energy or as the product of a mass and an acceleration. This is not true in some other widely used unit systems, such as those that express energy in kcal/mol. The header file Units.h contains predefined constants for converting between the OpenMM units and some other common units. For example, if your application expresses distances in Angstroms, you should multiply them by OpenMM::NmPerAngstrom before passing them to OpenMM, and positions calculated by OpenMM should be multiplied by OpenMM::AngstromsPerNm before passing them back to your application. 18. Standard Forces¶ The following classes implement standard force field terms that are widely used in molecular simulations. 18.1. HarmonicBondForce¶ Each harmonic bond is represented by an energy term of the form where x is the distance between the two particles, x0 is the equilibrium distance, and k is the force constant. This produces a force of magnitude k(x-x0). Be aware that some force fields define their harmonic bond parameters in a slightly different way: E = k´(x-x0)2, leading to a force of magnitude 2k´(x-x0). Comparing these two forms, you can see that k = 2k´. Be sure to check which form a particular force field uses, and if necessary multiply the force constant by 2. 18.2. HarmonicAngleForce¶ Each harmonic angle is represented by an energy term of the form where \(\theta\) is the angle formed by the three particles, \(\theta_0\) is the equilibrium angle, and k is the force constant. As with HarmonicBondForce, be aware that some force fields define their harmonic angle parameters as E = k´(\(\theta\)-\(\theta\)0)2. Be sure to check which form a particular force field uses, and if necessary multiply the force constant by 2. 18.3. PeriodicTorsionForce¶ Each torsion is represented by an energy term of the form where \(\theta\) is the dihedral angle formed by the four particles, \(\theta_0\) is the phase offset, n is the periodicity, and k is the force constant. 18.4. RBTorsionForce¶ Each torsion is represented by an energy term of the form where \(\phi\) is the dihedral angle formed by the four particles and C0 through C5 are constant coefficients. For reason of convention, PeriodicTorsionForce and RBTorsonForce define the torsion angle differently. \(\theta\) is zero when the first and last particles are on the same side of the bond formed by the middle two particles (the cis configuration), whereas \(\phi\) is zero when they are on opposite sides (the trans configuration). This means that \(\theta\) = \(\phi\) - \(\pi\). 18.5. CMAPTorsionForce¶ Each torsion pair is represented by an energy term of the form where \(\theta_1\) and \(\theta_2\) are the two dihedral angles coupled by the term, and f(x,y) is defined by a user-supplied grid of tabulated values. A natural cubic spline surface is fit through the tabulated values, then evaluated to determine the energy for arbitrary (\(\theta_1\), \(\theta_2\)) pairs. 18.6. NonbondedForce¶ 18.6.1. Lennard-Jones Interaction¶ The Lennard-Jones interaction between each pair of particles is represented by an energy term of the form where r is the distance between the two particles, \(\sigma\) is the distance at which the energy equals zero, and \(\epsilon\) sets the strength of the interaction. If the NonbondedMethod in use is anything other than NoCutoff and r is greater than the cutoff distance, the energy and force are both set to zero. Because the interaction decreases very quickly with distance, the cutoff usually has little effect on the accuracy of simulations. Optionally you can an exception has been added for a pair of particles, \(\sigma\) and \(\epsilon\) are the parameters specified by the exception. Otherwise they are determined from the parameters of the individual particles using the Lorentz-Berthelot combining rule: When using periodic boundary conditions, NonbondedForce can optionally add a term (known as a long range dispersion correction) to the energy that approximately represents the contribution from all interactions beyond the cutoff distance:[38] where N is the number of particles in the system, V is the volume of the periodic box, \(r_c\) is the cutoff distance, \(\sigma_{ij}\) and \(\epsilon_{ij}\) are the interaction parameters between particle i and particle j, and \(\langle \text{...} \rangle\) represents an average over all pairs of particles in the system. When a switching function is in use, there is also a contribution to the correction that depends on the integral of E·(1-S) over the switching interval. The long range dispersion correction is primarily useful when running simulations at constant pressure, since it produces a more accurate variation in system energy with respect to volume. The Lennard-Jones interaction is often parameterized in two other equivalent ways. One is where \(r_\mathit{min}\) (sometimes known as \(d_\mathit{min}\); this is not a radius) is the center-to-center distance at which the energy is minimum. It is related to \(\sigma\) by In turn, \(r_\mathit{min}\) is related to the van der Waals radius by \(r_\mathit{min} = 2r_\mathit{vdw}\). Another common form is The coefficients A and B are related to \(\sigma\) and \(\epsilon\) by 18.6.2. Coulomb Interaction Without Cutoff¶ The form of the Coulomb interaction between each pair of particles depends on the NonbondedMethod in use. For NoCutoff, it is given by where q1 and q2 are the charges of the two particles, and r is the distance between them. 18.6.3. Coulomb Interaction With Cutoff¶ For CutoffNonPeriodic or CutoffPeriodic, it is modified using the reaction field approximation. This is derived by assuming everything beyond the cutoff distance is a solvent with a uniform dielectric constant.[39] where \(r_\mathit{cutoff}\) is the cutoff distance and \(\epsilon_\mathit{solvent}\) is the dielectric constant of the solvent. In the limit \(\epsilon_\mathit{solvent}\) >> 1, this causes the force to go to zero at the cutoff. 18.6.4. Coulomb Interaction With Ewald Summation¶ For Ewald, the total Coulomb energy is the sum of three terms: the direct space sum, the reciprocal space sum, and the self-energy term.[40] In the above expressions, the indices i and j run over all particles, n = (n1, n2, n3) runs over all copies of the periodic cell, and k = (k1, k2, k3) runs over all integer wave vectors from (-kmax, -kmax, -kmax) to (kmax, kmax, kmax) excluding (0, 0, 0). \(\mathbf{r}_i\) is the position of particle i , while \(r_{ij}\) is the distance between particles i and j. V is the volume of the periodic cell, and \(\alpha\) is an internal parameter. In the direct space sum, all pairs that are further apart than the cutoff distance are ignored. Because the cutoff is required to be less than half the width of the periodic cell, the number of terms in this sum is never greater than the square of the number of particles. The error made by applying the direct space cutoff depends on the magnitude of \(\text{erfc}({\alpha}r_\mathit{cutoff})\). Similarly, the error made in the reciprocal space sum by ignoring wave numbers beyond kmax depends on the magnitude of \(\text{exp}(-({\pi}k_{max}/{\alpha})^2\)). By changing \(\alpha\), one can decrease the error in either term while increasing the error in the other one. Instead of having the user specify \(\alpha\) and -kmax, NonbondedForce instead asks the user to choose an error tolerance \(\delta\). It then calculates \(\alpha\) as Finally, it estimates the error in the reciprocal space sum as where d is the width of the periodic box, and selects the smallest value for kmax which gives error < \(\delta\). (If the box is not square, kmax will have a different value along each axis.) This means that the accuracy of the calculation is determined by \(\delta\). \(r_\mathit{cutoff}\) does not affect the accuracy of the result, but does affect the speed of the calculation by changing the relative costs of the direct space and reciprocal space sums. You therefore should test different cutoffs to find the value that gives best performance; this will in general vary both with the size of the system and with the Platform being used for the calculation. When the optimal cutoff is used for every simulation, the overall cost of evaluating the nonbonded forces scales as O(N3/2) in the number of particles. Be aware that the error tolerance \(\delta\) is not a rigorous upper bound on the errors. The formulas given above are empirically found to produce average relative errors in the forces that are less than or similar to \(\delta\) across a variety of systems and parameter values, but no guarantees are made. It is important to validate your own simulations, and identify parameter values that produce acceptable accuracy for each system. 18.6.5. Coulomb Interaction With Particle Mesh Ewald¶ The Particle Mesh Ewald (PME) algorithm[41] is similar to Ewald summation, but instead of calculating the reciprocal space sum directly, it first distributes the particle charges onto nodes of a rectangular mesh using 5th order B-splines. By using a Fast Fourier Transform, the sum can then be computed very quickly, giving performance that scales as O(N log N) in the number of particles (assuming the volume of the periodic box is proportional to the number of particles). As with Ewald summation, the user specifies the direct space cutoff \(r_\mathit{cutoff}\) and error tolerance \(\delta\). NonbondedForce then selects \(\alpha\) as and the number of nodes in the mesh along each dimension as where d is the width of the periodic box along that dimension. Alternatively, the user may choose to explicitly set values for these parameters. (Note that some Platforms may choose to use a larger value of \(n_\mathit{mesh}\) than that given by this equation. For example, some FFT implementations require the mesh size to be a multiple of certain small prime numbers, so a Platform might round it up to the nearest permitted value. It is guaranteed that \(n_\mathit{mesh}\) will never be smaller than the value given above.) The comments in the previous section regarding the interpretation of \(\delta\) for Ewald summation also apply to PME, but even more so. The behavior of the error for PME is more complicated than for simple Ewald summation, and while the above formulas will usually produce an average relative error in the forces less than or similar to \(\delta\), this is not a rigorous guarantee. PME is also more sensitive to numerical round-off error than Ewald summation. For Platforms that do calculations in single precision, making \(\delta\) too small (typically below about 5·10-5) can actually cause the error to increase. 18.6.6. Lennard-Jones Interaction With Particle Mesh Ewald¶ The PME algorithm can also be used for Lennard-Jones interactions. Usually this is not necessary, since Lennard-Jones forces are short ranged, but there are situations (such as membrane simulations) where neglecting interactions beyond the cutoff can measurably affect results. For computational efficiency, certain approximations are made[42]. Interactions beyond the cutoff distance include only the attractive \(1/r^6\) term, not the repulsive \(1/r^{12}\) term. Since the latter is much smaller than the former at long distances, this usually has negligible effect. Also, the interaction between particles farther apart than the cutoff distance is computed using geometric combination rules: The effect of this approximation is also quite small, and it is still far more accurate than ignoring the interactions altogether (which is what would happen with PME). The formula used to compute the number of nodes along each dimension of the mesh is slightly different from the one used for Coulomb interactions: As before, this is an empirical formula. It will usually produce an average relative error in the forces less than or similar to \(\delta\), but that is not guaranteed. 18.7. GBSAOBCForce¶ 18.7.1. Generalized Born Term¶ GBSAOBCForce consists of two energy terms: a Generalized Born Approximation term to represent the electrostatic interaction between the solute and solvent, and a surface area term to represent the free energy cost of solvating a neutral molecule. The Generalized Born energy is given by[21] where the indices i and j run over all particles, \(\epsilon_\mathit{solute}\) and \(\epsilon_\mathit{solvent}\) are the dielectric constants of the solute and solvent respectively, \(q_i\) is the charge of particle i, and \(d_{ij}\) is the distance between particles i and j. \(f_\text{GB}(d_{ij}, R_i, R_j)\) is defined as \(R_i\) is the Born radius of particle i, which calculated as where \(\alpha\), \(\beta\), and \(\gamma\) are the GBOBCII parameters \(\alpha\) = 1, \(\beta\) = 0.8, and \(\gamma\) = 4.85. \(\rho_i\) is the adjusted atomic radius of particle i, which is calculated from the atomic radius \(r_i\) as \(\rho_i = r_i-0.009\) nm. \(\Psi_i\) is calculated as an integral over the van der Waals spheres of all particles outside particle i: where \(\theta\)(r) is a step function that excludes the interior of particle i from the integral. 18.7.2. Surface Area Term¶ The surface area term is given by[43][44] where \(r_i\) is the atomic radius of particle i, \(r_i\) is its atomic radius, and \(r_\mathit{solvent}\) is the solvent radius, which is taken to be 0.14 nm. The default value for the energy scale \(E_{SA}\) is 2.25936 kJ/mol/nm2. 18.8. GayBerneForce¶ This is similar to the Lennard-Jones interaction described in section 18.6.1, but instead of being based on the distance between two point particles, it is based on the distance of closest approach between two ellipsoids.[45] Let \(\mathbf{A}_1\) and \(\mathbf{A}_2\) be rotation matrices that transform from the lab frame to the body frames of two interacting ellipsoids. These rotations are determined from the positions of other particles, as described in the API documentation. Let \(\mathbf{r}_{12}\) be the vector pointing from particle 1 to particle 2, and \(\hat{\mathbf{r}}_{12}=\mathbf{r}_{12}/|\mathbf{r}_{12}|\). Let \(\mathbf{S}_1\) and \(\mathbf{S}_2\) be diagonal matrices containing the three radii of each particle: The energy is computed as a product of three terms: The first term describes the distance dependence, and is very similar in form to the Lennard-Jones interaction: where \(h_{12}\) is an approximation to the distance of closest approach between the two ellipsoids: The second term adjusts the energy based on the relative orientations of the two ellipsoids: The third term applies the user-defined scale factors \(e_a\), \(e_b\), and \(e_c\) that adjust the strength of the interaction along each axis: When using a cutoff, you can optionally. 18.9. AndersenThermostat¶ AndersenThermostat couples the system to a heat bath by randomly selecting a subset of particles at the start of each time step, then setting their velocities to new values chosen from a Boltzmann distribution. This represents the effect of random collisions between particles in the system and particles in the heat bath.[46] The probability that a given particle will experience a collision in a given time step is where f is the collision frequency and \(\Delta t\) is the step size. Each component of its velocity is then set to where T is the thermostat temperature, m is the particle mass, and R is a random number chosen from a normal distribution with mean of zero and variance of one. 18.10. MonteCarloBarostat¶ MonteCarloBarostat models the effect of constant pressure by allowing the size of the periodic box to vary with time.[47][48] At regular intervals, it attempts a Monte Carlo step by scaling the box vectors and the coordinates of each molecule’s center by a factor s. The scale factor s is chosen to change the volume of the periodic box from V to V+\(\Delta\)V: The change in volume is chosen randomly as where A is a scale factor and r is a random number uniformly distributed between -1 and 1. The step is accepted or rejected based on the weight function where \(\Delta E\) is the change in potential energy resulting from the step, P is the pressure being applied to the system, N is the number of molecules in the system, \(k_B\) is Boltzmann’s constant, and T is the system temperature. In particular, if \(\Delta W\le 0\) the step is always accepted. If \(\Delta W > 0\), the step is accepted with probability \(\text{exp}(-\Delta W/k_B T)\). This algorithm tends to be more efficient than deterministic barostats such as the Berendsen or Parrinello-Rahman algorithms, since it does not require an expensive virial calculation at every time step. Each Monte Carlo step involves two energy evaluations, but this can be done much less often than every time step. It also does not require you to specify the compressibility of the system, which usually is not known in advance. The scale factor A that determines the size of the steps is chosen automatically to produce an acceptance rate of approximately 50%. It is initially set to 1% of the periodic box volume. The acceptance rate is then monitored, and if it varies too much from 50% then A is modified accordingly. Each Monte Carlo step modifies particle positions by scaling the centroid of each molecule, then applying the resulting displacement to each particle in the molecule. This ensures that each molecule is translated as a unit, so bond lengths and constrained distances are unaffected. MonteCarloBarostat assumes the simulation is being run at constant temperature as well as pressure, and the simulation temperature affects the step acceptance probability. It does not itself perform temperature regulation, however. You must use another mechanism along with it to maintain the temperature, such as LangevinIntegrator or AndersenThermostat. 18.11. MonteCarloAnisotropicBarostat¶ MonteCarloAnisotropicBarostat is very similar to MonteCarloBarostat, but instead of scaling the entire periodic box uniformly, each Monte Carlo step scales only one axis of the box. This allows the box to change shape, and is useful for simulating anisotropic systems whose compressibility is different along different directions. It also allows a different pressure to be specified for each axis. You can specify that the barostat should only be applied to certain axes of the box, keeping the other axes fixed. This is useful, for example, when doing constant surface area simulations of membranes. 18.12. MonteCarloMembraneBarostat¶ MonteCarloMembraneBarostat is very similar to MonteCarloBarostat, but it is specialized for simulations of membranes. It assumes the membrane lies in the XY plane. In addition to applying a uniform pressure to regulate the volume of the periodic box, it also applies a uniform surface tension to regulate the cross sectional area of the periodic box in the XY plane. The weight function for deciding whether to accept a step is where S is the surface tension and \(\Delta\)A is the change in cross sectional area. Notice that pressure and surface tension are defined with opposite senses: a larger pressure tends to make the box smaller, but a larger surface tension tends to make the box larger. MonteCarloMembraneBarostat offers some additional options to customize the behavior of the periodic box: - The X and Y axes can be either - isotropic (they are always scaled by the same amount, so their ratio remains fixed) - anisotropic (they can change size independently) - The Z axis can be either - free (its size changes independently of the X and Y axes) - fixed (its size does not change) - inversely varying with the X and Y axes (so the total box volume does not change) 18.13. CMMotionRemover¶ CMMotionRemover prevents the system from drifting in space by periodically removing all center of mass motion. At the start of every n’th time step (where n is set by the user), it calculates the total center of mass velocity of the system: where \(m_i\) and \(\mathbf{v}_i\) are the mass and velocity of particle i. It then subtracts \(\mathbf{v}_\text{CM}\) from the velocity of every particle. 19. Custom Forces¶ In addition to the standard forces described in the previous chapter, OpenMM provides a number of “custom” force classes. These classes provide detailed control over the mathematical form of the force by allowing the user to specify one or more arbitrary algebraic expressions. The details of how to write these custom expressions are described in section 19.12. 19.1. CustomBondForce¶ CustomBondForce is similar to HarmonicBondForce in that it represents an interaction between certain pairs of particles as a function of the distance between them, but it allows the precise form of the interaction to be specified by the user. That is, the interaction energy of each bond-bond parameters are defined by specifying a value for each bond. 19.2. CustomAngleForce¶ CustomAngleForce is similar to HarmonicAngleForce in that it represents an interaction between sets of three particles as a function of the angle between them, but it allows the precise form of the interaction to be specified by the user. That is, the interaction energy of each angle is given by where \(f(\theta)\) is a user defined mathematical expression. In addition to depending on the angle \(\theta\), the energy may also depend on an arbitrary set of user defined parameters. Parameters may be specified in two ways: - Global parameters have a single, fixed value. - Per-angle parameters are defined by specifying a value for each angle. 19.3. CustomTorsionForce¶ CustomTorsionForce is similar to PeriodicTorsionForce in that it represents an interaction between sets of four particles as a function of the dihedral angle between them, but it allows the precise form of the interaction to be specified by the user. That is, the interaction energy of each angle is given by where \(f(\theta)\) is a user defined mathematical expression. The angle \(\theta\) is guaranteed to be in the range [-π, π]. Like PeriodicTorsionForce, it is defined to be zero when the first and last particles are on the same side of the bond formed by the middle two particles (the cis configuration). In addition to depending on the angle \(\theta\), the energy may also depend on an arbitrary set of user defined parameters. Parameters may be specified in two ways: - Global parameters have a single, fixed value. - Per-torsion parameters are defined by specifying a value for each torsion. 19.4. CustomNonbondedForce¶ CustomNonbondedForce is similar to NonbondedForce in that it represents a pairwise interaction between all particles in the System, but it allows the precise form of the interaction to be specified by the user. That is, the interaction energy between each pair of particles-particle parameters are defined by specifying a value for each particle. A CustomNonbondedForce can optionally be restricted to only a subset of particle pairs in the System. This is done by defining “interaction groups”. See the API documentation for details. When using a cutoff, a switching function can optionally be applied using periodic boundary conditions, CustomNonbondedForce can optionally add a term (known as a long range truncation correction) to the energy that approximately represents the contribution from all interactions beyond the cutoff distance:[38] where N is the number of particles in the system, V is the volume of the periodic box, and \(\langle \text{...} \rangle\) represents an average over all pairs of particles in the system. When a switching function is in use, there is an additional contribution to the correction given by The long range dispersion correction is primarily useful when running simulations at constant pressure, since it produces a more accurate variation in system energy with respect to volume. 19.5. CustomExternalForce¶ CustomExternalForce represents a force that is applied independently to each particle as a function of its position. That is, the energy of each particle is given by where f(x, y, z) is a user defined mathematical expression. In addition to depending on the particle’s (x, y, z) coordinates, the energy may also depend on an arbitrary set of user defined parameters. Parameters may be specified in two ways: - Global parameters have a single, fixed value. - Per-particle parameters are defined by specifying a value for each particle. 19.6. CustomCompoundBondForce¶ CustomCompoundBondForce supports a wide variety of bonded interactions. It defines a “bond” as a single energy term that depends on the positions of a fixed set of particles. The number of particles involved in a bond, and how the energy depends on their positions, is configurable. It may depend on the positions of individual particles, the distances between pairs of particles, the angles formed by sets of three particles, and the dihedral angles formed by sets of four particles. That is, the interaction energy of each bond is given by where f(...) is a user defined mathematical expression. It may depend on an arbitrary set of positions {\(x_i\)}, distances {\(r_i\)}, angles {\(\theta_i\)}, and dihedral angles {\(\phi_i\)} guaranteed to be in the range [-π, π]. Each distance, angle, or dihedral is defined by specifying a sequence of particles chosen from among the particles that make up the bond.-bond parameters are defined by specifying a value for each bond. 19.7. CustomCentroidBondForce¶ CustomCentroidBondForce is very similar to CustomCompoundBondForce, but instead of creating bonds between individual particles, the bonds are between the centers of groups of particles. This is useful for purposes such as restraining the distance between two molecules or pinning the center of mass of a single molecule. The first step in computing this force is to calculate the center position of each defined group of particles. This is calculated as a weighted average of the positions of all the particles in the group, with the weights being user defined. The computation then proceeds exactly as with CustomCompoundBondForce, but the energy of each “bond” is now calculated based on the centers of a set of groups, rather than on the positions of individual particles. This class supports all the same function types and features as CustomCompoundBondForce. In fact, any interaction that could be implemented with CustomCompoundBondForce can also be implemented with this class, simply by defining each group to contain only a single atom. 19.8. CustomManyParticleForce¶ CustomManyParticleForce is similar to CustomNonbondedForce in that it represents a custom nonbonded interaction between particles, but it allows the interaction to depend on more than two particles. This allows it to represent a wide range of non-pairwise interactions. It is defined by specifying the number of particles \(N\) involved in the interaction and how the energy depends on their positions. More specifically, it takes a user specified energy function that may depend on an arbitrary set of positions {\(x_i\)}, distances {\(r_i\)}, angles {\(\theta_i\)}, and dihedral angles {\(\phi_i\)} from a particular set of \(N\) particles. Each distance, angle, or dihedral is defined by specifying a sequence of particles chosen from among the particles in the set.-particle parameters are defined by specifying a value for each particle. The energy function is evaluated one or more times for every unique set of \(N\) particles in the system. The exact number of times depends on the permutation mode. A set of \(N\) particles has \(N!\) possible permutations. In SinglePermutation mode, the function is evaluated for a single arbitrarily chosen one of those permutations. In UniqueCentralParticle mode, the function is evaluated for \(N\) of those permutations, once with each particle as the “central particle”. The number of times the energy function is evaluated can be further restricted by specifying type filters. Each particle may have a “type” assigned to it, and then each of the \(N\) particles involved in an interaction may be restricted to only a specified set of types. This provides a great deal of flexibility in controlling which particles interact with each other. 19.9. CustomGBForce¶ CustomGBForce implements complex, multiple stage nonbonded interactions between particles. It is designed primarily for implementing Generalized Born implicit solvation models, although it is not strictly limited to that purpose. The interaction is specified as a series of computations, each defined by an arbitrary algebraic expression. These computations consist of some number of per-particle computed values, followed by one or more energy terms. A computed value is a scalar value that is computed for each particle in the system. It may depend on an arbitrary set of global and per-particle parameters, and well as on other computed values that have been calculated before it. Once all computed values have been calculated, the energy terms and their derivatives are evaluated to determine the system energy and particle forces. The energy terms may depend on global parameters, per-particle parameters, and per-particle computed values. Computed values can be calculated in two different ways: - Single particle values are calculated by evaluating a user defined expression for each particle: where f(...) may depend only on properties of particle i (its coordinates and parameters, as well as other computed values that have already been calculated). - Particle pair values are calculated as a sum over pairs of particles: where the sum is over all other particles in the System, and f(r, ...) is a function of the distance r between particles i and j, as well as their parameters and computed values. Energy terms may similarly be calculated per-particle or per-particle-pair. - Single particle energy terms are calculated by evaluating a user defined expression for each particle: where f(...) may depend only on properties of that particle (its coordinates, parameters, and computed values). - Particle pair energy terms are calculated by evaluating a user defined expression once for every pair of particles in the System: where the sum is over all particle pairs i < j, and f(r, ...) is a function of the distance r between particles i and j, as well as their parameters and computed values. Note that energy terms are assumed to be symmetric with respect to the two interacting particles, and therefore are evaluated only once per pair. In contrast, expressions for computed values need not be symmetric and therefore are calculated twice for each pair: once when calculating the value for the first particle, and again when calculating the value for the second particle. Be aware that, although this class is extremely general in the computations it can define, particular Platforms may only support more restricted types of computations. In particular, all currently existing Platforms require that the first computed value must be a particle pair computation, and all computed values after the first must be single particle computations. This is sufficient for most Generalized Born models, but might not permit some other types of calculations to be implemented. 19.10. CustomHbondForce¶ CustomHbondForce supports a wide variety of energy functions used to represent hydrogen bonding. It computes interactions between “donor” particle groups and “acceptor” particle groups, where each group may include up to three particles. Typically a donor group consists of a hydrogen atom and the atoms it is bonded to, and an acceptor group consists of a negatively charged atom and the atoms it is bonded to. The interaction energy between each donor group and each acceptor group is given by where f(...) is a user defined mathematical expression. It may depend on an arbitrary set of distances {\(r_i\)}, angles {\(\theta_i\)}, and dihedral angles {\(\phi_i\)}. Each distance, angle, or dihedral is defined by specifying a sequence of particles chosen from the interacting donor and acceptor groups (up to six atoms to choose from, since each group may contain up to three atoms). distances, angles, and dihedrals, the energy may also depend on an arbitrary set of user defined parameters. Parameters may be specified in three ways: - Global parameters have a single, fixed value. - Per-donor parameters are defined by specifying a value for each donor group. - Per-acceptor parameters are defined by specifying a value for each acceptor group. 19.11. CustomCVForce¶ CustomCVForce computes an energy as a function of “collective variables”. A collective variable may be any scalar valued function of the particle positions and other parameters. Each one is defined by a Force object, so any function that can be defined via any force class (either standard or custom) can be used as a collective variable. The energy is then computed as where f(...) is a user supplied mathematical expression of the collective variables. It also may depend on user defined global parameters. 19.12. Writing Custom Expressions¶ The custom forces described in this chapter involve user defined algebraic expressions. These expressions are specified as character strings, and may involve a variety of standard operators and mathematical functions. The following operators are supported: + (add), - (subtract), * (multiply), / (divide), and ^ (power). Parentheses “(“ and “)” may be used for grouping. The following standard functions are supported: sqrt, exp, log, sin, cos, sec, csc, tan, cot, asin, acos, atan, sinh, cosh, tanh, erf, erfc, min, max, abs, floor, ceil, step, delta, select. step(x) = 0 if x < 0, 1 otherwise. delta(x) = 1 if x is 0, 0 otherwise. select(x,y,z) = z if x = 0, y otherwise. Some custom forces allow additional functions to be defined from tabulated values. Numbers may be given in either decimal or exponential form. All of the following are valid numbers: 5, -3.1, 1e6, and 3.12e-2. The variables that may appear in expressions are specified in the API documentation for each force class. In addition, an expression may be followed by definitions for intermediate values that appear in the expression. A semicolon “;” is used as a delimiter between value definitions. For example, the expression a^2+a*b+b^2; a=a1+a2; b=b1+b2 is exactly equivalent to (a1+a2)^2+(a1+a2)*(b1+b2)+(b1+b2)^2 The definition of an intermediate value may itself involve other intermediate values. All uses of a value must appear before that value’s definition. 19.13. Setting Parameters¶ Most custom forces have two types of parameters you can define. The simplest type are global parameters, which represent a single number. The value is stored in the Context, and can be changed at any time by calling setParameter() on it. Global parameters are designed to be very inexpensive to change. Even if you set a new value for a global parameter on every time step, the overhead will usually be quite small. There can be exceptions to this rule, however. For example, if a CustomNonbondedForce uses a long range correction, changing a global parameter may require the correction coefficient to be recalculated, which is expensive. The other type of parameter is ones that record many values, one for each element of the force, such as per-particle or per-bond parameters. These values are stored directly in the force object itself, and hence are part of the system definition. When a Context is created, the values are copied over to it, and thereafter the two are disconnected. Modifying the force will have no effect on any Context that already exists. Some forces do provide a way to modify these parameters via an updateParametersInContext() method. These methods tend to be somewhat expensive, so it is best not to call them too often. On the other hand, they are still much less expensive than calling reinitialize() on the Context, which is the other way of updating the system definition for a running simulation. 19.14. Parameter Derivatives¶ Many custom forces have the ability to compute derivatives of the potential energy with respect to global parameters. To use this feature, first define a global parameter that the energy depends on. Then instruct the custom force to compute the derivative with respect to that parameter by calling addEnergyParameterDerivative() on it. Whenever forces and energies are computed, the specified derivative will then also be computed at the same time. You can query it by calling getState() on a Context, just as you would query forces or energies. An important application of this feature is to use it in combination with a CustomIntegrator (described in section 20.6). The derivative can appear directly in expressions that define the integration algorithm. This can be used to implement algorithms such as lambda-dynamics, where a global parameter is integrated as a dynamic variable. 20. Integrators¶ 20.1. VerletIntegrator¶ VerletIntegrator implements the leap-frog Verlet integration method. The positions and velocities stored in the context are offset from each other by half a time step. In each step, they are updated as follows: where \(\mathbf{v}_i\) is the velocity of particle i, \(\mathbf{r}_i\) is its position, \(\mathbf{f}_i\) is the force acting on it, \(m_i\) is its mass, and \(\Delta t\) is the time step. Because the positions are always half a time step later than the velocities, care must be used when calculating the energy of the system. In particular, the potential energy and kinetic energy in a State correspond to different times, and you cannot simply add them to get the total energy of the system. Instead, it is better to retrieve States after two successive time steps, calculate the on-step velocities as then use those velocities to calculate the kinetic energy at time t. 20.2. LangevinIntegator¶ LangevinIntegator simulates a system in contact with a heat bath by integrating the Langevin equation of motion: where \(\mathbf{v}_i\) is the velocity of particle i, \(\mathbf{f}_i\) is the force acting on it, \(m_i\) is its mass, \(\gamma\) is the friction coefficient, and \(\mathbf{R}_i\) is an uncorrelated random force whose components are chosen from a normal distribution with mean zero and variance \(2m_i \gamma k_B T\), where T is the temperature of the heat bath. The integration is done using a leap-frog method similar to VerletIntegrator. [49] The same comments about the offset between positions and velocities apply to this integrator as to that one. 20.3. BrownianIntegrator¶ BrownianIntegrator simulates a system in contact with a heat bath by integrating the Brownian equation of motion: where \(\mathbf{r}_i\) is the position of particle i, \(\mathbf{f}_i\) is the force acting on it, \(\gamma\) is the friction coefficient, and \(\mathbf{R}_i\) is an uncorrelated random force whose components are chosen from a normal distribution with mean zero and variance \(2 k_B T/m_i \gamma\), where T is the temperature of the heat bath. The Brownian equation of motion is derived from the Langevin equation of motion in the limit of large \(\gamma\). In that case, the velocity of a particle is determined entirely by the instantaneous force acting on it, and kinetic energy ceases to have much meaning, since it disappears as soon as the applied force is removed. 20.4. VariableVerletIntegrator¶ This is very similar to VerletIntegrator, but instead of using the same step size for every time step, it continuously adjusts the step size to keep the integration error below a user-specified tolerance. It compares the positions generated by Verlet integration with those that would be generated by an explicit Euler integrator, and takes the difference between them as an estimate of the integration error: where \(\mathbf{f}_i\) is the force acting on particle i and \(m_i\) is its mass. (In practice, the error made by the Euler integrator is usually larger than that made by the Verlet integrator, so this tends to overestimate the true error. Even so, it can provide a useful mechanism for step size control.) It then selects the value of \(\Delta t\) that makes the error exactly equal the specified error tolerance: where \(\delta\) is the error tolerance. This is the largest step that may be taken consistent with the user-specified accuracy requirement. (Note that the integrator may sometimes choose to use a smaller value for \(\Delta t\) than given above. For example, it might restrict how much the step size can grow from one step to the next, or keep the step size constant rather than increasing it by a very small amount. This behavior is not specified and may vary between Platforms. It is required, however, that \(\Delta t\) never be larger than the value given above.) A variable time step integrator is generally superior to a fixed time step one in both stability and efficiency. It can take larger steps on average, but will automatically reduce the step size to preserve accuracy and avoid instability when unusually large forces occur. Conversely, when each uses the same step size on average, the variable time step one will usually be more accurate since the time steps are concentrated in the most difficult areas of the trajectory. Unlike a fixed step size Verlet integrator, variable step size Verlet is not symplectic. This means that for a given average step size, it will not conserve energy as precisely over long time periods, even though each local region of the trajectory is more accurate. For this reason, it is most appropriate when precise energy conservation is not important, such as when simulating a system at constant temperature. For constant energy simulations that must maintain the energy accurately over long time periods, the fixed step size Verlet may be more appropriate. 20.5. VariableLangevinIntegrator¶ This is similar to LangevinIntegrator, but it continuously adjusts the step size using the same method as VariableVerletIntegrator. It is usually preferred over the fixed step size Langevin integrator for the reasons given above. Furthermore, because Langevin dynamics involves a random force, it can never be symplectic and therefore the fixed step size Verlet integrator’s advantages do not apply to the Langevin integrator. 20.6. CustomIntegrator¶ CustomIntegrator is a very flexible class that can be used to implement a wide range of integration methods. This includes both deterministic and stochastic integrators; Metropolized integrators; multiple time step integrators; and algorithms that must integrate additional quantities along with the particle positions and momenta. The algorithm is specified as a series of computations that are executed in order to perform a single time step. Each computation computes the value (or values) of a variable. There are two types of variables: global variables have a single value, while per-DOF variables have a separate value for every degree of freedom (that is, every x, y, or z component of a particle). CustomIntegrator defines lots of variables you can compute and/or use in computing other variables. Some examples include the step size (global), the particle positions (per-DOF), and the force acting on each particle (per-DOF). In addition, you can define as many variables as you want for your own use. The actual computations are defined by mathematical expressions as described in section 19.12. Several types of computations are supported: - Global: the expression is evaluated once, and the result is stored into a global variable. - Per-DOF: the expression is evaluated once for every degree of freedom, and the results are stored into a per-DOF variable. - Sum: the expression is evaluated once for every degree of freedom. The results for all degrees of freedom are added together, and the sum is stored into a global variable. There also are other, more specialized types of computations that do not involve mathematical expressions. For example, there are computations that apply distance constraints, modifying the particle positions or velocities accordingly. CustomIntegrator is a very powerful tool, and this description only gives a vague idea of the scope of its capabilities. For full details and examples, consult the API documentation. 21. Other Features¶ 21.1. Periodic Boundary Conditions¶ Many Force objects support periodic boundary conditions. They act as if space were tiled with infinitely repeating copies of the system, then compute the forces acting on a single copy based on the infinite periodic copies. In most (but not all) cases, they apply a cutoff so that each particle only interacts with a single copy of each other particle. OpenMM supports triclinic periodic boxes. This means the periodicity is defined by three vectors, \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\). Given a particle position, the infinite periodic copies of that particle are generated by adding vectors of the form \(i \mathbf{a}+j \mathbf{b}+k \mathbf{c}\), where \(i\), \(j\), and \(k\) are arbitrary integers. The periodic box vectors must be chosen to satisfy certain requirements. Roughly speaking, \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\) need to “mostly” correspond to the x, y, and z axes. They must have the form It is always possible to put the box vectors into this form by rotating the system until \(\mathbf{a}\) is parallel to x and \(\mathbf{b}\) lies in the xy plane. Furthermore, they must obey the following constraints: This effectively requires the box vectors to be specified in a particular reduced form. By forming combinations of box vectors (a process known as “lattice reduction”), it is always possible to put them in this form without changing the periodic system they represent. These requirements have an important consequence: the periodic unit cell can always be treated as an axis-aligned rectangular box of size \((a_x, b_y, c_z)\). The remaining non-zero elements of the box vectors cause the repeating copies of the system to be staggered relative to each other, but they do not affect the shape or size of each copy. The volume of the unit cell is simply given by \(a_x b_y c_z\). 21.2. LocalEnergyMinimizer¶ This provides an implementation of the L-BFGS optimization algorithm. [50] Given a Context specifying initial particle positions, it searches for a nearby set of positions that represent a local minimum of the potential energy. Distance constraints are enforced during minimization by adding a harmonic restraining force to the potential function. The strength of the restraining force is steadily increased until the minimum energy configuration satisfies all constraints to within the tolerance specified by the Context’s Integrator. 21.3. XMLSerializer¶ This provides the ability to “serialize” a System, Force, Integrator, or State object to a portable XML format, then reconstruct it again later. When serializing a System, the XML data contains a complete copy of the entire system definition, including all Forces that have been added to it. Here are some examples of uses for this class: - A model building utility could generate a System in memory, then serialize it to a file on disk. Other programs that perform simulation or analysis could then reconstruct the model by simply loading the XML file. - When running simulations on a cluster, all model construction could be done on a single node. The Systems and Integrators could then be encoded as XML, allowing them to be easily transmitted to other nodes. XMLSerializer is a templatized class that, in principle, can be used to serialize any type of object. At present, however, only System, Force, Integrator, and State are supported. 21.4. Force Groups¶ It is possible to split the Force objects in a System into groups. Those groups can then be evaluated independently of each other. Some Force classes also provide finer grained control over grouping. For example, NonbondedForce allows direct space computations to be in one group and reciprocal space computations in a different group. The most important use of force groups is for implementing multiple time step algorithms with CustomIntegrator. For example, you might evaluate the slowly changing nonbonded interactions less frequently than the quickly changing bonded ones. It also is useful if you want the ability to query a subset of the forces acting on the system. 21.5. Virtual Sites¶ A virtual site is a particle whose position is computed directly from the positions of other particles, not by integrating the equations of motion. An important example is the “extra sites” present in 4 and 5 site water models. These particles are massless, and therefore cannot be integrated. Instead, their positions are computed from the positions of the massive particles in the water molecule. Virtual sites are specified by creating a VirtualSite object, then telling the System to use it for a particular particle. The VirtualSite defines the rules for computing its position. It is an abstract class with subclasses for specific types of rules. They are: - TwoParticleAverageSite: The virtual site location is computed as a weighted average of the positions of two particles: - ThreeParticleAverageSite: The virtual site location is computed as a weighted average of the positions of three particles: - OutOfPlaneSite: The virtual site location is computed as a weighted average of the positions of three particles and the cross product of their relative displacements: where \(\mathbf{r}_{12} = \mathbf{r}_{2}-\mathbf{r}_{1}\) and \(\mathbf{r}_{13} = \mathbf{r}_{3}-\mathbf{r}_{1}\). This allows the virtual site to be located outside the plane of the three particles. - LocalCoordinatesSite: The locations of several other particles are used to compute a local coordinate system, and the virtual site is placed at a fixed location in that coordinate system. The number of particles used to define the coordinate system is user defined. The origin of the coordinate system and the directions of its x and y axes are each specified as a weighted sum of the locations of the other particles: These vectors are then used to construct a set of orthonormal coordinate axes as follows: Finally, the position of the virtual site is set to 21.6. Random Numbers with Stochastic Integrators and Forces¶ OpenMM includes many stochastic integrators and forces that make extensive use of random numbers. It is impossible to generate truly random numbers on a computer like you would with a dice roll or coin flip in real life—instead programs rely on pseudo-random number generators (PRNGs) that take some sort of initial “seed” value and steps through a sequence of seemingly random numbers. The exact implementation of the PRNGs is not important (in fact, each platform may have its own PRNG whose performance is optimized for that hardware). What is important, however, is that the PRNG must generate a uniform distribution of random numbers between 0 and 1. Random numbers drawn from this distribution can be manipulated to yield random integers in a desired range or even a random number from a different type of probability distribution function (e.g., a normal distribution). What this means is that the random numbers used by integrators and forces within OpenMM cannot have any discernible pattern to them. Patterns can be induced in PRNGs in two principal ways: - The PRNG uses a bad algorithm with a short period. - Two PRNGs are started using the same seed All PRNG algorithms in common use are periodic—that is their sequence of random numbers repeats after a given period, defined by the number of “unique” random numbers in the repeating pattern. As long as this period is longer than the total number of random numbers your application requires (preferably by orders of magnitude), the first problem described above is avoided. All PRNGs employed by OpenMM have periods far longer than any current simulation can cycle through. Point two is far more common in biomolecular simulation, and can result in very strange artifacts that may be difficult to detect. For example, with Langevin dynamics, two simulations that use the same sequence of random numbers appear to synchronize in their global movements.[51][52] It is therefore very important that the stochastic forces and integrators in OpenMM generate unique sequences of pseudo-random numbers not only within a single simulation, but between two different simulations of the same system as well (including any restarts of previous simulations). Every stochastic force and integrator that does (or could) make use of random numbers has two instance methods attached to it: getRandomNumberSeed() and setRandomNumberSeed(int seed)(). If you set a unique random seed for two different simulations (or different forces/integrators if applicable), OpenMM guarantees that the generated sequences of random numbers will be different (by contrast, no guarantee is made that the same seed will result in identical random number sequences). Since breaking simulations up into pieces and/or running multiple replicates of a system to obtain more complete statistics is common practice, a new strategy has been employed for OpenMM versions 6.3 and later with the aim of trying to ensure that each simulation will be started with a unique random seed. A random seed value of 0 (the default) will cause a unique random seed to be generated when a new Context is instantiated. Prior to the introduction of this feature, deserializing a serialized System XML file would result in each stochastic force or integrator being assigned the same random seed as the original instance that was serialized. If you use a System XML file generated by a version of OpenMM older than 6.3 to start a new simulation, you should manually set the random number seed of each stochastic force or integrator to 0 (or another unique value).
http://docs.openmm.org/latest/userguide/theory.html
2018-07-16T00:32:32
CC-MAIN-2018-30
1531676589029.26
[]
docs.openmm.org
This section lists porting guides that can help you in updating playbooks, plugins and other parts of your Ansible infrastructure from one version of Ansible to the next. Please note that this is not a complete list. If you believe any extra information would be useful in these pages, you can edit by clicking Edit on GitHub on the top right, or raising an issue.
https://docs.ansible.com/ansible/devel/porting_guides/porting_guides.html
2018-07-16T01:05:09
CC-MAIN-2018-30
1531676589029.26
[]
docs.ansible.com
Tags are strings that identify items such as machines, applications, Delivery Groups, and policies. After creating a tag and then adding it to an item, you can tailor certain operations to apply only to items that have a specified tag: For example, if you want to display only applications that have been optimized for testers, add a tag named “test” to those applications. Then, filter the Studio search with the tag “test”. Similarly, if you want to display only Delivery Groups that contain test team members, add a “tester” tag to Delivery Groups that contain test team members and then filter the Studio search with the “tester”. Select a machine, application, or Delivery Group in Studio and then select Manage Tags in the Actions pane. The Manage Tags dialog box lists all the tags that have been created in the Site, not just for the item you selected. The following actions are available from the Manage Tags dialog box:
https://docs.citrix.com/ko-kr/xenapp-and-xendesktop/7-11/manage-deployment/tags.html
2018-07-16T01:07:16
CC-MAIN-2018-30
1531676589029.26
[]
docs.citrix.com
Price List Table This table or field is used to integrate data in the Price Level entity in Microsoft Dynamics CRM with customers in Microsoft Dynamics NAV. The table is based on the Price Level entity and the fields correspond to specific fields in the entity. The data in this table or field is populated at runtime and is not stored in the Microsoft Dynamics NAV database. List of Fields in the Table See Also Other Resources Integrating with Microsoft Dynamics CRM How to: View Microsoft Dynamics CRM Account Information for Customers
https://docs.microsoft.com/en-us/previous-versions/dynamicsnav-2016/mt299716(v=nav.90)
2018-07-16T01:22:47
CC-MAIN-2018-30
1531676589029.26
[]
docs.microsoft.com
Richard Cacciato selected as iMediaconnection blogger for Ad:Tech NY 2010 New York, NY - For the sixth year in a row, Richard Cacciato has been selected as a blogger for Ad:Tech NY 2010, to be held November 3-5 2010. This year's blog is sponsored by iMedia Connection, and the blogs can be read online at
http://docs.erbtest.org/news/view.php?id=68
2018-07-16T00:39:59
CC-MAIN-2018-30
1531676589029.26
[]
docs.erbtest.org
apollo-link-context Easily set a context on your operation, which is used by other links further down the chain. The setContext function takes a function that returns either an object or a promise that returns an object to set the new context of a request. It receives two arguments: the GraphQL request being executed, and the previous context. This link makes it easy to perform async look up of things like authentication tokens and more! import { setContext } from "apollo-link-context"; const setAuthorizationLink = setContext((request, previousContext) => ({ headers: {authorization: "1234"} })); const asyncAuthLink = setContext( request => new Promise((success, fail) => { // do some async lookup here setTimeout(() => { success({ token: "async found token" }); }, 10); }) ); Caching lookups Typically async actions can be expensive and may not need to be called for every request, especially when a lot of request are happening at once. You can setup your own caching and invalidation outside of the link to make it faster but still flexible! Take for example a user auth token being found, cached, then removed on a 401 response: import { setContext } from "apollo-link-context"; import { onError } from "apollo-link-error"; // cached storage for the user token let token; const withToken = setContext(() => { // if you have a cached value, return it immediately if (token) return { token }; return AsyncTokenLookup().then(userToken => { token = userToken; return { token }; }); }); const resetToken = onError(({ networkError }) => { if (networkError && networkError.statusCode === 401) { // remove cached token on 401 from the server token = null; } }); const authFlowLink = withToken.concat(resetToken);
http://apollo-link-docs.netlify.com/docs/link/links/context.html
2018-07-16T00:34:42
CC-MAIN-2018-30
1531676589029.26
[]
apollo-link-docs.netlify.com
Installation¶ svSite should provide a fully functional site with minimal work. Although later on you may want to personalize the look (info), which will take time. That’s inevitable. To get svSite running, follow the steps in the appropriate section. Linux / bash¶ Installing dependencies¶ For this to work, you will need python3-dev including pip and a database ( sqlite3 is default and easy, but slow). Things will be easier and better with virtualenv or pew and git, so probably get those too. You’ll also need libjpeg-dev and the dev version of Python because of pillow. You can install them with: sudo apt-get install python3.4-dev sqlite3 git libjpeg-dev python-pip sudo apt-get install postgresql libpq-dev # for postgres, only if you want that database sudo apt-get install mysql-server mysql-client # for mysql, only if you want that database Make sure you use the python3.X-dev that matches your python version (rather than python3-dev). If there are problems, you might need these packages. Now get the code. The easiest way is with git, replacing SITENAME: git clone SITENAME Enter the directory ( cd SITENAME). Starting a virtual environment is recommended (but optional), as it keeps this project’s Python packages separate from those of other projects. If you know how to do this, just do it your way. This is just one of the convenient ways: sudo pip install -U pew pew new --python=python3 sv If you skip this step, everything will be installed system-wide, so you need to prepend sudo before any pip command. Also make sure you’re installing for Python 3. Install the necessary Python dependencies through: pip install -r dev/requires.pip pip install psycopg2 # for postgres, only if you want that database pip install mysqlclient # for mysql, only if you want that database Development¶ If you want to run tests, build the documentation or do anything other than simply running the website, you should install (otherwise skip it): pip install -r dev/requires_dev.pip # optional Database¶ We need a database. SQLite is used by default, which you could replace now or later (see local settings) for a substantial performance gain. To create the structure and an administrator, type this and follow the steps: python3 source/manage.py migrate python3 source/manage.py createsuperuser Static files¶ Then there are static files we need, which are handles by bower by default [1]. On Ubuntu, you can install bower using: sudo apt-get install nodejs npm install bower After that, install the static files and connect them: python3 source/manage.py bower install python3 source/manage.py collectstatic --noinput Starting the server¶ Then you can start the test-server. This is not done with the normal runserver command but with python3 source/manage.py runsslserver localhost.markv.nl:8443 --settings=base.settings_development We use this special command to use a secure connection, which is enforced by default. In this test mode, an unsigned certificate is used, so you might have to add a security exception. You can replace the url and port. You can stop the server with ctrl+C. Automatic tests¶ There are only few automatic tests at this time, but more might be added. You are also more than welcome to add more yourself. The tests use py.test with a few addons, which are included in dev/requires_dev.pip. If you installed those packages, you can run the tests by simply typing py.test in the root directory. It could take a while (possibly half a minute). Going live¶ Everything working and ready to launch the site for the world to see? Read Going live!
http://svsite.readthedocs.io/en/latest/install_dev/
2018-07-16T00:40:32
CC-MAIN-2018-30
1531676589029.26
[]
svsite.readthedocs.io
Frequently asked questions This article addresses frequently asked questions raised by the Azure Media Services (AMS) user community. General AMS FAQs Q: How do you stream to Apple iOS devices A: add "(format=m3u8-aapl)" path to the "/Manifest" portion of the URL to tell the streaming origin server to return back HLS content for consumption on Apple iOS native devices (for details, see (delivering content)[media-services-deliver-content-overview.md]), Q: How do you scale indexing? A: The reserved units are the same for Encoding and Indexing tasks. Follow instructions on How to Scale Encoding Reserved Units. Note that Indexer performance is not affected by Reserved Unit Type. Q: I uploaded, encoded, and published a video. What would be the reason the video does not play when I try to stream it? A: One of the most common reasons is you do not have the streaming endpoint from which you are trying to playback in the Running state. Q: Can I do compositing on a live stream? A: Compositing on live streams is currently not offered in Azure Media Services, so you would need to pre-compose on your computer. Q: Can I use Azure CDN with Live Streaming? A: Media Services supports integration with Azure CDN (for more information, see How to Manage Streaming Endpoints in a Media Services Account). You can use Live streaming with CDN. Azure Media Services provides Smooth Streaming, HLS and MPEG-DASH outputs. All these formats use HTTP for transferring data and get benefits of HTTP caching. In live streaming actual video/audio data is divided to fragments and this individual fragments get cached in CDN. Only data needs to be refreshed is the manifest data. CDN periodically refreshes manifest data. Q: Does Azure Media services support storing images? A: If you are just looking to store JPEG or PNG images, you should keep those in Azure Blob Storage. There is no benefit to putting them in your Media Services account unless you want to keep them associated with your Video or Audio Assets. Or if you might have a need to use the images as overlays in the video encoder.Media Encoder Standard supports overlaying images on top of videos, and that is what it lists JPEG and PNG as supported input formats. For more information, see Creating Overlays. Q: How can I copy assets from one Media Services account to another. A: To copy assets from one Media Services account to another using .NET, use IAsset.Copy extension method available in the Azure Media Services .NET SDK Extensions repository. For more information, see this forum thread. Q: What are the supported characters for naming files when working with AMS? A: Media Services uses the value of the IAssetFile.Name property when building URLs for the streaming content (for example, http://{AMSAccount}.origin.mediaservices.windows.net/{GUID}/{IAssetFile.Name}/streamingParameters.) For this reason, percent-encoding is not allowed. The value of the Name property cannot have any of the following percent-encoding-reserved characters: !*'();:@&=+$,/?%#[]". Also, there can only be one ‘.’ for the file name extension. Q: How to connect using REST? A: For information on how to connect to the AMS API, see Access the Azure Media Services API with Azure AD authentication. Q: How can I rotate a video during the encoding process. A: The Media Encoder Standard supports rotation by angles of 90/180/270. The default behavior is "Auto", where it tries to detect the rotation metadata in the incoming MP4/MOV file and compensate for it. Include the following Sources element to one of the json presets defined here: "Version": 1.0, "Sources": [ { "Streams": [], "Filters": { "Rotation": "90" } } ], "Codecs": [ ... Media Services learning paths Read about the Azure Media Services learning paths: Provide feedback Use the User Voice forum to provide feedback and make suggestions on how to improve Azure Media Services. You also can go directly to one of the following categories:
https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-frequently-asked-questions
2018-07-16T01:17:08
CC-MAIN-2018-30
1531676589029.26
[]
docs.microsoft.com
Issues & feature requests Filing issues with Begin If you have an issue that you have reason to believe impacts all Begin users, and not just your project, please report it here! If you are experiencing an issue that may be specific to your account or project, involves billing or sensitive information, or just aren't sure what the best venue to ask your question is, please open a ticket in our private support system. Feature requests & enhancements We love your feature requests and product feedback, please post them here! Issues with Begin documentation Did you know that this docs site is a separate, fully open source Begin app, and has its own issue tracker? If you see something wrong, please head on over to the Begin docs repo to file docs-related issues and requests. (PRs are welcome, by the way!) Edit this page on GitHub
https://docs.begin.com/en/support/issues/
2019-01-16T03:27:12
CC-MAIN-2019-04
1547583656665.34
[]
docs.begin.com
CheckBox Control Type This topic provides information about Microsoft UI Automation support for the CheckBox control type. where the UI framework/platform integrates UI Automation support for control types and control patterns. This topic contains the following sections. - Typical Tree Structure - Relevant Properties - Required Control Patterns - Required Events - DefaultAction - Related topics Typical Tree Structure The following table depicts a typical control and content view of the UI Automation tree that pertains to check box controls and describes what can be contained in each view. For more information about the UI Automation tree, see UI Automation Tree Overview. Relevant Properties The following table lists the UI Automation properties whose value or definition is especially relevant to the CheckBox control type. For more information about UI Automation properties, see Retrieving Properties from UI Automation Elements. Required Control Patterns The following table lists the UI Automation control patterns required to be supported by all check box controls. For more information on control patterns, see UI Automation Control Patterns Overview. Required Events The following table lists the UI Automation events that check box controls are required to support. For more information on events, see UI Automation Events Overview. DefaultAction The default action of the check box is to cause a radio button to become focused and toggle its current state. As mentioned previously, check boxes either present a binary (Yes/No or On/Off) decision to the user or a tertiary (On, Off, Indeterminate). If the check box is binary the default action causes the "on" state to become "off" or the "off" state to become "on". In a tertiary check box the default action cycles through the states of the check box in the same order as if the user had sent successive mouse clicks to the control. Related topics Conceptual UI Automation Control Types Overview -
https://docs.microsoft.com/en-us/windows/desktop/winauto/uiauto-supportcheckboxcontroltype
2019-05-19T09:07:03
CC-MAIN-2019-22
1558232254731.5
[]
docs.microsoft.com
Table of contents Chats Rating Availability Triggers Referral Chats This statistic shows the number of normal and missed chats by your agents at any given time. - Normal chats: These are chats served or successful chats which are contain both agent and visitor. This give you an overview of an agent’s chat load in any given time period. Normally, the higher volume of chats agents served, the more leads you will have. - Missed chats: which are started by visitors but there is no agent joining in. In some ways, those visitors were not served so they could be very upset, unsatisfied. You should find the solutions to reduce this number. You can use filtering option to see the chats for particular departments or agents in a certain period of time Rating This report shows you how happy your customers are with your service. Visitors can vote Good (Satisfied) or Bad (Not satisfied). Subiz also displays unrated conversations. You should get this number of Bad rated chats down because it implies that your customers felt uncomfortable or nothing special in your website’s customer service. For more details, you can filter to review content of bad rated chats in History. Availability In this graph, you can the online time of individual agents or particular departments in a given time period. It measures how much time did agents serve the chats. Subiz doesn’t provide away time because agents were not available to reply chats from visitors. Chat timings will effect to chat volume. As a matter of fact, the lower available time was, the less conversation agents served. For example, you can use Gooogle Analytics to figure out the particular part of the day that your website is accessed by the largest number of visitors and try expanding the chat use past that hours. Trigger Trigger report provides you how many trigger was fired sucessfully in a given time period. You can set up trigger in Settings > Trigger Referral You can earn commission by referring Subiz to your friends or customers. Referral report will let you know the number of referred accounts you’ve got.
https://docs.subiz.com/reports/
2019-05-19T08:56:56
CC-MAIN-2019-22
1558232254731.5
[array(['https://docs.subiz.com/wp-content/uploads/2015/08/13.png', '1'], dtype=object) array(['https://docs.subiz.com/wp-content/uploads/2015/08/22.png', '2'], dtype=object) ]
docs.subiz.com
Airspeed sensor Configuration: Airspeed| Nodes: cas xhawk Dynamic pressure sensor estimates airspeed from pressure readings. - pt - this setting enables the driver - off - the pressure sensor is powered off and there’s no data output - airspeed - the sensor provides airspeed fixes for autopilot, the output is sent through variables airspeed. - buoyancy - the sensor provides filtered pressure measurements and the data is sent to variable buoyancy[Pa]. this option is used mainly for Blimps. - prio - redundancy priority for sensor. - primary - data will be used when available. - secondary - data will be used when primary is unavailable. - low - level3 priority, used when primary and secondary is unavailable. - safety - used when no other sources are available. - kalman - filter settings for data (Q=0.05, R=30) - calib - calibration mode - auto - the zero bias will be estimated on power on - fixed - the zero bias is set up in biassetting - bias - the bias value, used for fixed calibration mode. The use of fixed calibration mode is strongly recommended. To perform the calibration, follow these steps: - change the calibsetting to auto - reboot device - execute node command Recalibrate differential sensor - reload nodes configuration values on GCU to refresh data - change the calibsetting to fixed * Pre-flight check should always include Airspeed sensor test.
https://docs.uavos.com/fw/conf/airspeed.html
2019-05-19T08:39:25
CC-MAIN-2019-22
1558232254731.5
[]
docs.uavos.com
Pitot heater Configuration: Heater| Nodes: cas This feature provides control of pitot probe heating. - heater - heater switch control type: - off - the internal switch is turned off - auto - turn on switch if the MSL altitude is higher than altitudesetting - force - switch is always turned on - temp_out - variable for pitot temperature output (for debugging and monitoring on GCU) - mult - multiplier for temperature (0= no multiplier) to adjust/recalibrate temperatre sensor - altitude - when mode is autothis sets the absolute MSL altitude [m] where to turn the swith on, the altitude is taken from altpsvariable - temp - target temperature to hold [C], usually set to 150C - ocp - overcurrent protection (shortcircuit), usually set to 10 [A], will report shortcircuit condition to the console - uvp - undervoltage protection (battery discharge), will turn off the switch if the voltage is below this value, common value is 7 [V] - pwm - when this value is set to non-zero, the switch will operate in PWM mode with ~50Hz frequency (used to decrease output power)
https://docs.uavos.com/fw/conf/heater.html
2019-05-19T08:39:51
CC-MAIN-2019-22
1558232254731.5
[]
docs.uavos.com
API Joins¶ An API "get" action will typically return only the values of the entity requested. However, there are times when it is advantageous to returned data from a related entity. For instance, when querying the API for email addresses, you may want to return the name of the associated contact from the Contact entity. The CiviCRM API supports two methods of returning data from associated entities; API Joins and API Chaining. API joins provide higher performance by making a single SQL query with a SQL join, and are generally preferable to API chaining where available. Using an API Join¶ To use a join in an API call, specify the name of the field on which the join happens, a period, and the name of the field to reference. For instance, to search for all primary emails, returning the email and joining to also return the contact's display name: $result = civicrm_api3('Email', 'get', array( 'return' => array("email", "contact_id.display_name"), 'is_primary' => 1, )); Alternatively, to return email addresses of everyone whose last name is Smith by joining to the Contact entity: $result = civicrm_api3('Email', 'get', array( 'contact_id.last_name' => "Smith", )); Identifying fields eligible for a join¶ It is possible to join an entity to any other entity if the xml schema identifies a foreign key or a pseudoconstant. The getfields action identifies fields that are eligible for an API join. Warning For historical reasons, some entities have a non-standard API in APIv3 in order to handle more complicated operations. Those entities - 'Contact', 'Contribution', 'Pledge', and 'Participant' - can be joined to from another table, but you can not join to other tables from them. This limitation will be removed in APIv4.
https://docs.civicrm.org/dev/en/latest/api/joins/
2018-07-15T21:12:30
CC-MAIN-2018-30
1531676588972.37
[]
docs.civicrm.org
NOTICE: Our WHMCS Addons are discontinued and not supported anymore. Most of the addons are now released for free in github - You can download them from GitHub and conribute your changes ! :) Monitoring Module :: Licensing Our monitoring module will register your license once you activate it. The license will register your server’s ip, domain and working directory. If you move your WHMCS to another server, you will need to reissue the license through our client’s area. Login to our clients area, from the top menu navigate to “Services” -> “My Services”, and choose our monitoring module (click the “View Details” button). Under “Management Actions”, click on “Reissue License”. Now all you need to do is to access the monitoring module, and the license will be reissued using the new server details.
https://docs.jetapps.com/monitoring-module-licensing
2018-07-15T21:07:04
CC-MAIN-2018-30
1531676588972.37
[array(['https://docs.jetapps.com/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) ]
docs.jetapps.com
WordPress Left menu > JS Car Manager > Credits pack. Admin control panel > Credits Pack. Admin Left Menu > Credits > Credits Pack. This is credits pack listing page. It has all the credit packs that admin has defined. This portion has a back link that takes to control panel, page title and add new credits pack button . This portion is filter for credits log. Admin can filter records on the basis of credits range or price range. When admin uses filter only records that fulfill his provided criteria are shown on the page. Reset button will disable filter criteria and show all records. This portion represents an individual credits pack, it has number of credits, date created, cost (amount and currency). Pagination.
http://docs.joomsky.com/jscarmanager/creditspack/admin/creditpacklisting.html
2018-07-15T20:37:12
CC-MAIN-2018-30
1531676588972.37
[]
docs.joomsky.com
Lodgix.com Manuals & Tutorials Topics - Introduction 3 - My Account 6 - Initial Setup 4 - Property / Room Setup 12 - 5 - Important Settings 7 - Availability and Booking Calendars 9 - 4 - Internal Calendar Tape 4 - Document Templates 2 - Trigger Setup 14 - 3 - Guest Control Panel 2 - Work Orders 2 - Inquiry Management & Automation 14 - 3 - Payment Gateway / PayPal Setup 12 - 3 - Guest Invoices 9 - 3 -
http://docs.lodgix.com/m/5502/c/248455
2018-07-15T21:02:47
CC-MAIN-2018-30
1531676588972.37
[]
docs.lodgix.com
Before installing DC/OS, your cluster must have the software and hardware requirements. Create an IP detection script a directory named genconfon your bootstrap node and navigate to it. mkdir -p genconf detection) Create a configuration fileCreate a configuration file In this step you create a YAML configuration file that is customized for your environment. DC/OS uses this configuration file during installation to generate your cluster installation files. In these image from this script and loading into docker daemon, this step a configuration file and save as genconf/config.yaml. You can use this template to get started. The template specifies three Mesos masters, static master discovery list, internal storage backend for Exhibitor, a custom proxy, security mode specified, and Google DNS resolvers. If your servers are installed with a domain name in your /etc/resolv.conf, add the dns_searchparameter.>. bootstrap_url: http://<bootstrap_ip>:80 cluster_name: '<cluster-name>' customer_key: <customer-key> exhibitor_storage_backend: static master_discovery: static ip_detect_public_filename: <path-to-ip-script> master_list: - <master-private-ip-1> - <master-private-ip-2> - <master-private-ip-3> resolvers: - 8.8.4.4 - 8.8.8.8 # Choose your security mode: permissive, strict, or disabled security: <security-mode> superuser_password_hash: <hashed-password> superuser_username: <username> use_proxy: 'true' http_proxy: http://<user>:<pass>@<proxy_host>:<http_proxy_port> https_proxy: https://<user>:<pass>@<proxy_host>:<https_proxy_port> no_proxy: - 'foo.bar.com' - '.baz.com' Install. To install DC/OS:.ee.sh --helpflag. Tip: For the install script to work, you must have created genconf/config.yamland Tip: If you encounter errors such as Time is marked as bad, adjtimex, or Time not in syncin journald, verify that Network Time Protocol (NTP) is enabled on all nodes. For more information, see the system requirements. Monitor Exhibitor and wait for it to converge at http://<master-ip>:8181/exhibitor/v1/ui/index.html. Tip: This process can take about 10 minutes. During this time you will see the Master nodes become visible on the Exhibitor consoles and come online, eventually showing a green light. When the status icons are green, you can access the DC/OS web interface. Launch the DC/OS web interface at: http://<master-node-public-ip>/. Important: After clicking Log In To DC/OS, your browser may show a warning that your connection is not secure. This is because DC/OS uses self-signed certificates. You can ignore this error and click to proceed. Enter your administrator username and password. You are done! Next StepsNext Steps Now you can assign user roles.
https://docs.mesosphere.com/1.9/installing/ent/custom/advanced/
2018-07-15T21:11:10
CC-MAIN-2018-30
1531676588972.37
[]
docs.mesosphere.com
By default, you can copy and paste text from your client system to a remote desktop or application. If your administrator enables the feature, you can also copy and paste text from a remote desktop or application to your client system or between two remote desktops or applications. Some restrictions apply. If you use the VMware Blast display protocol or the PCoIP display protocol, your View administrator can set this feature so that copy and paste operations are allowed only from your client system to a remote desktop, or only from a remote desktop to your client system, or both, or neither. If you are using a Horizon 6.0 with View remote application, the same rules apply. Administrators configure the ability to copy and paste by using group policy objects (GPOs) that pertain to the agent in remote desktops or applications. In Horizon 7.0 and later, administrators can also use Smart Policies to control copy and paste behavior in remote desktops. For more information, see the Setting Up Desktop and Application Pools in View document. GPO information is in the topic about View PCoIP general session variables, which includes the setting called Configure clipboard redirection. For Smart Policies information, see the topic about Horizon Policy settings, which includes the setting Clipboard. Supported file formats include text, images, and RTF (Rich Text Format). The clipboard can accommodate 1MB of data for copy and paste operations. If you are copying formatted text, some of the data is text and some of the data is formatting information. For example, an 800KB document might use more than 1MB of data when it is copied because more than 200KB of RTF data might get put in the clipboard. If you copy a large amount of formatted text or text and an image, when you attempt to paste the text and image, you might see some or all of the plain text but no formatting or image. The reason is that the three types of data are sometimes stored separately. For example, depending on the type of document you are copying from, images might be stored as images or as RTF data. If the text and RTF data together use less than 1MB, the formatted text is pasted. Often the RTF data cannot be truncated, so that if the text and formatting use more than 1MB, the RTF data is discarded, and plain text is pasted. If you are unable to paste all of the formatted text and images you selected in one operation, you might need to copy and paste smaller amounts in each operation. You cannot copy and paste files between a remote desktop and the file system on your client computer.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Mac/4.0/com.vmware.horizon.mac-client-doc/GUID-64739335-3996-4DFC-881A-1A360311FB3F.html
2018-07-15T21:31:56
CC-MAIN-2018-30
1531676588972.37
[]
docs.vmware.com
Tenant Administration explains how to perform administrative tasks for vRealize Automation. It describes procedures for configuring tenants, setting up notifications, managing users, and managing the contents of the catalog. Not all features and capabilities of vRealize Automation are available in all editions. For a comparison of feature sets in each edition, see. Intended Audience This information is intended for system administrators, tenant administrators, service architects, and business group managers who need to configure and maintain the vRealize Automation environment. VMware Technical Publications Glossary VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions of terms as they are used in VMware technical documentation, go to.
https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.tenant.administration.doc/GUID-496EBFFB-0776-436C-B862-13739FC1AE90.html
2018-07-15T21:29:08
CC-MAIN-2018-30
1531676588972.37
[]
docs.vmware.com
GraphExpert Professional 1.6.0 documentation Miniprograms in GraphExpert Professional are “small” Python codes that output a dataset. In essence, a miniprogram is a dataset, except that the source of the data is a program rather than being a set of numbers in the computer’s memory. A miniprogram can be as simple or as complex as the user desires, and the full power of the Python language and add-on modules is available for use. For convenience, GraphExpert Professional provides all of the functions listed in Appendix A: Math Functions in your miniprogram’s namespace for direct use. A new miniprogram can be created directly via the Create->Miniprogram main menu choice (or the corresponding toolbar button). The following dialog will then appear: The operation of this dialog is quite straightforward. Enter your miniprogram, and click the Test Run button to test. Once the miniprogram has passed the test, click the OK button. If the code executes its test run correctly (this does not imply correctness of the code itself), an information bar will appear at the bottom to indicate this, and it will also report the dimension of the returned dataset. If the code does not compile correctly, a stack trace will appear in the information bar to help you track down the problem. Note The OK button will not enable until the program has been test run. Every miniprogram must have one evaluate function. The evaluate function is what will be called whenever GraphExpert Professional determines that the data should be updated. Rather than trying to reinvent another language for the user to express a model in, GraphExpert Professional uses Python as its base scripting language. Thus, anything that can be done in Python can be done in your miniprograms. This gives the user extreme flexibility, even to do things like download data from a web server that influences the data that the miniprogram returns. Here, your creativity is the only limit. As you enter the Python code for your function, it is syntax highlighted, and syntax errors are dynamically detected. creating a miniprogram. Just follow the examples given here and in the miniprogram. Again, every miniprogram must have an evaluate function defined, and furthermore, the evaluate function should take no (mandatory) arguments. Other items that specify metadata (currently, only name), are optional. name¶ This is an attribute to define the name of your miniprogram, which is the name that is used for the component that is created for your miniprogram, by default. The type of this attribute is a string, and should be enclosed in quotes. evaluate()¶ This is the most important part of your miniprogram; evaluate() is a function that is called whenever a dataset update is required from your miniprogram. An example of an evaluate function is the following (with comments removed for brevity): name = "My miniprogram" def evaluate(): dset = ones((10,5)) # create a dataset that contains all ones, 10 rows, 5 columns return dset The evaluate function returns anything that can be coerced into a 2D dataset. This can be a float, a list of floats, a list of list of floats, or (most commonly) a numpy array. Alternatively, a dictionary may also be returned from the miniprogram. This method is used when the user desires to return some metadata about the dataset along with the dataset itself. If a dictionary is returned, it is required to have the dset key, which should have the dataset to return as its value. Metadata currently recognized by GraphExpert Professional is the colnames key, which maps to a list of column names, and the rownames key, which maps to a list of row names. An example of this is below: name = "My miniprogram" def evaluate(): dset = ones((10,3)) colnames = ["Distance","Velocity","Acceleration"] retval = {'dset':dset,'colnames':colnames} return retval Any other Python functions in your script (denoted by the def keyword) other than evaluate are ignored. This means that you can create your own Python functions and call them from your evaluate function as you wish. All mathematical functions supported by GraphExpert Professional, such as sin, cos, tan, exp, etc., are also reserved words. See Appendix A: Math Functions for a complete list of the mathematical functions supported. Certainly, the example below is more appropriately implemented via a Function (Create->Function). However, this example is useful to show how simple a miniprogram can be, in order to get you starting using them. This example just computes the sin function between 0 and 2*pi, and returns the dataset: name = "sin function" def evaluate(): n = 101 x = linspace(0,2*pi,n) y = sin(x) dset = column_stack((x,y)) return dset The “linspace” call creates a vector from 0 to 2*pi, made up of n points; then we take the sine of every entry in that array, returning the result in y. At this point, we have two separate 1D arrays, of dimension 1x101. We would like to return the x data in the first column of a dataset, and the y data as the second column. So, we first “column stack” the two arrays to get a 101x2 2D dataset, Now, just return that dataset, we are are done. The following miniprogram calculates the Blasius solution (describes the fluid boundary layer that forms near a flat plate under 2D, laminar, and steady conditions and a uniform oncoming flow) and returns it as a dataset: import scipy import scipy.integrate name = "Blasius Solution" def blasius(y, eta): return [y[1], y[2], -0.5*y[0]*y[2]] def evaluate(): n = 201 initial_condition = [0,0,0.33206] eta = linspace(0, 10, n) y = scipy.integrate.odeint(blasius, initial_condition, eta) eta = eta.reshape((n,1)) dset = hstack((eta,y)) colnames = ["eta","y","yp","ypp"] return {'dset':dset,'colnames':colnames} In this example, you can see that the evaluate() function uses the ordinary differential equation solver available in scipy. All of the functionality in scipy is available, which includes numerical integration, optimization, and many other very useful algorithms. The following miniprogram calculates the points for an ellipse: name = "Ellipse" def evaluate(): a = 2.0 # major axis b = 1.0 # minor axis x = linspace(-a,+a,201) # create 201 points between -a and +a y1 = b*sqrt(1-x**2/a**2) # evaluate the ellipse equation dset = vstack((x,y1,-y1)).T # stack the datasets vertically, take transpose # the output dataset is 200x3. return dset The Moody diagram example (see the examples shipped with GraphExpert Professional) demonstrates using a miniprogram in order to solve the Colebrook equation in order to find a family of constant roughness curves. This particular example uses the scipy optimizer in order to do so. Please see the example for further details; you can see each miniprogram by right-clicking on the appropriate component, and choosing Edit Miniprogram.
https://docs.curveexpert.net/graphexpert/pro/html/miniprogram.html
2020-11-23T21:37:43
CC-MAIN-2020-50
1606141168074.3
[]
docs.curveexpert.net
Spark supported types Spark supported CQL types are mapped to Scala types. This table maps CQL types to Scala types. All CQL types are supported by the DataStax Enterprise Spark integration. Other type conversions might work, but cause loss of precision or not work for all values. Most types are convertible to strings. You can convert strings that conform to the CQL standard to numbers, dates, addresses or UUIDs. You can convert maps to or from sequences of key-value tuples.
https://docs.datastax.com/en/dse/6.7/dse-dev/datastax_enterprise/spark/sparkSupportedTypes.html
2020-11-23T22:13:24
CC-MAIN-2020-50
1606141168074.3
[]
docs.datastax.com
Rapid7 Universal Ingress Authentication If Rapid7 does not support the logging format of your ingress authentications, you can still send data into InsightIDR so long as you transform your logs to meet this universal event format (UEF) contract. Ingress authentications are any activity where a user account can be observed authenticating to a protected system from an IP on the public Internet. For example, when a user account uses the VPN to log in, checks their email on their mobile phone, or accesses cloud services like Google Apps, etc. InsightIDR will use this activity for incident detection (Multiple Country Authentications, Ingress from Disabled Account, etc), visualization on the Ingress Locations map and dashboards, as well as investigations in Log Search. Need help transforming your logs? Read instructions on transforming your logs in this Rapid7 blog post or on the Transform Logs to UEF help page. Required Fields Ensure that your Ingress Authentication logs contain the following fields so that you can construct a valid UEF Ingress Authentication, {"event_type":"INGRESS_AUTHENTICATION","version": "v1","time": "2018-06-07T18:18:31.1Z","account":"jdoe","account_domain":"CORP","source_ip":"10.6.102.53","authentication_result": "SUCCESS","authentication_target":"Marketing Wiki"} Each event sent to InsightIDR must not contain newline characters. Here are some examples of a Universal Ingress Authentication Event with readable formatting: 12{3"version": "v1",4"event_type": "INGRESS_AUTHENTICATION",5"time": "2018-06-07T18:18:31.123Z",6"account": "jdoe",7"account_domain": "CORP",8"source_ip": "130.26.110.4",9"authentication_result": "SUCCESS",10"authentication_target": "Marketing Wiki"11}12 Or: 1{2"version": "v1",3"event_type": "INGRESS_AUTHENTICATION",4"time": "2018-06-07T18:18:31.99Z",5"account": "jdoe",6"account_domain": "CORP",7"source_ip": "130.26.110.4",8"authentication_result": "FAILURE",9"authentication_target": "Marketing Wiki",10"custom_data": {11"arbitrary_field": "arbitrary_value",12"arbitrary_number": 12313}14}15
https://docs.rapid7.com/insightidr/rapid7-universal-ingress-authentication/
2020-11-23T22:31:16
CC-MAIN-2020-50
1606141168074.3
[]
docs.rapid7.com
class in UnityEngineマニュアルに切り替える Any Image Effect with this attribute can be rendered into the Scene view camera. If you wish your image effect to be applied to the Scene view camera add this attribute. The effect will be applied in the same position, and with the same values as those from the camera the effect is on.
https://docs.unity3d.com/ja/current/ScriptReference/ImageEffectAllowedInSceneView.html
2020-11-23T21:37:03
CC-MAIN-2020-50
1606141168074.3
[]
docs.unity3d.com
How to Get the Good Loans With Bad Credit Score Despite how will you have planned your things at some point there will be an emergency that will require a lot of money that will lend you to be cashless.. When you need to get the loan with the bad credit card, read more here on borrowing from the family or friend where you won’t be put through the credit check process. Although with borrowing from the family or friend loan reduced rates for those with the bad credit, it is best to have the written contract fully showing all the terms. When you have a bad credit you need to spread out the loan payment monthly, and it is only able when you choose to lend from the personal instalment lender according to this website..
http://docs-prints.com/2020/09/19/getting-started-next-steps/
2020-11-23T21:35:38
CC-MAIN-2020-50
1606141168074.3
[]
docs-prints.com
Remote Audit The Remote Audit is a method of WAN audit, based on the deployment of standalone audit agents to a remote network. With this method, you can regularly audit offsite computers and remote networks that have no direct connection to the local network. The Remote Audit method offers two deployment scenarios for audit agents: - Install the audit agent to every remote computer. - Deploy the Inventory Analyzer package to a centralized location in the remote network and automate the audit agent using domain logon scripts or scheduled tasks. Depending on the way how audit snapshots are delivered to Alloy Discovery, the Remote Audit method comes in two modes: FTP delivery and e-mail delivery. When using this audit method, there is no direct link between the Inventory Server and deployed audit agents; this is why any configuration changes or updated versions of the audit agents have to be manually re-deployed.
https://docs.alloysoftware.com/alloydiscovery/8/help/glossary/remote-audit.htm
2020-11-23T21:49:24
CC-MAIN-2020-50
1606141168074.3
[]
docs.alloysoftware.com
Stop Workflow is the default error-handling option for all widgets. If an error occurs, the entire workflow will stop running, and its final status will be Canceled. A notification is sent to the list of people defined in the Workflow Error Reporting settings. Remember that Workflow Conductor lets you define who gets notified when a workflow error occurs. By default, the workflow initiator gets an e-mail with the details of the error. You can also notify the workflow designer or any user, Active Directory group, or e-mail address. For more information, see the Workflow Error Handling section. See Also:
https://docs.bamboosolutions.com/document/stop_workflow/
2020-11-23T22:01:05
CC-MAIN-2020-50
1606141168074.3
[array(['/wp-content/uploads/2017/06/sa08_WF-Errored.jpg', 'sa08_WF-Errored.jpg'], dtype=object) array(['/wp-content/uploads/2017/06/SA08_2010CPGeneralSettingsErrorReporting.jpg', 'SA08_2010CPGeneralSettingsErrorReporting.jpg'], dtype=object) ]
docs.bamboosolutions.com
Database Connections For information on establishing connections with either of these connection types, see the below pages: Database Connection Property Lists Connections are defined in a connection property list that contains all necessary information for Eggplant Functional to communicate with the database. This is done differently depending on the connection type. See ODBC Administration or Working with Excel for further connection information.. Excel-Only Connection List Properties File: Required. Specify the file path for the Excel workbook that you want to refer to as a database. Example: ODBC This example shows what an ODBC connection definition might look like. For more information, see ODBC Administration. setmyDBto {type:"odbc", DSN:"DataSource1", user:"root", password:""} Example: Excel This example shows what an excel database connection definition might look like. For more information, see Working with Excel. setmyExcelDBto {type: "excel", file: "/<Path>/<MYExcelFile>.xlsx"} -- set the specified variable, myExcelDB, to store the contents of the referenced graExcel file and establish a SenseTalk connection.
http://docs.eggplantsoftware.com/ePF/SenseTalk/stk-database-connections.htm
2020-11-23T21:33:46
CC-MAIN-2020-50
1606141168074.3
[]
docs.eggplantsoftware.com
Top When creating or replying to threads in Discussion Board Plus, authors can share video (if the file format is supported) by using the From Embed Code option. Discussion Board Plus paired with Video Library gives authors three additional ways to upload and insert video in their posts: You can choose to select a video from one of four locations: Sharing videos allows users to view video thumbnails in the body of discussion threads. See a list of Bamboo’s Supported File Formats.
https://docs.bamboosolutions.com/document/insert_video_options_in_discussion_board_plus/
2020-11-23T22:15:29
CC-MAIN-2020-50
1606141168074.3
[]
docs.bamboosolutions.com
How To Query and Update Excel Data Using ADO From ASP Note Office 365 ProPlus is being renamed to Microsoft 365 Apps for enterprise. For more information about this change, read this blog post. Summary This article demonstrates how to query and update information in an Excel spreadsheet using ActiveX Data Objects (ADO) from an Active Server Pages (ASP) page. The article also describes the limitations that are associated with this type of application. Important Though ASP/ADO applications support multi-user access, an Excel spreadsheet does not. Therefore, this method of querying and updating information does not support multi-user concurrent access. More Information To access the data in your Excel spreadsheet for this sample, use the Microsoft ODBC Driver for Excel. Create a table to access the data by creating a Named Range in your Excel spreadsheet. Steps to Create Sample Application Create the Excel file ADOtest.xls with the following data in sheet1: Note If a column in your Excel spreadsheet contains both text and numbers, the Excel ODBC driver cannot correctly interpret which data type the column should be. Please make sure that all the cells in a column are of the same data type. The following three errors can occur if each cell in a column is not of the same type or you have the types mixed between "text" and "general": - Microsoft OLE DB Provider for ODBC Drivers error '80040e21' The request properties can not be supported by this ODBC Driver. - Microsoft OLE DB Provider for ODBC Drivers error '80004005' The query is not updateable because it contains no searchable columns to use as a hopeful key. - Microsoft OLE DB Provider for ODBC Drivers error '80004005' Query based update failed. The row to update could not be found. Create a Named Range, myRange1, in your spreadsheet: - Highlight the row(s) and column(s) area where your data resides. - On the Insert menu, point to Name, and click Define. - Enter the name myRange1 for the Named Range name. - Click OK. The Named Range myRange1 contains the following data: Note - ADO assumes that the first row in an Excel query contains the column headings. Therefore, the Named Range must include the column headings. This is different behavior from DAO. - Column headings cannot be a number. The Excel driver cannot interpret them and, instead, returns a cell reference. For example, a column heading of "F1" would be misinterpreted. Create an ODBC System Data Source Name (DSN) pointing to the ADOTest.xls file. - From the Control Panel, open the ODBC Administrator. - On the System DSN tab, click Add. - Select Microsoft Excel Driver (*.xls) and click Finish. If this option does not exist, you need to install the Microsoft ODBC driver for Excel from Excel setup. - Choose ADOExcel for the Data Source Name. - Make sure the Version is set to the correct version of Excel. - Click "Select Workbook...", browse to the ADOTest.xls file, and click OK. - Click the "Options>>" button and clear the "Read Only" check box. - Click OK and then click OK again. Set permissions on the ADOTest.xls file. If your Active Server Page is accessed anonymously, you need to make sure that the Anonymous Account (IUSR_ If you are authenticating access to your Active Server Page, you need to ensure that all users accessing your application have the appropriate permissions. If you do not set the appropriate permissions on the spreadsheet, you get an error message similar to the following: Microsoft OLE DB Provider for ODBC Drivers error '80004005' [Microsoft][ODBC Excel Driver] The Microsoft Jet database engine cannot open the file '(unknown)'. It is already opened exclusively by another user, or you need permission to view its data. Create a new ASP page and paste in the following code: <!-- Begin ASP Source Code --> <%@ LANGUAGE="VBSCRIPT" %> <% Set objConn = Server.CreateObject("ADODB.Connection") objConn.Open "ADOExcel" Set objRS = Server.CreateObject("ADODB.Recordset") objRS.ActiveConnection = objConn objRS.CursorType = 3 'Static cursor. objRS.LockType = 2 'Pessimistic Lock. objRS.Source = "Select * from myRange1" objRS.Open %> <br> <% Response.Write("Original Data") 'Printing out original spreadsheet headings and values. 'Note that the first recordset does not have a "value" property 'just a "name" property. This will spit out the column headings.>") 'The update is made here objRS.MoveFirst objRS.Fields(0).Value = "change" objRS.Fields(1).Value = "look" objRS.Fields(2).Value = "30" objRS.Update 'Printing out spreadsheet headings and values after update. Response.Write("<br>Data after the update")>") 'ADO Object clean up. objRS.Close Set objRS = Nothing objConn.Close Set objConn = Nothing %> <!-- End ASP Source Code --> Save and name your Active Server Page and view it in the browser. You will see the following: Original Data: |column1|column2|column3| |------------|------------|------------| |rr|this|30| |bb|test|20| |tt|works|25| Data after the update: |column1|column2|column3| |------------|------------|------------| |change|look|30| |bb|test|20| |tt|works|25| Note An update was performed on the first row of your Named Range (after the headings).
https://docs.microsoft.com/en-us/office/troubleshoot/excel/query-update-data
2020-11-23T23:10:26
CC-MAIN-2020-50
1606141168074.3
[]
docs.microsoft.com
. Low available disc space on /mnt/crawler_worker on mirrorer This can be caused when the AWS autoscaler re-spins a VM and the newly created VM does not attach the secondary EBS volume properly. In this case, the alert will say DISK CRITICAL: /mnt/crawler_worker not found. Cause An EBS volume can only be attached to one VM at a time. When the autoscaler terminates a VM, AWS disconnects the EBS volume and releases a lock, whilst at the same time also spinning up the replacement VM. The new VM can spin up and be provisioned faster than the lock can be released. Solution Manually re-attach the EBS volume tagged with mirrorer to the mirrorer instance, then run puppet on the instance. This will cause puppet to correctly mount the EBS volume in /mnt/crawler_worker. - Login to the AWS Web Console: gds aws govuk-integration-powerusers -l - Click Services -> EC2 - Change region to "Europe (Ireland) eu-west-1" - Click Instances on the left hand side menu bar and filter by "mirrorer" - Copy the Instance ID, it should look similar to i-06887a467af1266a0 - Left hand side menu, expand Elastic Block Store, and click Volumes - Filter by "mirrorer" and ensure that the state of the only result is "available" - Right click volume and choose "Attach Volume" - Paste the Instance ID in to the appropriate box, leaving Device as the default - Click Attach - After AWS finishes attaching the volume to the instance the state should change to "in-use" - Once it has settled to the "in-use" state, SSH in to the mirrorer instance and run govuk_puppet --test No disk space on the MySQL master If the MySQL master runs out of disk space, all of the apps that rely on MySQL may crash. You'll probably notice this in Icinga.. Low available disk space on /var/lib/postgresql Check which databases are occupying a lot of space and discuss with the relevant owners about reducing size or exanding the size of the postgres drive. Steps to investigate postgres db size: - SSH into postgresql-primary-1.backend - See what space is left: df -h - Open the psql console: sudo -u postgres psql postgres - List databases: \listor \l - You can choose one of the dbs by doing: \c <name-of-db>For example: \c email-alert-api_production If this continues to be a problem see if you need to resize the disk.: sudo logrotate -vf /etc/logrotate.d/<type of logs> If you have nowhere to move the logs and it is causing an incident, uptime of the server takes precedent over logfiles, so just truncate the logs to free up space
https://docs.publishing.service.gov.uk/manual/alerts/low-available-disk-space.html
2020-11-23T22:35:55
CC-MAIN-2020-50
1606141168074.3
[]
docs.publishing.service.gov.uk
About the Splunk Add-on for Cisco FireSIGHT The Splunk Add-on for Cisco FireSIGHT (formerly Splunk Add-on for Cisco Sourcefire) leverages data collected via Cisco eStreamer to allow a Splunk software administrator to analyze and correlate Cisco Next-Generation Intrusion Prevention System (NGIPS) and Cisco Next-Generation Firewall (NGFW) log data and Advanced Malware Protection (AMP) reports from Cisco FireSIGHT and Snort IDS through the Splunk Common Information Model. You can then use the mapped data with other Splunk apps, such as Splunk Enterprise Security and the Splunk App for PCI Compliance. This add-on does not include a data collection component. You can use other apps, such as eStreamer for Splunk to ingest Cisco FireSIGHT data, or you can use syslog. Download the Splunk Add-on for Cisco FireSIGHT from Splunkbase at. Discuss the Splunk Add-on for Cisco FireSIGHT on Splunk Answers at. This documentation applies to the following versions of Splunk® Supported Add-ons: released Feedback submitted, thanks!
https://docs.splunk.com/Documentation/AddOns/released/Sourcefire/Description
2020-11-23T22:48:43
CC-MAIN-2020-50
1606141168074.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Configure Keycloak to Generate Link for Required User Action Overview A new user can be created in Sunbird in the following two ways: - Self sign-up using Sunbird where user can provide email, phone and password during user creation - Bulk users creation by Organisation Admin where an initial password is not yet set The Sunbird requires users to either verify email (when user is created by self sign-up) or set password (when users are created by bulk upload) for the first time before they are able to log in to Sunbird. The verify email or set password link is sent to the newly created users via email and/or SMS. The generated link also consists of a redirect URI to which the user is redirected after completing the required action. This document explains the configuration required in Keycloak to generate links for the required action to be performed by a new user. Configure Environment Variables Following environment variables need to be configured in Sunbird LMS service for generating required action links: - sunbird_sso_url - sunbird_sso_username - sunbird_sso_password - sunbird_sso_realm - sunbird_sso_client_id - sunbird_url_shortner_enable - sunbird_url_shortner_access_token - sunbird_keycloak_required_action_link_expiration_seconds Note: For details on the environment variables, refer to Sunbird LMS Service Environment Variables. Configure Administrator Role It is mandatory to configure a user with administrator role permissions to be able to generate the required action link in Keycloak. Configure Redirect URI The redirect URI configuration is necessary to redirect user to Sunbird tenant’s specific home page after completing the required action steps
http://docs.sunbird.org/2.10.0/developer-docs/server-configurations/keycloak_admin_config_settings/
2020-11-23T22:24:05
CC-MAIN-2020-50
1606141168074.3
[]
docs.sunbird.org
How to test a Mobile App Mobile applications are growing at a rapid pace that the developers and security teams are unable to secure them. There are lots of communications going on about the securing mobile devices and clients. But the most vulnerable aspects of the mobile application back-end services is simply being ignored by the developers and security teams. These back-end services are generally RESTful APIs using JSON, XML or AMF technology. These services are similar to web applications at High level and are vulnerable to common web application vulnerabilities like SQL injection, XSS, etc. To find this vulnerabilities requires new techniques. As these services are web services or RESTful apis, It is not possible to crawl the mobile application as web applications. For security testing, it is essential to crawl and capture the traffic manually, save it as consumable format and provide it to AppSpider. To test a Mobile application, user need to setup the Android emulator in the software model. User need to setup Android SDK which is freely available on the following link: Install Hackazon application in the Emulator Hackazon application binary is available at the “\hackazon\web\app” of the downloaded package from Github. To install application on Emulator, open a command prompt of the windows system and apply following command. Hackazon application will get installed on the Emulator. Command to install apk file into the Emulator: adb -e install “System Path of Apk” For example: adb -e install C:\android-sdk-windows\platform-tools\hackazon.apk Configuring the Proxy at the Emulator layer There are two methods to configure the proxy. - Run and set the proxy using a command line tool avd {avd-name| example testdroid} -http-proxy {http-proxy address:port| example 192.168.56.101:8080} For example: emulator -avd Android-http-proxy 192.168.56.101:8080 - Configure in the Android Operating System Itself - On your Android device, navigate into Settings, then tap Wi-Fi - Tap and hold WiFi connection to Modify network - Enter proxy settings We are using a Burp proxy tool to capture traffic. Navigate to “Proxy” >> “Options” and setup a proxy listener. Now, launch the Hackazon application. Login into the application using default login credentials. Manually crawl the application. - Browse through the items. - Add items to cart. - Proceed to checkout. Save requests to XML format. AppSpider has feature called “Import Recorded Traffic," which allows users to import prerecorded traffic to AppSpider and enable the “Restrict scan to recorded traffic” option. This feature restricts the AppSpider to attack and finds vulnerabilities only on the HTTP traffic imported by user. Create a new scan using Appspider and insert scan name and URL of the application. Check the “Attack policy” and “Recorded Traffic” options as we are scanning mobile application to find vulnerabilities. Select a predefined Attack Policy or Create your own attack policy and load it. Now, click on the “Import Traffic” button and load the recorded traffic. AppSpider imports the prerecorded traffic and loads it. User can check the “Restrict the scan to recorded traffic”, If they do not want to scan the entire domain. In our case, we would like to perform scanning on limited recorded traffic. Hence, we have checked the Restrict Traffic option. Now, Click on the Sava and Run and AppSpider will start scanning the mobile application.
https://docs.rapid7.com/appspider/how-to-test-a-mobile-app/
2020-11-23T22:51:06
CC-MAIN-2020-50
1606141168074.3
[array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.06.24 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.15.49 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.17.00 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.19.47 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.20.12 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.20.27 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.20.55 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.21.17 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.21.36 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.21.57 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.31.16 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.32.05 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.32.28 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/appspider/images/Screen Shot 2018-06-04 at 4.32.51 PM.png', None], dtype=object) ]
docs.rapid7.com
The genbankr package is part of the Bioconductor ecosystem of R packages. As such, it’s primary splash page - including continuous integration and testing badges - can be found here. genbankr is a package which provides utilities to parse GenBank and GenPept files into data structures which integrate with the Bioconductor ecosystem. This package depends on the Bioconductor ecosystem of R packages. If you already have a version of Bioconductor (>=3.3) installed, you can do gthe following: libary(BiocManager) BiocManager::install("genbankr") If you do not currently have the Bioconductor core machinery installed, you can get the current or release version like so: if (!requireNamespace("BiocManager", quietly=TRUE)) install.packages("BiocManager") for release and if (!requireNamespace("BiocManager", quietly=TRUE)) install.packages("BiocManager") BiocManager::install(version = "devel") For devel. After doing one of the above (once), you can install genbankr as above, via library(BiocManager) BiocManager::install("genbankr") Note that Bioconductor is a sychronized development and release platform, so release and development versions of Bioconductor packages cannot be safely mixed in the same package (installation) library. Use switchr or direct .libPaths management to maintain multiple side-by-side installations if necessary. To install directly from github (this will generally not be necessary unless you intend to contribute to genbankr’s development), do if (!requireNamespace("BiocManager", quietly=TRUE)) install.packages("BiocManager") BiocManager::install(version = "devel") BiocManager::install("gmbecker/genbankr") The Bioconductor development repository will contain the latest development version of genbankr which has passed testing (lagged by about a day).
https://docs.ropensci.org/genbankr/
2020-11-23T22:45:08
CC-MAIN-2020-50
1606141168074.3
[]
docs.ropensci.org
Bookmarks Using the built-in bookmark feature of Elentra can save users time and energy. Bookmark filter settings for assessment and evaluation reporting, curriculum search filters, learning event displays, etc. to quickly access your most commonly used tools. Bookmarks display by default in Elentra but there is a database setting to control this (setting: bookmarks_display_sidebar). If you want to hide Bookmarks from the user sidebar speak to a developer about changing the setting. How to add a bookmark Navigate to the page you want to bookmark (e.g., go to Curriculum Search and set the filters you want to use) Click on Add Bookmark in the My Bookmarks box in the left sidebar Provide an appropriate Bookmark Title and click Submit Your newly bookmarked page will show up in the My Bookmarks box How to delete an existing bookmark Click the settings cog within the My Bookmarks box in the left sidebar Trashcan icons will appear to the left of each bookmarked page Click on the trashcan icon beside a specific bookmark to delete it Click Done when you are finished deleting bookmarks How to rearrange existing bookmarks Click the settings cog within the My Bookmarks box in the left sidebar Crossed arrows will appear to the right of each bookmarked page Click on and drag the crossed arrows to rearrange your list of bookmarked pages Click Done when you are finished rearranging your bookmarks How to rename existing bookmarks Click the settings cog within the My Bookmarks box in the left sidebar Click on the name of the bookmark you want to edit Make the required changes and click the checkmark to indicate when your editing is complete Click Done when you are finished editing your bookmarks Display Style Switcher People Search Last modified 4mo ago Export as PDF Copy link Contents How to add a bookmark How to delete an existing bookmark How to rearrange existing bookmarks How to rename existing bookmarks
https://docs.elentra.org/learn-elentra-me/user-tools/bookmarks
2022-05-16T18:45:02
CC-MAIN-2022-21
1652662512229.26
[]
docs.elentra.org
Coding Standards¶ Please help us with consistent coding standards for a number of reasons: - Reduce the maintenance cost - Improve the readability - Ship source code as a product, in fact, it’s an art Although there is no restricted rules to be followed in all instances, it’s still highly recommended that when you implement new code or fix existing code, use the same style that is already being used in other places of the codebase. Naming Convention¶ In general, use camel case. - Class names - Should be nouns and start with an upper case letter, like EmptyIfStatementRule. - Variable names - Should be nouns, and start with a lower case letter, like ifStmt. Names for class attributes (fields, members) are recommended with a underscore as prefix, like _carrier. - Function names - Should be verbs, start with a lower case letter, like applyLoopCompoundStmt. Don’t Use else after return¶ Having else or else if after statements that interrupt control flow, like return, break, continue, etc, is confusing. It warms the readability, and sometimes, it’s hard to understand. We have a rule implemented about this, so dogfooding can help identify them. Don’t Use Inverted Logic¶ To make code easier to understand, code like if (!foo()) { doSomething(); } else { doSomethingElse(); } Should be changed to if (foo()) { doSomethingElse(); } else { doSomething(); }
https://oclint-docs.readthedocs.io/en/stable/devel/codingstandards.html
2022-05-16T19:36:33
CC-MAIN-2022-21
1652662512229.26
[]
oclint-docs.readthedocs.io
You're viewing Apigee Edge documentation. View Apigee X documentation. On March 10,: sudo /opt/apigee/apigee-service/bin/apigee-service apigee-validate update - Import the new SmartDocs proxy from the smartdocs.zipin the /opt/apigee/apigee-validate/bundlesdirectory and deploy as a new revision. The new proxy should be imported into the organization where SmartDocs is currently configured. Deploying the proxy as a new revision will make a rollback easier, if needed. Note: Before deploying, check to ensure that the <VirtualHost>in the new proxy matches the <VirtualHost>configuration currently set in your environment. If it does not, edit the proxy before deploying. -. Supported software None. Deprecations and retirements None. Bugs fixed The following table lists the bugs fixed in this release:
https://docs.apigee.com/release/notes/4190602-edge-private-cloud-release-notes?hl=in
2022-05-16T17:59:13
CC-MAIN-2022-21
1652662512229.26
[]
docs.apigee.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Subnets in the DB subnet group should cover at least two Availability Zones unless there is only one Availability Zone. Namespace: Amazon.Neptune.Model Assembly: AWSSDK.Neptune.dll Version: 3.x.y.z The DBSubnetGroupDoesNotCoverEnoughAZsException type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Neptune/TDBSubnetGroupDoesNotCoverEnoughAZsException.html
2022-05-16T20:02:56
CC-MAIN-2022-21
1652662512229.26
[]
docs.aws.amazon.com
Updated: March 31, 2021 Welcome to Cashcow website, a website-hosted user interface (the “Interface” or “App”) provided by Cashcow Finance (“we”, “our”, or “us”). The Interface provides access to a decentralized protocol on the Huobi Eco Chain that allows suppliers and borrowers of certain digital assets to participate in autonomous interest rate markets (the “Protocol”). This Terms of Service Agreement (the “Agreement”). 1. Modification of this Agreement We reserve the right, in our sole discretion, to modify this Agreement from time to time. If we make any modifications, we will notify you by updating the date at the top of the Agreement and by maintaining a current version of the Agreement on this page. All modifications will be effective when they are posted, and your continued use of the Interface will serve as confirmation of your acceptance of those modifications. If you do not agree with any modifications to this Agreement, you must immediately stop accessing and using the Interface. 2. Eligibility Singapore, Huobi Eco Chain and is not our proprietary property. 4. Privacy We care about your privacy. Although we will comply with all valid subpoena requests, we will carefully consider each request to ensure that it comports with the spirit and letter of the law, and we will not hesitate to challenge invalid, overboard,. 5. Prohibited Activity You agree not to engage in, or attempt to engage in, any of the following categories of prohibited activity in relation to your access and use of the Interface: 1. Intellectual Property Infringement. Activity that infringes on or violates any copyright, trademark, service mark, patent, right of publicity, right of privacy, or other proprietary or intellectual property rights under the law. 2. Cyberattack. Activity that seeks to interfere with or compromise the integrity, security, or proper functioning of any computer, server, network, personal device, or other information technology system, including (but not limited to) the deployment of viruses and denial of service attacks. 3. Fraud and Misrepresentation. Activity that seeks to defraud us or any other person or entity, including (but not limited to) providing any false, inaccurate, or misleading information in order to unlawfully obtain the property of others. 4. Market Manipulation. Activity that violates any applicable law, rule, or regulation concerning the integrity of trading markets, including (but not limited to) the manipulative tactics commonly known as spoofing and wash trading. 5. Any Other Unlawful Conduct. Activity that violates any applicable law, rule, or regulation of the Singapore or another relevant jurisdiction, including (but not limited to) the restrictions and regulatory requirements imposed by Singapore law... 8. No Fiduciary Duties This Agreement is not intended to, and does out expressly in this Agreement. 9. Compliance Obligations The Interface is operated from facilities within the Singapore. The Interface may not be available or appropriate for use in other jurisdictions. the Singapore, or if your use of the Interface would be illegal or otherwise violate any applicable law. The Interface and all of its contents are solely directed to individuals, companies, and other entities located within the Singapore. 10. Assumption of Risk By accessing and using the Interface, you represent supplied to the Protocol. If you borrow digital assets from the Protocol, you will have to supply digital assets of your own as collateral. If your collateral declines in value such that it is no longer sufficient to secure the amount that you borrowed, others may interact with the Protocol to seize your collateral in a liquidation event.. this Agreement does not apply to your dealings or relationships with any third parties. You expressly relieve us of any and all liability arising from your use of any such resources or participation in any such promotions. of California Civil Code §.” amount you paid to us in exchange for access to and use of the Interface, or $100.00, whichever is greater. this Agreement may not apply to you. This limitation of liability shall apply to the fullest extent permitted by law. 15. Dispute Resolution. Frequently Asked Questions 13/6/21 - 20/6/21 Last modified 1yr ago Copy link
https://docs.cashcow.finance/terms-of-service
2022-05-16T18:44:19
CC-MAIN-2022-21
1652662512229.26
[]
docs.cashcow.finance
Jamf Now Mac Management Requirements These requirements ensure that a managed Mac fully communicates with Jamf Now: Jamf Now can only manage local accounts for Mac. Any network bound account used to enroll a Mac will be unable to inventory with Jamf Now. Jamf Now can only fully communicate with one local account per Mac. The local account used to enroll will be the account managed by Jamf Now. For any Auto-Enrolled Mac, this will most likely be the account created during the Setup Assistant. - Jamf Now cannot fully communicate with network-bound directory services like Active Directory and Open Directory. If a Mac is bound to a directory service, it will never be fully managed, and its record in Jamf Now will never update. To correct this communication issue, the Mac can be unbound from the directory service under .Note: If you want directory bound Mac management, we recommend checking out our sister product, Jamf Pro, as an alternative solution. For additional information on Jamf Pro, see the following links:
https://docs.jamf.com/jamf-now/documentation/Jamf_Now_Mac_Management_Requirements.html
2022-05-16T18:29:48
CC-MAIN-2022-21
1652662512229.26
[]
docs.jamf.com
Other CAM Software Why is my CAM software not mentioned? If they do not have one available they should be able to make one for you. Should they need assistance or have questions they can contact MASSO support and we will be happy to assist. Who makes MASSO Post Processors? Post Processors are made by the CAM Software supplier. As writers of the CAM software they have the expertise in writing Post Processors to suit their product. Should they need assistance or have questions they can contact MASSO support and we will be happy to assist. Why does MASSO not write Post Processors? Post Processors are made by the Cam software writers as they have the expertise in their own products. In the past we have approached various CAM software makers and requested Post Processors for MASSO but as we do not have their product or a support contract they are not interested. MASSO support is happy to work with CAM software makers to help with the development of Post Processors. Why are there links to some Post processors and not others? The Post Processors were provided by users of the various software or by the CAM manufacture themselves and linked to for convenience. If you have a post processor you have made and wish to share please post it in the Forums Post Processor section The person who makes the post processor is responsible for maintaining it. Contact MASSO support and a page with link will be added the new post processor.
https://docs.masso.com.au/cam-post-processors/other-cam-postprocessors
2022-05-16T18:23:42
CC-MAIN-2022-21
1652662512229.26
[]
docs.masso.com.au
Introduction to Nife Platform Nife is a developer-friendly serverless platform designed to let businesses quickly manage, deploy and scale applications globally. It runs your applications close to your users and scales compute in cities where your app is most accessed. Traditionally, applications get deployed on the cloud located a distance from the end-user. When data travels across regions and locations, it leads to computational challenges – bandwidth, cost and performance, to name a few. Cloud is built like a lego. To create a multi-region infrastructure for your applications across the limited cloud regions, you must understand each piece - network, infrastructure, capacity, and compute resources. Additionally, manage and monitor the infrastructure. This still does not improve the application performance. Nife PaaS Platform allows you to deploy all kinds of services like complete web applications, APIs, event-driven serverless functions close to the end-user without worrying about the underlying infrastructure. Nife comes with fast, continuous deployments and a built-in versioning framework to manage the applications. You can deploy standard Docker containers or plug your code directly from your git repositories to let your applications move across infrastructure worldwide. Applications can be deployed globally with multiple locations across North America, LatAm, Europe, and APAC. The Nife edge network provides an intelligent load balancer with rule-based geo-routing.
https://docs.nife.io/docs/Introduction/Nife
2022-05-16T18:41:25
CC-MAIN-2022-21
1652662512229.26
[]
docs.nife.io
Crate reed_solomon_erasuresource · [−] Expand description. Modules Macros Makes it easier to work with 2D slices, arrays, etc. Structs Reed-Solomon erasure code encoder/decoder. Bookkeeper for shard by shard encoding. Enums Traits Something which might hold a shard.
https://docs.rs/reed-solomon-erasure/latest/reed_solomon_erasure/
2022-05-16T17:51:45
CC-MAIN-2022-21
1652662512229.26
[]
docs.rs
There are several options for exporting data from your ClickUp Workspace! CSV Export A Workspace owner or admin can export Workspace task data at any time from the Import/Export page on their avatar menu. Attachments in tasks Download individual attachment files or a zip file with all attachments from any task. Export List or Table view Create a custom List view or Table view and export it with various options using this helpful feature available on our Business Plan and above. Note: exporting into XLSX format will preserve some formatting elements API If you are technically inclined, the ClickUp API is available with a standard rate limit of 100 requests/minute/token. Doc pages With Docs, you have the option to export pages in several formats! Simply click on the ellipses shown below: You can also export all the pages of a Doc simultaneously in PDF, Markdown, and HTML through the powerful Doc export feature! Exporting all Docs in bulk from a Workspace isn't available yet, but it is something that has been requested on our feedback boards! Feel free to add a vote or comment on that post. That's the best way for our team to know what changes and updates ClickUp users most want to see developed! Dashboard Widgets Within Dashboards, Workspaces on our Business Plan and above can export data from select widgets. Some provide a multitude of file format options! Widgets with export options include: Line Chart Bar Chart Pie Chart Battery Chart Bar Chart Burnup Chart Burndown Chart Cumulative Flow Velocity Chart Cycle Time Lead Time Workload By Status (All) Status Over Time Tag Usage (All) Tag Usage Over Time Tasks by Assignee (All) Priority Breakdown (All) Priority Over Time Time Reporting (CSV ONLY) Timesheet (CSV ONLY) Billable Report (CSV ONLY) Want to learn more?
https://docs.clickup.com/en/articles/1907587-data-portability-export-your-workspace-s-data
2022-05-16T19:46:01
CC-MAIN-2022-21
1652662512229.26
[]
docs.clickup.com
Subtasks in Multiple Lists Effortlessly manage your Sprints, Epics, and Goals without the clutter of unrelated tasks! Previously, only parent tasks could be added to multiple Lists. If you had a parent task with 15 subtasks and only 1 of those subtasks related to this week's Sprint, it was frustrating and distracting to see all 15 subtasks during your Sprint cycle. Now, plan and manage your workflow easier than ever by adding individual subtasks to multiple Lists! Just like Tasks in Multiple Lists, any changes made to a subtask will reflect in all the Lists the subtask has been added to, keeping your work synchronized at all times! Workspace Owners and Admins can enable Subtasks in Multiple Lists by checking the option under the Tasks in Multiple Lists ClickApp. Once enabled, head over to your subtask and either select the +button next to the breadcrumbs or select the ...menu button to add your subtask to multiple Lists! Note: Unlimited uses of Subtasks in Multiple Lists is available on our Business Plus Plan and above. Speed & Performance Improvements We're still focused on speed and releasing new optimizations every other week (if not faster) in our efforts to continue making ClickUp the fastest productivity platform on the planet! 🌎 💜 This week we... Enhanced the responsiveness of the Workspace Sidebar 🚤 Refined task view to allow faster edits of task details 👀 Accelerated the rendering of items in task activity 📈 Other Improvements Pop-Up Message when Deleting Tasks in Multiple Lists We've made it easier than ever to identify when a task has been deleted from multiple locations! 🙌 Whether you delete tasks individually or by bulk using the Multitask Toolbar, you'll now receive a message in the bottom left corner of your Workspace notifying you that you've deleted tasks from multiple Lists. 💬 The message will give you the option to Visit Trash or Undo your delete. The undo option will automatically restore the task into every List it lived in prior to being deleted. Bug Fixes In our continued effort to make ClickUp the most reliable productivity platform, we took extra focus this week on eliminating key bugs in the platform! 🚫 🦠 Here's a quick rundown on some of the top fixes: Tasks Fixed: Expanding the task description overlapped Custom Fields, subtasks, checklists and Relationships in Task view Fixed: Github icon missing from Task view Fixed: Unable to add to guests to a People Custom Field from Task view Views Fixed: Unable to add Custom Fields to Table view at the Everything level Fixed: Tasks in Lists under Shared with me were hidden in Board, Calendar, Timeline, and Table views Fixed: Unable to view or add tags in Table view at the Everything level Fixed: Guests were unable to see Custom Field columns in Table view Docs Fixed: Unable to collaboratively edit Docs created from a Doc template Fixed: Exporting Docs would infinitely load Fixed: Selecting text in Docs was delayed when the Dashlane Chrome extension was enabled Fixed: Guests did not have access to Doc views at the Space level Fixed: Unable to set standard or Custom Fields when creating a task from a Doc view Other Fixed: Unable to log in via web browser Fixed: Unable to select an assignee using the Automation trigger When Assignee changes And many more... 🐛
https://docs.clickup.com/en/articles/5635878-release-2-103
2022-05-16T17:48:10
CC-MAIN-2022-21
1652662512229.26
[]
docs.clickup.com
[ ) ] Testing Changes with DevStack¶ An important part of development is, obviously, being able to test changes you develop and ensure that they function properly. In a complicated ecosystem like OpenStack there is a need to also be able to verify the interoperability of your code. You may make a change to Cinder but that change can also impact the way that Nova interacts with Cinder’s APIs. Developers need an easy way to deploy an OpenStack cloud to do functional and interoperability testing of changes. This is the purpose of DevStack. What is DevStack¶ DevStack is a modular set of scripts that can be run to deploy a basic OpenStack cloud for use as a demo or test environment. The scripts can be run on a single node that is baremetal or a virtual machine. It can also be configured to deploy to multiple nodes. DevStack deployment takes care of tedious tasks like configuring the database and message queueing system, making it possible for developers to quickly and easily deploy an OpenStack cloud. By default, the core services for OpenStack are installed but users can configure additional services to be deployed. All services are installed from source. DevStack will pull the services from git master unless configured to clone from a stable branch (i.e. stable/pike). Getting and Configuring DevStack¶ To get DevStack you will need to clone it from the devstack repository under openstack. git clone It is recommended before proceeding further to set-up passwords and IP addresses for the environment in which you are running DevStack. This is done by configuring the local.conf file in DevStack. cd ./devstack cp ./samples/local.conf local.conf vi ./local.conf To simplify future interaction with your deployment you will want to set the following variables: ADMIN_PASSWORD DATABASE_PASSWORD RABBIT_PASSWORD On some distributions you may need to also set the HOST_IP. Whether this is necessary will depend on what naming convention is used for network interfaces in your operating system. Further down the file is the Using milestone-proposed branches section. These are the variables that can be changed if you wish to clone a branch other than master for one or more projects. Once your changes have been saved to local.conf you are ready to deploy an OpenStack cloud with DevStack. Deploying DevStack¶ Once you have your local.conf file configured executing DevStack is quite easy. Note The following command needs to be run as a user with sudo access and NOPASSWD configured. ./stack.sh Note The above command and subsequent command examples assume you are still in the root directory of your DevStack clone. At this point DevStack takes over preparing your node to function as an OpenStack cloud. The following is done: Required packages (like mysql and rabbitmq) are installed mysql and rabbitmq are configured The OpenStack projects are cloned into /opt/stack/ Any temporary backing files needed to simulate a system are created in /opt/stack/data Each project’s database is created and populated The OpenStack Services are registered and started A log of the stack.sh run is kept in /opt/stack/logs/stack.sh.log. Verifying Your DevStack Deployment¶ If OpenStack was successfully deployed by DevStack you should be able to point a web browser at the IP specified by HOST_IP in local.conf and access Horizon. Note The admin password will be set to the value you put in your local.conf file for ADMIN_PASSWORD Project services are all registered with systemd. Each service is prefixed with devstack@. Therefore you may verify through systemd that the Cinder Volume process is working with a command like: systemctl status [email protected] Since systemd accepts wildcards, the status of all services associated with DevStack can be displayed with: systemctl status devstack@* Logs for the running services are also able to be viewed through systemd. To display the logs for the Cinder Volume service the following command could be used: journalctl -u [email protected] A more complete reference of using systemd to interact with DevStack can be found on the Using Systemd in DevStack page. Testing Changes with DevStack¶ Using DevStack to develop and test changes is easy. Development can be done in the project clones under /opt/stack/<project name>. Since the projects are clones of the project’s git repository a branch can be made and development can take place. DevStack uses the code in those directories to run the OpenStack services so any change may be tested by making a code change in the project’s directory and then by restarting the project’s service through systemd. Here is an example of what that process would look like. In this example a change is made to Cinder’s LVM driver: cd /opt/stack/cinder/cinder/volume/drivers vi lvm.py *Brilliant Code Improvement Implemented* sudo systemctl restart [email protected] Once testing and development of your code change is complete you will want to push your code change to Gerrit for review. Since the projects in /opt/stack are already synced to their respective git repository you can configure git review, commit your change and upload the changes to Gerrit. Stopping DevStack¶ To shutdown a DevStack instance running on a node the following command should be used: ./unstack.sh This command cleans up the OpenStack installation that was performed on the node. This includes: Stopping the project services, mysql and rabbitmq Cleaning up iSCSI volumes Clearing temporary LVM mounts Running unstack.sh is the first thing to try in the case that a DevStack run fails. If subsequent runs fail a more thorough removal of DevStack components may be done with the following command: ./clean.sh A clean.sh run does the steps for unstack.sh plus additional cleaning: Removing configuration files for projects from /etc Removing log files Hypervisor clean-up Removal of .pyc files Database clean-up etc.
https://docs.openstack.org/contributors/code-and-documentation/devstack.html
2022-05-16T19:46:10
CC-MAIN-2022-21
1652662512229.26
[]
docs.openstack.org
The LambdaTest integration allows you to report bugs and UI anomalies directly into ClickUp. While performing cross browser testing with LambdaTest, you may capture screenshots, highlight anomalies, annotate extra information, choose bug task assignees, and provide further details in your tasks. How To Integrate ClickUp With Your LambdaTest Account? Step 1: Login to your LambdaTest account. Note: Admin or User level access is required to see and install integrations. Step 2: Select ‘Integration’ from the left navigation menu bar. Step 3: Click on the block that says ‘ClickUp’. Step 4: After you click on the ClickUp icon, you will need to authenticate LambdaTest API with your ClickUp account. If you are not logged into your ClickUp account, then you will be asked to do so for successfully authenticating you ClickUp account with LambdaTest. Note: If you are already logged into your ClickUp account, you will be redirected to the ClickUp instance for authenticating the LambdaTest APIs to fetch necessary details from your ClickUp account. Step 5: After authentication of your ClickUp account, you will be redirected back into LambdaTest application where you will notice a prompt message indicating that you have successfully integrated your LambdaTest account with your ClickUp instance. You will also notice a green tick and a refresh icon. The refresh button will help you synchronize your ClickUp account with LambdaTest in just a single click. Log Your First Bug Through LambdaTest Integration With ClickUp Step 1: Go for any of the test from the left navigation menu. For demo, we will be taking “Real Time Test” option. Step 2: Present a URL of the web-app you need to test in the dialog box. After that, select any configuration for browser and operating system of your choice & hit ‘Start‘. Step 3: After the VM is launched and operable, you may perform testing on your web-app for finding bugs. If a bug gets revealed, then you need to click on the Bug icon from the left panel for capturing a screenshot of the same. We have highlighted that option with yellow in the below image. Step 4: After a screenshot is captured, you can annotate any issue through the image editor. Once you are done highlighting the bug, click on the button that says “Mark as Bug”. Step 5: Fill in your fields and create the issue. You can select the Team you wish to assign the bug. Specify the Space on which you wish to log the bug. Choose a particular Project. Determine the List on which you wish to involve the UI bug/suggestion. Set a Status for the task. Assign it to a colleague. Provide a task name. A relevent Description about the UI observation. Note: After you click on “Create Bug”, you will be able to observe it being successfully marked through a single click effort. You will get prompt messages on top of your Virtual Machine indicating the progress of bug logging. After few seconds you will be notified with a prompt message “Bug successfully marked” indicating that the screenshot has been pushed to your ClickUp project.. Remove LambdaTest Integration With ClickUp You can work with one integration at a time. So if you would want to integrate to a similar 3rd party application, then you would have to remove your current integration. Here is how you can do that. Step 1: Login to your LambdaTest account. Step 2: Select. Happy testing! 😊
https://docs.clickup.com/en/articles/3372979-lambdatest-integration
2022-05-16T18:48:40
CC-MAIN-2022-21
1652662512229.26
[]
docs.clickup.com
Package org.hibernate.internal.log Interface UnsupportedLogger @MessageLogger(projectCode="HHH") @ValidIdRange(min=90002001, max=90003000) public interface UnsupportedLoggerClass to consolidate logging about usage of features which should never be used. Such features might have been introduced for practical reasons so that people who really know what they want can use them, with the understanding that they should find a better alternative.
https://docs.jboss.org/hibernate/orm/6.0/javadocs/org/hibernate/internal/log/UnsupportedLogger.html
2022-05-16T18:46:06
CC-MAIN-2022-21
1652662512229.26
[]
docs.jboss.org
Introduction This guide describes how to transfer hosted content to Plesk Obsidian using Plesk Migrator. It is intended for hosting administrators who perform migration to servers managed via Plesk. Supported Source Hosting Platforms Plesk Migrator supports migration from the following source platforms: - Particular versions of Plesk for Linux and Plesk for Windows: 8.6, 9.5, 10.4, 11.0, 11.5, 12.0, 12.5, 17.0, 17.5, 17.8. - cPanel 11.5 - Confixx 3.3 - Helm 3.2 - Plesk Expand 2.3.2 - Parallels Pro Control Panel for Linux 10.3.6 - DirectAdmin 1.51 (when migrating from DirectAdmin installed on Ubuntu 10.x, only custom migration is supported) What Data Are Transferred Plesk Migrator transfers service plans, subscriptions with all associated domains, and websites with content (such as files,.
https://docs.plesk.com/en-US/obsidian/migration-guide/introduction.75496/
2022-05-16T18:54:45
CC-MAIN-2022-21
1652662512229.26
[]
docs.plesk.com
The Render Log prints the progress messages, warnings and errors to the console or a file as kick renders an image. As well as showing the percentage completed, these detailed statistics can be used to optimize and debug scenes. The Arnold log provides detailed statistics that are useful for debugging, optimizing and benchmarking renders. It is the first thing to examine should you encounter errors and usually the first information to send to support. The verbosity levels vary from kick to the plugins. The log below contains the most common output. The first column of the log, 00:00:00, shows the time elapsed in hh:mm:ss. The second column, 773 MB, shows the memory usage at that stage. Initialization (Info) 00:00:00 773MB | log started Tue Jul 21 15:26:25 2015 00:00:00 773MB | Arnold 4.2.7.4 windows icc-14.0.2 oiio-1.5.15 rlm-11.2.2 2015/06/15 09:39:31 00:00:00 773MB | host application: MtoA 1.2.3.1 03a85380bec8 (Master) MtoA-1.2.3.1 Maya 2016 00:00:00 773MB | running on PC, pid=12188 00:00:00 773MB | 1 x Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz (6 cores, 12 logical) with 32712MB 00:00:00 773MB | Windows 7 Professional Service Pack 1 (version 6.1, build 7601) There is a single initialization process for the whole render session (only on the first render), with an updated process for every following render. 00:00:00 773MB | [mtoa.session] Initializing at frame 1.000000 00:00:00 773MB | [mtoa] Exporting Arnold options 'defaultArnoldRenderOptions' 00:00:00 773MB | [mtoa.extensions] aiOptions Using translator <built-in>, provided by <built-in>(<built-in>). 00:00:00 773MB | [mtoa.session] defaultArnoldRenderOptions | Exporting plug defaultArnoldRenderOptions.message for type aiOptions This section displays the results of importing any shaders or procedurals. Just rendering through kick on its own shows no plugin and the ass location. 00:00:00 773MB | loading plugins from . ... 00:00:00 773MB | no plugins loaded 00:00:00 773MB | [ass] loading project/scenes/simplescene.ass ... 00:00:00 773MB | [ass] read 11386 bytes, 11 nodes in 0:00.00 If a plugin is being loaded (and verbosity is high enough) the shaders will be listed: 00:00:00 773MB | loading plugin: C:/solidangle/mtoadeploy/2016/shaders/mtoa_shaders.dll ... 00:00:00 773MB | mtoa_shaders.dll: MayaMultiplyDivide uses Arnold 4.2.7.4 00:00:00 773MB | mtoa_shaders.dll: MayaClamp uses Arnold 4.2.7.4 00:00:00 773MB | mtoa_shaders.dll: MayaGammaCorrect uses Arnold 4.2.7.4 00:00:00 773MB | mtoa_shaders.dll: MayaCondition uses Arnold 4.2.7.4 .... 00:00:00 774MB | mtoa_shaders.dll: volume_sample_float uses Arnold 4.2.7.4 00:00:00 774MB | mtoa_shaders.dll: volume_sample_rgb uses Arnold 4.2.7.4 00:00:00 774MB | mtoa_shaders.dll: driver_mplay uses Arnold 4.2.7.4 00:00:00 774MB | loaded 90 plugins from 1 lib(s) in 0:00.00 License Check If a license is available, the log will display the following. 00:00:00 810MB | [rlm] checkout of "arnold 20150615" in progress ... 00:00:00 810MB | [rlm] checkout of "arnold 20150615" from server PC in 0:00.01 00:00:00 810MB | [rlm] expiration date: 31-dec-2015 (164 days left) However, if the RLM server is not running, it will display a warning. 00:00:00 810MB WARNING | [rlm] could not connect to license server on 5053@localhost Color Management 00:00:00 56MB | 00:00:00 63MB | [color_manager_syncolor] Using syncolor_color_manager Version 2018.0.80 00:00:00 63MB | with the native catolog directory from /path/to/synColor 00:00:00 63MB | and the optional custom catalog directory from /other/path/to/Shared/ When using the SYNCOLOR environment variable, kick will override the ass color management paths so the output would be: 00:00:00 63MB | [color_manager_syncolor] Using syncolor_color_manager Version 2018.0.80 00:00:00 63MB | from the preference file /path/to/synColorConfig.xml 00:00:00 63MB | and the optional custom catalog directory from /other/path/to/Shared/ Scene Contents Here it will list the numbers of lights and objects (and their types) in the scene. 00:00:00 818MB | there are 1 light and 2 objects: 00:00:00 818MB | 1 persp_camera 00:00:00 818MB | 1 distant_light 00:00:00 818MB | 1 utility 00:00:00 818MB | 1 lambert 00:00:00 818MB | 1 driver_exr 00:00:00 818MB | 1 gaussian_filter 00:00:00 818MB | 1 polymesh 00:00:00 818MB | 1 list_aggregate 00:00:00 818MB | 1 MayaShadingEngine 00:00:00 818MB | 1 renderview_display 00:00:00 818MB | 1 color_manager_syncolor Resolution / Samples This section displays the output resolution and details about the number of samples. 00:00:00 818MB | rendering image at 640 x 480, 3 AA samples 00:00:00 818MB | AA sample clamp <disabled> 00:00:00 818MB | diffuse samples 3 / depth 1 00:00:00 818MB | specular samples 3 / depth 2 00:00:00 818MB | transmission samples 1 / depth 2 00:00:00 818MB | volume indirect <disabled by depth> 00:00:00 818MB | total depth 10 00:00:00 818MB | bssrdf <disabled> 00:00:00 818MB | transparency depth 10 / fast opacity off Nodes Before rendering starts, nodes are initialized. The stats here give insight into the objects and lights in the scene and can show warnings if anything is configured incorrectly. Note that some data is not immediately initialized, some computations are delayed until the first ray hits the object so that further information can be shown during the render. 00:00:00 818MB | initializing 11 nodes ... 00:00:00 818MB | creating root object list ... 00:00:00 818MB | scene bounds: (-0.995465517 -1.01209259 -0.995465755) -> (1.00453472 0.98790741 1.00453484) 00:00:00 818MB | node initialization done in 0:00.00 (single-threaded) 00:00:00 818MB | updating 12 nodes ... 00:00:00 818MB | directionalLightShape1: distant_light using 1 sample 00:00:00 818MB | node update done in 0:00.00 (single-threaded) Drivers / AOVs Here are the drivers and AOVs used in the scene, and any warnings if they are set up incorrectly. 00:00:00 818MB | [aov] parsing output statement: "RGBA RGBA defaultArnoldFilter@gaussian_filter defaultArnoldDisplayDriver@renderview_display" 00:00:00 818MB | [aov] parsing output statement: "RGBA RGBA defaultArnoldFilter@gaussian_filter defaultArnoldDriver@driver_exr.RGBA" 00:00:00 818MB | [aov] registered driver: "defaultArnoldDisplayDriver@renderview_display" (renderview_display) 00:00:00 818MB | [aov] * "RGBA" of type RGBA filtered by "defaultArnoldFilter@gaussian_filter" (gaussian_filter) 00:00:00 818MB | [aov] registered driver: "defaultArnoldDriver@driver_exr.RGBA" (driver_exr) 00:00:00 818MB | [aov] * "RGBA" of type RGBA filtered by "defaultArnoldFilter@gaussian_filter" (gaussian_filter) 00:00:00 818MB | [aov] done preparing 1 AOV for 2 outputs to 2 drivers (0 deep AOVs) Progress Now that the render is starting, the log will show how many buckets are being used and at what size. If any importance tables need to be generated, in this case for the skydome, they will be done so now, showing the average energy value. Percentage completed is shown in 5% increments along with the average rays per pixel. This value will show at a glance if your sampling settings are too high. 00:00:00 825MB | starting 12 bucket workers of size 64x64 ... 00:00:00 835MB | 0% done - 9 rays/pixel 00:00:00 835MB | [accel] bvh4 done - 0:00.00 - 400 prims, 1 key 00:00:00 836MB | 5% done - 9 rays/pixel 00:00:00 836MB | 10% done - 9 rays/pixel 00:00:00 836MB | 15% done - 9 rays/pixel 00:00:00 837MB | 20% done - 9 rays/pixel 00:00:00 837MB | 25% done - 9 rays/pixel 00:00:00 837MB | 30% done - 9 rays/pixel 00:00:00 837MB | 35% done - 10 rays/pixel 00:00:00 837MB | 40% done - 10 rays/pixel 00:00:00 838MB | 45% done - 12 rays/pixel 00:00:00 838MB | 50% done - 12 rays/pixel 00:00:00 838MB | 55% done - 13 rays/pixel 00:00:00 838MB | 60% done - 13 rays/pixel 00:00:00 838MB | 65% done - 13 rays/pixel 00:00:00 838MB | 70% done - 13 rays/pixel 00:00:00 838MB | 75% done - 13 rays/pixel 00:00:00 838MB | 80% done - 13 rays/pixel 00:00:00 838MB | 85% done - 13 rays/pixel 00:00:00 838MB | 90% done - 13 rays/pixel 00:00:00 838MB | 95% done - 13 rays/pixel 00:00:00 838MB | 100% done - 13 rays/pixel 00:00:00 824MB | bucket workers done in 0:00.38 Output Once the render is complete, it will be written to file (or screen) using the selected output driver. The log will show the relative path. 00:00:00 824MB | [driver_exr] writing file `C:/Users/Documents/example.exr' 00:00:00 821MB | render done Scene Creation Stats These timings show the time taken to load plugins and .ass files. 00:00:00 821MB | scene creation time: 00:00:00 821MB | plugin loading 0:00.07 00:00:00 821MB | system/unaccounted 0:00.12 00:00:00 821MB | total 0:00.12 ( 1.05% machine utilization) Render Time Stats These timings show the time spent performing various tasks. Typically pixel rendering will take up most of the time if that's not the case then looking at the node initialization and geometry stats might reveal inefficiencies. Low machine utilization may indicate that other processes on the same machine are slowing down the render. But it can also indicate that a lot of time is spent performing single threaded tasks, for example in node initialization, procedurals or mesh processing. Another possibility is that there is a lot of (slow) file access, typically for textures or volumes. Looking at the texture stats might reveal problems. 00:00:00 821MB | render time: 00:00:00 821MB | node init 0:00.00 00:00:00 821MB | sanity checks 0:00.00 00:00:00 821MB | bucket rendering 0:00.38 00:00:00 821MB | mesh processing 0:00.00 00:00:00 821MB | accel. building 0:00.00 00:00:00 821MB | pixel rendering 0:00.38 (multi-threaded render, this value may not be reliable) 00:00:00 821MB | system/unaccounted 0:00.17 00:00:00 821MB | total 0:00.55 (69.23% machine utilization) Memory Stats Here memory used at startup and peak memory for various types of data are listed. When using a plugin, the startup memory shows how much memory the host application is using, when rendering from kick the startup memory is small which typically means larger scenes can be rendered. If memory usage is too high, these stats indicate which data would be most helpful to reduce. 00:00:00 821MB | memory consumed in MB: 00:00:00 821MB | at startup 807.25 00:00:00 821MB | plugins 2.52 00:00:00 821MB | AOV samples 11.67 00:00:00 821MB | output buffers 5.32 00:00:00 821MB | node overhead 0.00 00:00:00 821MB | message passing 0.02 00:00:00 821MB | memory pools 13.54 00:00:00 821MB | geometry 0.01 00:00:00 821MB | polymesh 0.01 00:00:00 821MB | accel. structs 0.02 00:00:00 821MB | strings 0.30 00:00:00 821MB | texture cache 0.00 00:00:00 821MB | unaccounted 1.64 00:00:00 821MB | total peak 842.29 Ray Stats These are the number of rays of each type, per pixel and per AA sample. If the render time is high, these numbers give insight into which types of samples are most expensive to render and would be most helpful trying to reduce. 00:00:00 821MB | ray counts: (/pixel , /sample) (% total) (avg. hits) (max hits) 00:00:00 821MB | camera 2318256 ( 8.71, 1.00) ( 64.57%) ( 0.11) ( 1) 00:00:00 821MB | shadow 228076 ( 0.86, 0.10) ( 6.35%) ( 0.00) ( 0) 00:00:00 821MB | diffuse 1043914 ( 3.92, 0.45) ( 29.08%) ( 0.00) ( 0) 00:00:00 821MB | total 3590246 ( 13.48, 1.55) (100.00%) ( 0.07) ( 1) 00:00:00 821MB | max depth 1 Shader Stats The number of shader calls in various contexts is shown here, which can be useful to figure out which shader calls are most helpful to speed up. - Primary: surface shading of non-shadow rays (this is traditional shading). - Transparent_shadow: surface shading of shadow rays when it goes through a transparent surface (needed to shade to know if the ray is blocked or can continue, and if it continues, has its color changed). - Autobump: each time autobump is computed, it has to compute displacement. - Background: the background shader. - Importance: calls made while computing importance tables. - Volume: volume shader calls. 00:00:00 821MB | shader calls: (/pixel , /sample) (% total) 00:00:00 821MB | primary 523006 ( 1.96, 0.23) (100.00%) 00:00:00 821MB | total 523006 ( 1.96, 0.23) (100.00%) You can use these shader stats to improve render times. If for instance, you see lots of transparent_shadow, that is usually an indication that your light samples are very high or you have lots of transparent objects. Another example would be if autobump is very high, you could try disabling autobump on meshes that don't seem to benefit from it as much. Geometry Stats These stats give insight into the amount of geometry in the scene. If memory usage is high or scene setup takes a long time, these can be used to find the objects that contribute to it most, and would benefit from being simplified or having their subdivision iterations reduced. 00:00:00 821MB | geometry: (% hit ) (instances) ( init mem, final mem) 00:00:00 821MB | lists 1 (100.0%) ( 0) ( 0.00, 0.00) 00:00:00 821MB | polymeshes 1 (100.0%) ( 0) ( 0.02, 0.01) 00:00:00 821MB | ----------------------------------------------------------------------------------------- 00:00:00 821MB | geometric elements: ( min) ( avg.) ( max) 00:00:00 821MB | objects (top level) 1 ( 1) ( 1.0) ( 1) 00:00:00 821MB | polygons 400 ( 400) ( 400.0) ( 400) 00:00:00 821MB | ----------------------------------------------------------------------------------------- 00:00:00 821MB | triangle tessellation: ( min) ( avg.) ( max) (/ element) (% total) 00:00:00 821MB | polymeshes 760 ( 760) ( 760.0) ( 760) ( 1.90) (100.00%) 00:00:00 821MB | unique triangles: 760 00:00:00 821MB | memory use (in MB) 0.01 00:00:00 821MB | vertices 0.00 00:00:00 821MB | vertex indices 0.00 00:00:00 821MB | packed normals 0.00 00:00:00 821MB | normal indices 0.00 00:00:00 821MB | uv coords 0.00 00:00:00 821MB | uv coords idxs 0.00 00:00:00 821MB | uniform indices 0.00 00:00:00 821MB | userdata 0.00 00:00:00 821MB | largest polymeshes by triangle count: 00:00:00 821MB | 760 tris -- pSphereShape1 Texture Stats Detailed statistics for image textures. These give insight into which textures use most memory, and which textures are untiled and would benefit from being converted to .tx files. The percentage of the main cache misses should be very small for good performance, if it is high render time can be significantly increased. If the main cache misses are too high, it helps to ensure only .tx files are used, reduce the number of textures, or tweak the texture settings. 00:00:00 841MB | OpenImageIO Texture statistics 00:00:00 841MB | Queries/batches : 00:00:00 841MB | texture : 261503 queries in 261503 batches 00:00:00 841MB | texture 3d : 0 queries in 0 batches 00:00:00 841MB | shadow : 0 queries in 0 batches 00:00:00 841MB | environment : 0 queries in 0 batches 00:00:00 841MB | Interpolations : 00:00:00 841MB | closest : 0 00:00:00 841MB | bilinear : 398256 00:00:00 841MB | bicubic : 701789 00:00:00 841MB | Average anisotropic probes : 2.8 00:00:00 841MB | Max anisotropy in the wild : 671 00:00:00 841MB | 00:00:00 841MB | OpenImageIO ImageCache statistics (000000003D574040) ver 1.5.15 00:00:00 841MB | Images : 1 unique 00:00:00 841MB | ImageInputs : 1 created, 1 current, 1 peak 00:00:00 841MB | Total size of all images referenced : 1.7 MB 00:00:00 841MB | Read from disk : 1.7 MB 00:00:00 841MB | File I/O time : 0.2s (0.0s average per thread) 00:00:00 841MB | File open time only : 0.0s 00:00:00 841MB | Tiles: 25 created, 24 current, 24 peak 00:00:00 841MB | total tile requests : 1454328 00:00:00 841MB | micro-cache misses : 35383 (2.43294%) 00:00:00 841MB | main cache misses : 25 (0.00171901%) 00:00:00 841MB | Peak cache memory : 2.3 MB 00:00:00 841MB | Image file statistics: 00:00:00 841MB | opens tiles MB read I/O time res File 00:00:00 841MB | 1 1 12 1.7 0.2s 768x 768x3.u8 C:/Users/Documents/example.tif UNTILED UNMIPPED MIP-COUNT [12,7,3,2,1,0,0,0,0,0] 00:00:00 841MB | 00:00:00 841MB | Tot: 1 12 1.7 0.2s 00:00:00 841MB | 1 not tiled, 1 not MIP-mapped Shutdown When Arnold is finished, it will release any resources, memory/threads, and shutdown. 00:00:00 841MB | releasing resources 00:00:00 831MB | unloading 27 plugins 00:00:00 829MB | unloading plugins done 00:00:00 828MB | Arnold shutdown
https://docs.arnoldrenderer.com/display/A5ARP/How+to+Read+a+Render+Log
2022-05-16T18:03:17
CC-MAIN-2022-21
1652662512229.26
[]
docs.arnoldrenderer.com
AWS IoT SiteWise concepts The following are the core concepts of AWS IoT SiteWise: - Gateway A gateway resides on the customer premises to collect, process, and route data. A gateway connects to industrial data sources by using OPC-UA , Modbus TCP , or Ethernet/IP protocols to collect data when processing or routing the data to the AWS cloud. Gateways use packs to collect data, process data at the edge, and more. For more information about available packs, see Using packs. You can create a gateway on any device or platform that can run AWS IoT Greengrass. The gateway software is made up of connectors that you can add to your AWS IoT Greengrass group. For more information, see Ingesting data using a gateway. - Packs Gateways use packs to decide how to collect, process, and route data. Currently, AWS IoT SiteWise supports the data collection pack and the data processing pack.. For more information about the available packs for your gateway, see Using packs. - Data collection pack Use the data collection pack so that your gateway can collect your industrial data and route it to the AWS destination of your choice. This pack is automatically added to your gateway and can't be removed. - Data processing pack Use the data processing pack so that your gateway can communicate with edge-configured asset models and assets. Gateways with the data processing pack automatically periodically sync with all asset models in your AWS account that are configured for the edge. - Data streams You can ingest industrial data to AWS IoT SiteWise before you create asset models and assets. AWS IoT SiteWise automatically creates data streams to receive streams of raw data from your equipment. - Data stream alias Data stream aliases help you easily identify a data stream. For example, the server1-windfarm/3/turbine/7/temperaturedata stream alias identifies temperature values coming from turbine #7 in wind farm #3. server1is the data source name that helps identify the OPC-UA server. server1-is a prefix added to all data streams reported from this OPC-UA server. - Data stream association After you create asset models and assets, you can associate data streams with asset properties defined in your assets to structure your data, then AWS IoT SiteWise can use asset models and assets to process incoming data from your data streams. You can also disassociate data streams from asset properties. For more information, see Managing data streams. - Asset When you ingest data into AWS IoT SiteWise from your industrial equipment, your devices, equipment, and processes are each represented as assets. Each asset has data associated with it. For example, a piece of equipment might have a serial number, a location, a make and model, and an install date. It might also have time series values for availability, performance, quality, temperature, pressure, and so on. You can organize assets into hierarchies, where assets have access to the data stored in its child assets. For more information, see Modeling industrial assets. - Asset model Every asset is created from an asset model. Asset models are declarative structures that standardize the format of your assets. Asset models enforce consistent information across multiple assets of the same type, so that you can process data in assets that represent groups of devices. In each asset model, you can define attributes, time series inputs (measurements), time series transformations (transforms), time series aggregations (metrics), and asset hierarchies. For more information, see Modeling industrial assets. You can control where your asset model's properties are processed by configuring your asset model for the edge. Use this feature to process and monitor asset data on your local devices. - Source destination You can use a source destination to control where to send the incoming data from your source server. You can either send your data to AWS IoT SiteWise, or you can use a AWS IoT Greengrass stream to send your data to a different location. You can configure your AWS IoT Greengrass stream to send your data to an on-premises application, or to the AWS Cloud. You configure a source destination for each source server in your gateway. - Asset property Asset properties are the structures within each asset that contain industrial data. Each property has a data type and can have a unit. A property can be an attribute, a measurement, a transform, or a metric. For more information, see Defining data properties. You can configure asset properties to compute at the edge. For more information about processing data at the edge, see Enabling edge data processing. - Attribute Attributes are asset properties that represent information that generally doesn't change, such as device manufacturer or device location. Attributes can have default values. Each asset that you create from an asset model contains the default values of the attributes of that model. For more information, see Defining static data (attributes). - Measurement Measurements are asset properties that represent a device or equipment's raw sensor time series data streams. For more information, see Defining data streams from equipment (measurements). - Transform Transforms are asset properties that represent transformed time series data. Every transform has a mathematical expression (formula) that defines how to transform data points from one form to another. The transformed data points hold a one-to-one relationship with the input data points. For more information, see Transforming data (transforms). - Metric Metrics are asset properties that represent aggregated time series data. Every metric has a mathematical expression (formula) that defines how to aggregate data points, and a time interval over which to compute that aggregation. Metrics output a single data point per given time interval. For more information, see Aggregating data from properties and other assets (metrics). - Aggregate Aggregates are basic metrics that AWS IoT SiteWise automatically computes for all time series data. For more information, see Querying asset property aggregates. - Asset hierarchy You can define asset hierarchies to create logical representations of your industrial operations. To create a hierarchy, you define a hierarchy definition in an asset model, and then you associate assets created from that model and the model specified in the hierarchy definition. Metrics in parent assets can aggregate data from child assets' properties, so you can calculate statistics that provide insight to your operation or a subset of your operation. For more information, see Defining relationships between asset models (hierarchies). - Formula Every transform and metric property has a formula that defines how that property transforms or aggregates data. Formulas consist of property inputs, operators, and functions offered by AWS IoT SiteWise. For more information, see Using formula expressions. - Property alias You can define aliases on asset properties to easily identify an asset property when you ingest or retrieve asset data. When you use a gateway to ingest data from servers, your property aliases must match the paths of your raw data streams. For more information, see Mapping industrial data streams to asset properties. - Property notification When you enable property notifications for an asset property, AWS IoT SiteWise publishes an MQTT message to AWS IoT Core each time that property receives a new value. The message payload contains information about that property value update. You can use property value notifications to create solutions that connect your industrial data in AWS IoT SiteWise with other AWS services. For more information, see Interacting with other AWS services. - Portal An SiteWise Monitor portal is a web application that you can use to visualize and share your AWS IoT SiteWise data. A portal has one or more administrators and contains zero or more projects. -.
https://docs.aws.amazon.com/iot-sitewise/latest/userguide/concept-overview.html
2022-05-16T19:17:50
CC-MAIN-2022-21
1652662512229.26
[]
docs.aws.amazon.com
Space templates are a powerful way to give your teams, departments, and large projects a head start! Spaces can include Folders, Lists, tasks, views, Automations, and more! Capture all that into a template so your team can start working together on projects faster. We have a few incredibly powerful Space templates available from our Community as well as those you create for your Workspace. Using Agile project management? Be sure to give our Agile Management Space templates a try! 🚀 Find and Use a Space Template Click + New Spaceat the top of your Spaces section in your sidebar Select the Templatestab from the Create new Space window Or click on the Space Settings ellipsis for an existing Space you want to apply a template to Template Centerand then Browse Templates Browse for a Space template from the Template Center Preview the details of the Space and then select Use Template Select your Space name, Import options, and then select Use Template Your Space will use the selected Template! 🎉 Create a Space Template Create a Space, and add all the Folders, Lists, tasks, views, and Automation you want to use in a Template Click the ellipsis Template Center Save as Template Enter a template name, select your sharing preferences, and import options Click Save Update a Space Template Create a new Space using the template you want to update Update the Folders, Lists, tasks, views, and Automations in your Space Click the ellipsis Template Center Update Existing Template Search and choose the Space template to update Click Optional: rename and update the sharing preferences for your template Select your Import options Click Save One too many templates flying around? Check out the next step to learn how to delete a Space template. Delete a Space Template Need to tidy up your Space templates? Consider deleting ones you no longer need. Be careful! Once you delete a template, it's gone forever and cannot be recovered. Open the Template center Select the template you wish to delete In the template preview, select Delete Template Want to learn more? Check out the other templates you can create and use in ClickUp Need a more granular templates? Take a look at Folder templates Create project task lists even faster with List templates
https://docs.clickup.com/en/articles/5547507-space-templates
2022-05-16T19:33:03
CC-MAIN-2022-21
1652662512229.26
[]
docs.clickup.com
Configuring Certificates for the Jamf SCCM Plug-in 3.40 or Later Using a Standalone Certificate Authority This article explains how to create and configure certificates using a standalone certificate authority (CA) for the Jamf SCCM plug-in 3.40 or later. There are one or more certificates that must be configured before you install the Jamf SCCM Proxy plug-in 3.40 or later. The following table shows the required certificates and the servers on which they must exist: Creating an ISV proxy certificate using a standalone CA involves the following steps: Downloading and modifying the .inf file for creating the certificate signing request (CSR) Creating the CSR Submitting the CSR to the CA to create a certificate Exporting the CA certificate chain Importing the CA certificate chain and the ISV certificate Creating an ISV proxy certificate from the installed certificate Copying the ISV proxy certificate to the SCCM server Registering the ISV proxy certificate with SCCM General Requirements Configuring certificates using a standalone CA for the Jamf SCCM plug-in involves creating an ISV proxy certificate. To do this, you need: A standalone CA that is not integrated with your SCCM environment A PKI certificate with a SHA-2 signature algorithm A Windows computer with the Certification Authority snap-in Console access to the SCCM server Administrative rights to the SCCM Console Step 1: Downloading and Modifying the .inf File for Creating the CSR - Download the .inf file. You can download the .inf file from: - In the .inf file, modify the following variables to include the settings you want to use.Note: The variables are indicated by double square brackets ([[ ]]). - Subject—Modify this variable to include the fully qualified domain name (FQDN) of the Jamf SCCM Proxy Service host computer. - Friendly Name—Modify this variable as follows: - For 3.51 or earlier—"JSS SCCM Proxy Certificate" - For 3.60.0 or later—"Jamf SCCM Proxy Certificate" - Provider Name—Modify this variable to include the name of the Cryptographic Service Provider (CSP) that you want to use. For a list of all CSPs, execute the following command: certutil -csplist - Provider Type—Modify this variable to include the CSP type you want to use. For a list of all CSPs with supported hash algorithms, execute the following command: certutil -csplist -v | more Step 2: Creating the CSR - Copy the .inf file to the computer on which you plan to install the Jamf SCCM Proxy Service. - Create the CSR by executing the following command: certreq -new Standalone-CA-ISV-Request.inf Standalone-CA-ISV-Request.req Step 3: Submitting the CSR to the CA to Create a Certificate The steps needed to create a certificate vary depending on your environment. For example, if you are using a standalone Microsoft CA, use steps similar to the following: - Copy the Self-Signed-ISV-Request.req file to the CA server. - Execute the following command to create the ISV certificate, selecting the CA to sign it if prompted: certreq -submit Standalone-CA-ISV-Request.req isv.cer - Copy the isv.cer file to the computer you used to create the CSR and on which you plan to install the Jamf SCCM Proxy Service. Step 4: Exporting the CA Certificate Chain - Export the CA certificate used to sign the isv.cer by executing the following command: certutil -ca.chain ca.cer - Copy the ca.cer file to the computer on which you plan to install the Jamf SCCM Proxy Service. Step 5: Importing the CA Certificate Chain and the ISV Certificate - On the computer on which you plan to install the Jamf SCCM Proxy Service, open Microsoft Management Console (MMC). - From the menu bar, choose . - Select Certificates in the list of snap-ins and click the Add button. - Select the Computer account option and click Next. - Select the Local computer (the computer this console is running on) option. - Click Finish and click OK. The certificate is displayed below the Console Root folder in the sidebar. - Expand the Certificates (Local Computer) heading. - Expand the Trusted Root Certification Authorities heading. - Right-click the Certificates folder, select , and then select the ca.cer file. - Expand the Personal heading. - Right-click the Certificates folder, select , and then select the isv.cer file. Step 6: Creating an ISV Proxy Certificate from the Installed Certificate - Right-click the newly imported certificate (identified by the friendly name) and select . - Follow the onscreen instructions to export the certificate as a DER-encoded .cer file. Step 7: Copying the ISV Proxy Certificate to the SCCM Server If you created the ISV proxy certificate on a server other than the SCCM server, copy the ISV proxy certificate (.cer) to the SCCM server. You can skip this step if you created the ISV proxy certificate on the SCCM server. Step 8: Registering the ISV Proxy Certificate with SCCM - On the SCCM server, open SCCM and click the Administration category in the sidebar. - Expand the Security folder. - Click the Certificates heading and then click the Register or Renew ISV Proxy button. - In the Register or Renew ISV Proxy dialog, select the Register certificate for a new ISV proxy option and browse for the ISV proxy certificate (.cer). - Click OK to close the Register or Renew ISV Proxy dialog. - Take note of the certificate GUID for the ISV proxy certificate. You will need to enter this when you install the Jamf SCCM Proxy Service.Note: If the Certificate GUID column is not displayed, right-click the column header and select Certificate GUID. Additional Information For additional certificate configuration methods, see the Configuring the Certificates for the Jamf SCCM Plug-in 3.40 or Later article. For more information on the SCCM plug-in, see the SCCM Plug-in User Guide, available at:
https://docs.jamf.com/technical-articles/Configuring_Certificates_for_the_Jamf_SCCM_Plug-in_3-40_or_Later_Using_a_Standalone_Certificate_Authority.html
2022-05-16T18:10:22
CC-MAIN-2022-21
1652662512229.26
[]
docs.jamf.com
Limitations and restrictions Security improvements have been made in this release, however some limitations still exist. Disable spanning tree processing on individual ports Since this feature turns off spanning tree on a particular link, it is possible for bridge loops to form, resulting in broadcast storms, a good understanding of the remote network is required before enabling this feature. Multiple backplane support The ability to configure the backplane that traffic uses is only available on Flexware uCPE platforms and the ability to configure the CPUs that the dataplane will use is only available on backplane interfaces. Port monitor hardware support This section details how additional load can effect port monitor sessions. Backplane bandwidth and expected traffic patterns on the Flexware uCPE platform should be taken into account when port monitor sessions are configured. For XS uCPE, the backplane bandwidth is 2.5 G, and all eight front panel ports are 1 G. For S, M, and L uCPEs, the backplane interconnect is 10 G and the number of front panel ports ranges from 14 in S and M, to 40 in L, some of which support 10 G SFPs. If the destination port is a vhost interface, mirrored packets for hardware-switched traffic will be punted to the CPU, causing additional load on the backplane interface, and should be taken into account when configuring port monitor sessions on the uCPE. 64-bit wide QoS counters This feature can only be used after at least one QoS policy has been associated with an interface, the existing show queuing commands will be deprecated in a future release and superseded by show policy qos. Support marking Ethernet COS bits (0-7) on outgoing packets This feature should improve future flexibility and possible support multiple mark-ups. This feature only supports a single mark-map on the UfiSpace S9500-30XS hardware, however the QoS configuration commands will allow multiple mark-maps to be defined. This is to allow for future flexibility and the possibility of supporting multiple mark-maps. This feature is a replacement for QoS's use of the NPF match <match-name> dscp <dscp-value> and and match <match-name> mark pcp <pcp-value> commands that can be used as packet classification rules by QoS. The QoS class command and the NPF match commands will not be available on the SIAD platform, and the new commands introduced by this feature will only be available on the SIAD platform, so there is no possibility of these two sets of commands interfering with one another. Shared storage and file access This feature provides users with access to shared storage and files. This feature works best with the user isolation feature, however shared storage may be configured on systems with user isolation disabled. Both of these configurations can open up sensitive information about the system and should be used sparingly. The writable shared directories should be cleaned up regularly. Deprecation of TACACS+ local-user-name authorization argument This feature allows TACACS+ to login as an already configured local user, however this capability will be removed in a future release. The local-user-name authorization argument allows TACACS+ to login as an already configured local user. Alternatively, Vyatta also supports on-the-fly creation of a local user during the login process for TACACS+ users. This is done when local-user-name is not present in the session authorization reply. Support for this feature will be removed in a future release at which time presence of the local-user-name argument in authorization replies will cause an authorization failure REST API spawn commands This section details the REST API spawn commands. When executed via the REST API, the spawn operational mode command is not run in the calling user's isolated environment. Therefore, the default ACM operational mode ruleset in 1903 is updated to prevent operator and admin level users from executing spawn. # set system acm operational-ruleset rule 9971 action deny # set system acm operational-ruleset rule 9971 command '/spawn/*' # set system acm operational-ruleset rule 9971 group vyattaop # set system acm operational-ruleset rule 9971 group vyattaadm cmd=spawn protocol=op-mode service=vyatta-exec
https://docs.vyatta.com/en/release-notes/earlier-releases/earlier-releases/vnos-release-notes-1903/limitations-and-restrictions
2022-05-16T19:44:03
CC-MAIN-2022-21
1652662512229.26
[]
docs.vyatta.com
Removing an Application from Jamf Now When you want to remove an application from your Jamf Now account, you have two options: - Hide the app from view, but leave it connected to your account. The hidden app will appear grayed out. - Completely remove the app from your account. An app does not need to be volume purchasing enabled if you want to hide it from your main Apps page in Jamf Now. If you want to remove an app that is volume purchase enabled, you will need to contact Apple Support. To remove an app that is not volume purchase enabled from Jamf Now, follow the steps below. - Log in to Jamf Now. - Click Apps. - Click the Action pop-up menu (•••) to the right of the app you want to hide or remove. - Select or .
https://docs.jamf.com/jamf-now/documentation/Removing_an_Application.html
2022-05-16T18:18:29
CC-MAIN-2022-21
1652662512229.26
[]
docs.jamf.com
Hello! While studying the API, I ran into the problem of understanding what the upper and lower words are. Below is an example of a window procedure. Two buttons are created in the WM_CREATE message (both handles are declared in a header file not shown). Further, in the WM_COMMAND message I catch the button presses. I looked at this example on one forum, but I just can't figure out at what stage the button descriptor is assigned to the lParam parameter and what happens when the left mouse button is pressed. When I turn on the debug mode, when the mouse button is pressed, the lParam parameter has already been assigned a handle to one of the buttons on the working surface. If I click on the Exit menu button (see the switch (LOWORD (wParam)) part of the code), it turns out that the wParam parameter has already been assigned the ID_FILE_EXIT number in the resources, defined through "define". When I looked at what HIWORD and LOWORD are practically everywhere "magic words" pop up :) that these are the upper and lower words. That is, if there is a number 0x3256abcd, then HIWORD is 3256, and LOWORD is abcd, but this does not explain how to catch certain responses of ready-made functions and macros from the Win32 API. Is there a general rule of what kind of code construction needs to be written to catch the pressing of a certain button, selecting text from a multiple list, changing the style of a drop-down menu, etc. LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch (msg) { case WM_CREATE: { ... fbHwnd = CreateWindow(L"BUTTON", L"Next", WS_TABSTOP | WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, 150, 200, 70, 50, hwnd, NULL, (HINSTANCE)GetWindowLongPtr(hwnd, GWLP_HINSTANCE), NULL); sbHwnd = CreateWindow(L"BUTTON", L"Exit", WS_TABSTOP | WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, 222, 272, 70, 50, hwnd, NULL,(HINSTANCE)GetWindowLongPtr(hwnd, GWLP_HINSTANCE), NULL); ... } break; case WM_COMMAND: { if (lParam == (int)fbHwnd) { if (HIWORD(wParam) == BN_CLICKED) { MessageBox(hwnd, L"First button", L"Warning", MB_OK); } } else if (lParam == (int)sbHwnd) { if (HIWORD(wParam) == BN_CLICKED) { MessageBox(hwnd, L"Second button", L"Warning", MB_OK); } } break; switch (LOWORD(wParam)) { ... case ID_FILE_EXIT: { PostMessageW(hwnd, WM_CLOSE, 0, 0); } break; ... } } break; } }
https://docs.microsoft.com/en-us/answers/questions/426782/using-hiword-and-loword-in-win32-api-how-does-the.html
2022-05-16T20:07:33
CC-MAIN-2022-21
1652662512229.26
[]
docs.microsoft.com
Viewing Storage VMs based on protection status You can use the Storage VMs page of the Inventory to view all the storage VMs in Active IQ Unified Manager and filter the storage VMs based on their protection status. What you’ll need You must have the Application Administrator or Storage Administrator role. A new column Protection Role is added to the storage VMs view that provides information on whether the storage VM is protected or unprotected. In the left navigation pane, click STORAGE > Storage VMs. From the VIEW menu, select Health > All Storage VMs. The Health: All Storage VMs is displayed. Click Filter to view one of the following storage VMs. Click Apply Filter. The Unsaved view displays all the storage VMs that are either protected or unprotected by storage VM disaster recovery based on your filter selections.
https://docs.netapp.com/us-en/active-iq-unified-manager-910/data-protection/task_view_storage_vms_based_on_protection_status.html
2022-05-16T19:50:59
CC-MAIN-2022-21
1652662512229.26
[]
docs.netapp.com
statistics lun show Contributors LUN throughput and latency metrics Availability: This command is available to cluster and Vserver administrators at the admin privilege level. Description This command continuously displays performance data for LUNs at a regular interval. The command output displays data in the following columns: Lun - LUN name. Vserver - vserver [-lun <text>]- Lun Selects the LUN for which you want to display performance data. [-vserver <vserver name>]- Vserver Selects the vserver LUNs to display. The default setting is 25. Examples The following example displays LUN statistics: cluster1::> statistics lun show cluster1 : 12/31/2013 16:00:04 *Total Read Write Other Read Write Latency Lun Vserver Ops Ops Ops Ops (Bps) (Bps) (us) ---- ------- ------ ---- ----- ----- ------ ----- ------- lun1 vs1 58 13 15 29 310585 3014 39 lun0 vs2 56 0 11 45 8192 28826 47 [...]
https://docs.netapp.com/us-en/ontap-cli-9111/statistics-lun-show.html
2022-05-16T19:57:41
CC-MAIN-2022-21
1652662512229.26
[]
docs.netapp.com
Use the following table to choose the correct execution option for a UDF. IF … THEN use … you are in the UDF development phase and are debugging a function EXECUTE PROTECTED. the function opens a file or uses another operating system resource that requires tracking by the operating system EXECUTE PROTECTED.Running such a function in nonprotected mode could interfere with proper Vantage operation. the function is a computational function that does not use any operating system resources EXECUTE NOT PROTECTED.Running a UDF in nonprotected mode speeds up the processing of the function considerably. Use this option only after thoroughly debugging the function and making sure it produces the correct output.
https://docs.teradata.com/r/Teradata-VantageTM-SQL-External-Routine-Programming/July-2021/C/C-User-Defined-Functions/Protected-Mode-Function-Execution/Choosing-the-Correct-Execution-Option
2022-05-16T18:02:28
CC-MAIN-2022-21
1652662512229.26
[]
docs.teradata.com
References and Useful Resources¶ Most of the guidelines here were gathered from the following list sources. The list contains a variety of useful resources for programming in C++ beyond what is presented in these guidelines. - The Chromium Projects: C++ Dos and Don’ts. - Dewhurst, S., C++ Gotchas: Avoiding Common Problems in Coding and Design, Addison-Wesley, 2003. - Dewhurst S., C++ Common Knowledge: Essential Intermediate Programming, Addison-Wesley, 2005. - Doxygen manual, - Google C++ Style Guide, - ISO/IEC 14882:2011 C++ Programming Language Standard. - Josuttis, N., The C++ Standard Library: A Tutorial and Reference, Second Edition, Addison-Wesley, 2012. - LLVM Coding Standards, llvm.org/docs/CodingStandards.html - Meyers, S., More Effective C++: 35 New Ways to Improve Your Programs and Designs, Addison-Wesley, 1996. - Meyers, S., Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library, Addison-Wesley, 2001. - Meyers, S., Effective C++: 55 Specific Ways to Improve Your Programs and Designs (3rd Edition), Addison-Wesley, 2005. - Meyers, S., Effective Modern C++: 42 Specific Ways to Improve Your Use of C++11 and C++14, O’Reilly. - Programming Research Ltd., High-integrity C++ Coding Standard, Version 4.0, 2013. - Sutter, H. and A. Alexandrescu, C++ Coding Standards: 101 Rules, Guidelines, and Best Practices, Addison-Wesley, 2005.
https://axom.readthedocs.io/en/latest/docs/sphinx/coding_guide/references.html
2021-11-27T15:17:08
CC-MAIN-2021-49
1637964358189.36
[]
axom.readthedocs.io
Date: Mon, 9 Nov 1998 19:06:55 -0500 From: [email protected] To: <[email protected]> Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help i recently downloaded your freebsd shell from your ftp site. Since i could not seem to configure the install program to download it directly, i downloaded it to my Hard drive using windows, and then installed using the "from an msdod partition" option. The process seemed successful, but now when i try to boot to the freebsd shell, i get an error message saying: cant find kernel. what can i do to fix this?? please keep in mind that i have never used a unix shell before. Thank you Nick Delong To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=908793+0+/usr/local/www/mailindex/archive/1998/freebsd-questions/19981108.freebsd-questions
2021-11-27T13:52:04
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
This option is used to trigger a Salesforce process upon completion of lead routing. The salesforce process builder is used to send an email, create a task, or automate further salesforce processes. The system will route and update the salesforce with API name and trigger the process builder using the API. Enter the API name in the Provide process builder API name field.
https://docs.leadangel.com/knowledge-base/trigger-process-builder-to-send-email-and-more/
2021-11-27T14:21:47
CC-MAIN-2021-49
1637964358189.36
[array(['https://docs.leadangel.com/wp-content/uploads/2021/07/image-4-1024x537.png', None], dtype=object) ]
docs.leadangel.com
Byte Nibble: Joe's Random Musings Beware of the Byte Nibblin' Mice! Parallel Programming at Microsoft for .NET As I have made clear in my some of my previous posts I’m very interested in concurrent and... Author: Joe Hegarty1 Date: 11/05/2009 Network Architecture and Integration in Games The Background Babble When I started this blog I promised myself I would post once a week, I have... Author: Joe Hegarty1 Date: 10/13/2009 My very first post! So, my very first post on my shiny new blog!I should probably say a little about me otherwise nobody... Author: Joe Hegarty1 Date: 09/21/2009
https://docs.microsoft.com/en-us/archive/blogs/joehegarty/
2021-11-27T14:02:58
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
A ruleset for an Teradata Aster system is a collection of related throttles and workload rules. You can create multiple rulesets, but only one ruleset is active on the Teradata Aster system at a time. After creating a ruleset, you can use the toolbar buttons to specify settings, such as workloads. New rulesets are automatically locked so only the owner can edit the ruleset. - From the Workload Designer view, select a system from the list. - Click to the right of Local. - Enter a name. - [Optional] Enter a description. - Click Save. - [Optional] Click Throttles and create a throttle. - [Optional] Click Workloads and create a workload. - Click to return to the Workload Designer view.The new ruleset appears in the Working section.
https://docs.teradata.com/r/GsdbDDAzsO9o7v49kofuFw/UOiozOuXiztBmEjJdk2MBQ
2021-11-27T14:51:14
CC-MAIN-2021-49
1637964358189.36
[]
docs.teradata.com
Date: Thu, 11 Feb 1999 08:54:22 -0700 From: "Michael Sorenson" <[email protected]> To: <[email protected]> Subject: FreeBSD NIC Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help Recently my company purchased your freebsd software to put on an old Compaq server of ours that we, with the help of our ISP are configuring for an internet server. In the process we are trying to locate a Network Interface Card that is supported by freebsd. The server is completely running EISA and so our choices have been limited. On your list there is a card from 3com listed, 3C595 Etherlink III , but when I look for it at 3com's www it comes up with a list of compatible cards and no reference to this particular card. What I am asking is, on the compatibility list there is the 3c592 and 3c597 card that are EISA, are they compatible with freebsd? If so can I use multiple cards in the same server? Thanks, Michael Sorenson To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1691021+0+/usr/local/www/mailindex/archive/1999/freebsd-questions/19990207.freebsd-questions
2021-11-27T15:04:54
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
TeamForge uses AngularJS to power some of its pages. AngularJS (a.k.a first version of Google’s Angular framework). By using AngularJS Customization you can customize angularjs pages in TeamForge UI. Do note that this also supports the usual CSS and image/logo customizations as well. For example, you might want to - Change default TeamForge brand logo and replace it with your organization’s logo - You can add a button in a form and wire up a click event handler to it. AngularJS Customization framework works similar to custom event handler mechanism. However, unlike custom event handlers, UI customization does not need the event.xml or any java code. Click here for more information about Custom Event Handlers. In a nutshell: - Package all your UI/AngularJS code as a .jar file. - In the jar file, it is recommended that - all CSS files are placed in /css folder - all bundle/images files are placed in /bundle/images folder - all JavScript and AngularJS files are placed in /js folder - Make sure that the ENABLE_UI_FOR_CUSTOM_EVENT_HANDLERS token is set to true. - Go to My Workspace > Admin. - Click SYSTEM TOOLS from the Projects menu. - Click Customizations. - Click Create and click Browse to locate your .jarfile. - Click Add.Note: Debug your event handler if you see the Error Parsing Event Jar Fileerror. Up on successful upload of the .jar file, the event cache is cleared. All the events you specified in your event handler are now captured and sent to the external web service.
https://docs.collab.net/teamforge210/addangularjscustomization.html
2021-11-27T14:28:21
CC-MAIN-2021-49
1637964358189.36
[]
docs.collab.net
Low-Level functions¶ Low-level module consists of callback-only functions, which are called by middleware and must be implemented by final application. Tip Check Porting guide for actual implementation - group LWGSM_LL Low-level communication functions. Typedefs - typedef size_t (*lwgsm_ll_send_fn)(const void *data, size_t len)¶ Function prototype for AT output data. - Param data [in] Pointer to data to send. This parameter can be set to NULL - Param len [in] Number of bytes to send. This parameter can be set to 0to indicate that internal buffer can be flushed to stream. This is implementation defined and feature might be ignored - Return Number of bytes sent Functions - lwgsmr_t lwgsm_ll_init(lwgsm GSM stack when using OS. When LWGSM_CFG_INPUT_USE_PROCESS is set to 1, this function may be called from user UART thread. - Parameters ll – [inout] Pointer to lwgsm_ll_t structure to fill data for communication functions - Returns lwgsmOK on success, member of lwgsmr_t enumeration otherwise - lwgsmr_t lwgsm_ll_deinit(lwgsm_ll_t *ll)¶ Callback function to de-init low-level communication part. - Parameters ll – [inout] Pointer to lwgsm_ll_t structure to fill data for communication functions - Returns lwgsmOK on success, member of lwgsmr_t enumeration otherwise - struct lwgsm_ll_t¶ - #include <lwgsm_typedefs.h> Low level user specific functions. Public Members - lwgsm_ll_send_fn send_fn¶ Callback function to transmit data - lwgsm_ll_reset_fn reset_fn¶ Reset callback function - struct lwgsm_ll_t::[anonymous] uart¶ UART communication parameters
https://docs.majerle.eu/projects/lwgsm/en/latest/api-reference/port/ll.html
2021-11-27T15:11:48
CC-MAIN-2021-49
1637964358189.36
[]
docs.majerle.eu
lightkurve.LightCurve.to_fits¶ - LightCurve.to_fits(path=None, overwrite=False, flux_column_name='FLUX', **extra_data)[source]¶ Converts the light curve to a FITS file in the Kepler/TESS file format. The FITS file will be returned as a HDUListobject. If a pathis specified then the file will also be written to disk. - Parameters - pathstr or None Location where the FITS file will be written, which is optional. - overwritebool Whether or not to overwrite the file, if pathis set. - flux_column_namestr The column name in the FITS file where the light curve flux data should be stored. Typical values are FLUXor SAP_FLUX. - extra_datadict Extra keywords or columns to include in the FITS file. Arguments of type str, int, float, or bool will be stored as keywords in the primary header. Arguments of type np.array or list will be stored as columns in the first extension. - Returns -
https://docs.lightkurve.org/reference/api/lightkurve.LightCurve.to_fits.html
2021-11-27T15:12:59
CC-MAIN-2021-49
1637964358189.36
[]
docs.lightkurve.org
rust-hkdf HMAC-based Extract-and-Expand Key Derivation Function (HKDF) for Rust. Uses the Digest trait which specifies an interface common to digest functions, such as SHA-1, SHA-256, etc. Installation From crates.io: [dependencies] hkdf = "0.7" Usage See the example examples/main.rs or run it with cargo run --example main Changelog - 0.7.0 - Update digest to 0.8, refactor for API changes, remove redundant generic-arraycrate. - 0.6.0 - remove std requirement. The expandsignature has changed. - 0.5.0 - removed deprecated interface, fixed omitting HKDF salt. - 0.4.0 - RFC-inspired interface, Reduce heap allocation, remove unnecessary mut, derive Clone. deps: hex-0.3, benchmarks. - 0.3.0 - update dependencies: digest-0.7, hmac-0.5 - 0.2.0 - support for rustc 1.20.0 - 0.1.1 - fixes to support rustc 1.5.0 - 0.1.0 - initial release
https://docs.rs/crate/hkdf/0.7.1
2021-11-27T15:38:18
CC-MAIN-2021-49
1637964358189.36
[array(['https://docs.rs/hkdf/badge.svg', 'Documentation'], dtype=object)]
docs.rs
: This page has no comments.
https://docs.trifacta.com/display/r050/EXAMPLE+-+DATEDIF+Function
2021-11-27T15:29:13
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
ksconf.builder package¶ ksconf.builder.cache module¶ - class ksconf.builder.cache. CachedRun(root)¶ Bases: object - class ksconf.builder.cache. FileSet¶ Bases: object A collection of fingerprinted files. Currently the fingerprint is only a SHA256 hash. Two constructore are provided for building an instance from either file that live on the filesystem, via from_filesystem()or from a persisted cached record aviable from the from_cache(). The filesystem version actively reads all inputs files at object creation time, so this can be costly, especially if repeated. ksconf.builder.core module¶ Cache build requirements: - Caching mechanism should inspet ‘inputs’ (collect file hashes) to determine if any content has changed. If input varies, then command should be re-run. - Command (decorated function) should be generally unaware of all other details of build process, and it should ONLY be able to see files listed in “inputs” - Allow caching to be fully disabled (run in-place with no dir proxying) for CI/CD - Cache should have allow a timeout parameter - decorator used to implement caching: - decorator args: - inputs: list or glob - outputs (do we need this, can we just detect this??) - Default to “.” (everything) - timeout=0 Seconds before cache should be considered stale - name=None If not given, default to the short name of the function. - (Cache “slot”), must be filesystem safe] - class ksconf.builder.core. BuildManager¶ Bases: object Management of individual build steps cache(inputs, outputs, timeout=None, name=None, cache_invalidation=None)¶ function decorator for caching build steps Wrapped function must accept BuildStep instance as first parameters XXX: Clearly document what things are good cache candidates and which are not. Example: - No extra argument to the function (at least currently) - Changes to inputs files are not supported - Deleting files aren’t supported - Can only operate in a single directory given a limited set of inputs - Cannot read from the source directory, and agrees not to write to dist (In other words, limit all activities to build_path for deterministic behavior) ksconf.builder.steps module¶ ksconf.builder.steps: Collection of reusable build steps for reuse in your build script. ksconf.builder.steps. copy_files(step, patterns)¶ Copy source files into the build folder that match given glob patterns Module contents¶ - class ksconf.builder. BuildStep(build, source=None, dist=None, output=<_io.TextIOWrapper)¶ Bases: object run(executable, *args, **kw_only)¶ Execute an OS-level command regarding the build process. The process will run withing the working directory of the build folder. ksconf.builder. default_cli(build_manager, build_funct, argparse_parents=())¶ This is the function you stick in the: if __name__ == '__main__'section of your code :-) Pass in a BuildManager instance, and a callback function. The callback function must accept (steps, args). If you have need for custom arguments, you can add them to your own ArgumentParser instance and pass them to the argparse_parents keyword argument, and then handle additional ‘args’ passed into the callback function.
https://ksconf.readthedocs.io/en/v0.8.1/api/ksconf.builder.html
2021-11-27T14:47:03
CC-MAIN-2021-49
1637964358189.36
[]
ksconf.readthedocs.io
You can create a new volume and associate the volume with a given account (every volume must be associated with an account). This association gives the account access to the volume through the iSCSI initiators using the CHAP credentials. You can also specify QoS settings for a volume during creation.
http://docs.netapp.com/hci/topic/com.netapp.doc.hci-vcp-ug-170/GUID-5FB5CCC0-A26A-41A7-88AA-0F1C5E4E0F86.html
2021-11-27T15:00:52
CC-MAIN-2021-49
1637964358189.36
[]
docs.netapp.com
Versioning Guidelines - Some definitions - Epoch: tag - Simple versioning - Complex versioning - Traditional versioning with part of the upstream version information in the release field - You need to change an old branch without rebuilding the others -. Some. Epoch: tag The Epoch: tag provides the most significant input to RPM’s version comparison function. If present, it must consist of a positive integer. It should only be introduced or incremented when necessary to avoid ordering issues. The Epoch: tag, once introduced to a package, must never be removed or decreased. Simple versioning Most upstream versioning schemes are "simple"; they generate versions like 1.2.03.007p1. They consist of one or more version components, separated by periods. Each component is a whole number, potentially with leading zeroes. The components does not sort properly. You need to apply a small fix to a release branch of Fedora without updating the newer branches. More than one of the above may apply (lucky you). Follow all of the relevant recommendations below together. Handling non-sorting versions with tilde, dot, and caret The tilde symbol (‘~’) is used before a version component which must sort earlier than any non-tilde component. It is used for any pre-release versions which wouldn’t otherwise sort appropriately. For example, with upstream releases 0.4.0, 0.4.1, 0.5.0-rc1, 0.5.0-rc2, 0.5.0, the two "release candidates" should use 0.5.0~rc1 and 0.5.0~rc2 in the Version: field. Bugfix or "patchlevel" releases that some upstream make should be handled using simple versioning. The separator used by upstream may need to be replaced by a dot or dropped. For example, if the same upstream released 0.5.0-post1 as a bugfix version, this "post-release" should use 0.5.0.post1 in the Version: field. Note that 0.5.0.post1 sorts lower than both 0.5.1 and 0.5.0.1. The caret symbol (‘^’) is used before a version component which must sort later than any non-caret component. It is used for post-release snapshots, see next section. Snapshots Snapshots (a version taken from the upstream source control system not associated with a release), must contain a snapshot information field after a caret ( ^). The first part of the field ensures proper sorting. That field may either the date in eight-digit "YYYYMMDD" format, which specifies the last modification of the source code, or a number. The packager may include up to 17 characters of additional information after the date, specifying the version control system and commit identifier. The snapshot information field is appended to version field described above, possibly including the pre-release and patchlevel information. One of the following formats should be used for the snapshot information field: <date>.<revision> <date><scm><revision> <number>.<revision> <number>.<scm><revision> Where <scm> is a short string identifying the source code control system upstream uses (e.g. "git", "svn", "hg") or the string "snap". The <scm> string may be abbreviated to a single letter. <revision> is either a short git commit hash, a subversion revision number, or something else useful in identifying the precise revision in upstream’s source code control system. If the version control system does not provide an identifier (e.g. CVS), this part should be omitted. A full hash should not be used for <revision>, to avoid overly long version numbers; only the first 7 to 10 characters. For example, if the last upstream release was 0.4.1, a snapshot could use 0.4.1^20200601g01234ae in the Version: field. Similarly, if the upstream then makes a pre-release with version 0.5.0-rc1, but it is buggy, and we need to actually package two post-pre-release snapshots, those shapshots could use 0.5.0~rc1^20200701gdeadf00f and 0.5.0~rc1^20200702gdeadaeae in the Version: field. Alternatively, those three snapshots could be versioned as 0.4.1^1.git01234ae, 0.5.0~rc1^1.gitdeadf00f and 0.5.0~rc1^2.gitdeadaeae. Note that 0.4.1^<something> sorts higher than 0.4.1, but lower than both 0.4.2 and 0.4.1.<anything>. Upstream has never chosen a version When upstream has never chosen a version, you must use Version: 0. “0” sorts lower than any other possible value that upstream might choose. If upstream does choose to release "version 0", then just set Release: higher than the previous value. simply inserting a tilde or caret is enough to make the string sortable. For example, if upstream uses a sequence like 1.2pre1, 1.2pre2, 1.2final, then 1.2~pre1, 1.2~pre2, 1.2_final could be used as Version. The underscore (‘_’) is a visual separator that does not influence sort order, and is used here because "final" does not form a separate version component. If this is not possible, use something similar to the snapshot version information field described above, with the upstream version moved to the second part of the snapshot information field: <date>.<version>. For example, if upstream releases versions I, II, …, VIII, IX use 20200101.I, 20200201.II, …, 20200801.III, 20200901.IX in the Version field. Upstream breaks version scheme sorts lower than the packages already in Fedora, then you have little recourse but to increment the Epoch: tag, or to begin using it by adding Epoch: 1. At the same time, try to work with upstream to hopefully minimize the need to involve Epoch: in the future. Examples Comparing versions with rpmdev-vercmp When in doubt, verify the sorting with rpmdev-vercmp from the rpmdevtools package: $ rpmdev-vercmp 2~almost^post 2.0.1 2~almost^post < 2.0.1 Traditional versioning with part of the upstream version information in the release field The method described in this section is deprecated, but may be used. As mentioned in the dot, and caret section above, this method is recommended for packages with complex versioning when supporting RHEL7 and other systems with old rpm versions. This method for dealing with most pre- and post-release versions and unsortable versions. Examples Examples of many possible versioning scenarios of traditional versioning are available from Package Versioning Examples. You need to change an old branch without rebuilding the others Sometimes, you may find yourself in a situation where an older branch needs a fix, but the newer branches are fine. For example, if a package has a version-release of 1.0-1%{?dist} in F35 and F36, and only F35 needs a fix. Normally, you would need to bump the release in each of the branches to ensure that F35 < F36,.
https://docs.fedoraproject.org/it/packaging-guidelines/Versioning/
2021-11-27T14:44:32
CC-MAIN-2021-49
1637964358189.36
[]
docs.fedoraproject.org
The menu options available in the admin portal are different depending on the installed license. This because since FileCap server version 4.0.0 FileCap also supports MSP (Multiple customers on one deployment) which have some additional menu options. Some menu options can be hidden depending on the license used, see the following table:
https://docs.filecap.com/filecap-administration-portal/menu-options/
2021-11-27T14:06:39
CC-MAIN-2021-49
1637964358189.36
[]
docs.filecap.com
Get Exchange¶ This topic introduces how to get the JAR file of Nebula Exchange. Download the JAR file directly¶ The JAR file of Exchange Community Edition can be downloaded directly. To download Exchange Enterprise Edition, get Nebula Graph Enterprise Edition Package first. Get the JAR file by compiling the source code¶ You can get the JAR file of Exchange Community Edition by compiling the source code. The following introduces how to compile the source code of Exchange. Enterpriseonly You can get Exchange Enterprise Edition in Nebula Graph Enterprise Edition Package only. Prerequisites¶ - Download pulsar-spark-connector_2.11, and unzip it to io/streamnative/connectorsdirectory of the local Maven library. Steps¶ Clone the repository nebula-exchangein the /directory. git clone -b v2.6 Switch to the directory nebula-exchange. cd nebula-exchange/nebula-exchange Package Nebula Exchange. mvn clean package -Dmaven.test.skip=true -Dgpg.skip -Dmaven.javadoc.skip=true After the compilation is successful, you can view a directory structure similar to the following in the current directory. . ├── README-CN.md ├── README.md ├── pom.xml ├── src │ ├── main │ └── test └── target ├── classes ├── classes.timestamp ├── maven-archiver ├── nebula-exchange-2.x.y-javadoc.jar ├── nebula-exchange-2.x.y-sources.jar ├── nebula-exchange-2.x.y.jar ├── original-nebula-exchange-2.x.y.jar └── site In the target directory, users can find the exchange-2.x.y.jar file. Note The JAR file version changes with the release of the Nebula Java Client. Users can view the latest version on the Releases page. When migrating data, you can refer to configuration file target/classes/application.conf. Failed to download the dependency package¶ If downloading dependencies fails when compiling: - Check the network settings and ensure that the network is normal. Modify the mirrorpart of Maven installation directory libexec/conf/settings.xml: <mirror> <id>alimaven</id> <mirrorOf>central</mirrorOf> <name>aliyun maven</name> <url></url> </mirror>
https://docs.nebula-graph.io/2.6.1/nebula-exchange/ex-ug-compile/
2021-11-27T14:08:54
CC-MAIN-2021-49
1637964358189.36
[]
docs.nebula-graph.io
Welcome to mozilla.org’s documentation!¶ bedrock is the code name of the new mozilla.org. It is bound to be as shiny, awesome, and open sourcy as always. Perhaps even a little more. bedrock is a web application based on Django. Patches are welcome! Feel free to fork and contribute to this project on Github. Contents¶ - Installing Bedrock - Localization - Developing on Bedrock - How to contribute - Continuous Integration & Deployment - Front-end testing - Managing Redirects - JavaScript Libraries - Newsletters - Tabzilla - Mozilla.UITour - Send to Device widget - Firefox Accounts Signup Form - Analytics
http://bedrock.readthedocs.io/en/latest/
2017-05-22T19:20:46
CC-MAIN-2017-22
1495463607046.17
[]
bedrock.readthedocs.io
Guide and Architecture Guide Berkley Paket Berkley Packet Filter hints at a packet filtering specific purpose, the instruction set is generic and flexible enough these days that there are many use cases for BPF apart from networking. See References for a list of projects which use BPF. Cilium uses BPF heavily in its data path, see Architecture Guide for further information. The goal of this chapter is to provide a BPF reference guide in oder and sparc6464/Kconfig: select HAVE_EBPF_JIT arch/powerpc/Kconfig: select HAVE_EBPF_JIT if PPC. Eventually LLVM needs to compile the entire code into a flat sequence of BPF instructions for a given program section.. #include <linux/bpf.h> #ifndef __section # define __section(NAME) \ __attribute__((section(NAME), used)) #endif #ifndef __inline # define __inline \ inline __attribute__((always_inline)) #endif static __inline int foo(void) { return XDP_DROP; } __section("prog") int xdp_drop(struct xdp_md *ctx) { return foo(); } char __license[] __section("license") = "GPL"; tag c5f7825e5dac396f # tc filter show dev em1 egress filter protocol all pref 49152 bpf filter protocol all pref 49152 bpf handle 0x1 tc-example.o:[egress] direct-action will be provided through the detailed view with ip -d linkonce the kernel API gains support for dumping additional attributes. In order to remove the existing XDP program from the interface, the following command must be issued:# ip link set dev em1 xdp [ 3389.935847] JIT code: 00000000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 68 [ 3389.935849] JIT code: 00000010: 44 2b 4f 6c 4c 8b 87 d8 00 00 00 be 0c 00 00 00 [ 3389.935850] JIT code: 00000020: e8 1d 94 ff e0 3d 00 08 00 00 75 16 be 17 00 00 [ 3389.935851] JIT code: 00000030: 00 e8 28 94 ff e0 83 f8 01 75 07 b8 ff ff 00 00 [ 3389.935852] JIT code: 00000040: eb 02 31 c0 c9 c3/net/, there is a tool called bpf_jit_disasm. It reads out the latest dump and prints the disassembly for further inspection: # ./bpf_jit_disasm 70 bytes emitted from JIT compiler (pass:3, flen:6) ffffffffa0069c8f + <x>: 0: push %rbp 1: mov %rsp,%rbp 4: sub $0x60,%rsp 8: mov %rbx,-0x8(%rbp) c: mov 0x68(%rdi),%r9d 10: sub 0x6c(%rdi),%r9d 14: mov 0xd8(%rdi),%r8 1b: mov $0xc,%esi 20: callq 0xffffffffe0ff9442 25: cmp $0x800,%eax 2a: jne 0x0000000000000042 2c: mov $0x17,%esi 31: callq 0xffffffffe0ff945e 36: cmp $0x1,%eax 39: jne 0x0000000000000042 3b: mov $0xffff,%eax 40: jmp 0x0000000000000044 42: xor %eax,%eax 44: leaveq 45: retq Alternatively, the tool can also dump related opcodes along with the disassembly. # ./bpf_jit_disasm -o 70 bytes emitted from JIT compiler (pass:3, flen:6) ffffffffa0069c8f + <x>: 0: push %rbp 55 1: mov %rsp,%rbp 48 89 e5 4: sub $0x60,%rsp 48 83 ec 60 8: mov %rbx,-0x8(%rbp) 48 89 5d f8 c: mov 0x68(%rdi),%r9d 44 8b 4f 68 10: sub 0x6c(%rdi),%r9d 44 2b 4f 6c 14: mov 0xd8(%rdi),%r8 4c 8b 87 d8 00 00 00 1b: mov $0xc,%esi be 0c 00 00 00 20: callq 0xffffffffe0ff9442 e8 1d 94 ff e0 25: cmp $0x800,%eax 3d 00 08 00 00 2a: jne 0x0000000000000042 75 16 2c: mov $0x17,%esi be 17 00 00 00 31: callq 0xffffffffe0ff945e e8 28 94 ff e0 36: cmp $0x1,%eax 83 f8 01 39: jne 0x0000000000000042 75 07 3b: mov $0xffff,%eax b8 ff ff 00 00 40: jmp 0x0000000000000044 eb 02 42: xor %eax,%eax 31 c0 44: leaveq c9 45: retq c3. Setting up a BPF development environment¶ The first thing we really need is to get the dependencies right. This might be tricky so below is a step by step guide to get up and running on Fedora and Ubuntu. Fedora 25¶ sudo dnf install -y git gcc ncurses-devel elfutils-libelf-devel bc \ openssl-devel libcap-devel clang llvm Note Note If you are running some other Fedora derivative and dnf is missing, try using yum instead. Ubuntu 17.04¶ sudo apt-get install -y make gcc libssl-dev bc libelf-dev libcap-dev \ clang gcc-multilib llvm libncurses5-dev git Compiling the kernel¶ You should have git installed by now, if not see installation instructions. Getting the source git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git Supplying the depth option let’s us clone a smaller portion of the history and it’s faster. If you want the full history drop –depth 1. Navigate to the kernel source cd net-next We need a valid configuration to compile our new kernel. cp /boot/config-`uname -r` .config Prepare the configuration for use make clean make olddefconfig Most distros have the appropriate defaults, but verify the BPF knobs are sane grep BPF .config # should be something like CONFIG_CGROUP_BPF=y CONFIG_BPF=y CONFIG_BPF_SYSCALL=y CONFIG_NETFILTER_XT_MATCH_BPF=m CONFIG_NET_CLS_BPF=m CONFIG_NET_ACT_BPF=m CONFIG_BPF_JIT=y CONFIG_LWTUNNEL_BPF=y CONFIG_HAVE_EBPF_JIT=y CONFIG_BPF_EVENTS=y CONFIG_TEST_BPF=m If you get very different results, you can correct the defaults via the menu configuration make menuconfig The build might generate a lot of files depending on your configuration. Try to at least to have 30G of space available before compiling. To compile the kernel and modules make -j`grep -Pc '^processor\t' /proc/cpuinfo` make modules Install them all by running the commands below one at a time. Stop to check for signs of failure or errors. sudo make modules_install sudo make install make headers_install Before we can reboot we need to update the menu entries in grub depending on which bootloader you use this might be different. It might look something like Ubuntu sudo update-grub Fedora sudo grub2-mkconfig -o /boot/grub2/grub.cfg If you are using EFI then use this path instead sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg If all of the above went fine it’s time to reboot sudo reboot Verifying the setup¶ After you have booted into your new kernel, navigate to the BPF test verifier. We want to start checking everything is at it should be. cd net-next/tools/testing/selftests/bpf/ make sudo ./test_verifier The tests verifier should print out all the checks being performed. Look for the summary near the end Summary: 418 PASSED, 0 FAILED If you see any failures stop, please contact us on Slack with the full test output. You can also try out the other tests but some of those might fail depending on your hardware setup. References¶ Mentioned lists of projects, talks, papers, and further reading material are likely not complete. Thus, feel free to open pull requests to complete the list. Projects using BPF¶ The following list includes () Talks & Publications¶ The following list includes publications and talks related to BPF and XDP: -, Further Reading¶ - Dive into BPF: a list of reading material, Quentin Monnet () - XDP - eXpress Data Path, Jesper Dangaard Brouer ()
http://docs.cilium.io/en/latest/bpf/
2017-05-22T19:27:44
CC-MAIN-2017-22
1495463607046.17
[]
docs.cilium.io
The Programs page of the Internet Explorer Customization Wizard 11 lets you pick the default programs to use for Internet services, like email, contact lists, and newsgroups, by importing settings from your computer. Important The customizations you make on this page only apply to Internet Explorer for the desktop. To use the Programs page Determine whether you want to customize your connection settings. You can pick: Do not customize Program Settings. Pick this option if you don’t want to set program associations for your employee’s devices. -OR- Import the current Program Settings. Pick this option to import the program associations from your device and use them as the preset for your employee’s program settings. Note If you want to change any of your settings, you can click Modify Settings to open the Internet Properties box, click Set associations, and make your changes. Click Next to go to the Additional Settings page or Back to go to the Add a Root Certificate page.
https://docs.microsoft.com/en-us/internet-explorer/ie11-ieak/programs-ieak11-wizard
2017-05-22T20:10:28
CC-MAIN-2017-22
1495463607046.17
[]
docs.microsoft.com