content (string, 0–557k) | url (string, 16–1.78k) | timestamp (timestamp[ms]) | dump (string, 9–15) | segment (string, 13–17) | image_urls (string, 2–55.5k) | netloc (string, 7–77)
Learners can use the platform to interact with one another around video content: introduce themselves, ask questions, share thoughts, and reply to other learners' questions. In general, this is a way for learners to share thoughts, theories and ideas, and in some cases even feelings (learners sharing how a video made them feel). Some courses include learners from different regions, backgrounds or even countries. Starting a course with learner introductions can overcome social distance, help the learners feel more comfortable and encourage collaborative learning. Learners can introduce themselves as part of the discussion (the instructor can invite users to introduce themselves), or even in an introduction video (on which other users can comment). There is a lot that a learner can learn from their peers. Asking questions will not only help the learner gain a better understanding of the video, but can also expose learners to other ways of thinking. Having the opportunity to share one's personal view of things can create meaningful discussions between learners, the kind that nurtures outside-of-the-box thinking and celebrates similarities and differences between learners. For a learner, being able to answer others' questions means taking a step forward in terms of learning. When learners are given the legitimacy to answer their peers, their answers become more in-depth, established, and based on knowledge, as the responsibility to provide a good, relevant, meaningful answer is on their shoulders. Want to share other benefits and usages of Learners Interaction with us? We'd love to hear them! Click here to tell us.
https://docs.annoto.net/guides/annoto-use-cases/students-interaction
2021-01-15T17:44:57
CC-MAIN-2021-04
1610703495936.3
[]
docs.annoto.net
Information for submitting a group of modules¶ Topics Submitting a group of modules¶ This section discusses how to get multiple related modules into Ansible. This document is intended both for companies wishing to add modules for their own products and for users of 3rd party products wishing to add Ansible functionality. It's based on module development tips and tricks that the Ansible core team and community have accumulated. Before you start coding¶ Although it's tempting to get straight into coding, there are a few things to be aware of first. This list of prerequisites is designed to help ensure that you develop high-quality modules that flow easily through the review process and get into Ansible more quickly. - Read through all the pages linked off Should you develop a module?, paying particular attention to Contributing your module to Ansible. - New modules must be PEP 8 compliant. See PEP 8 for more information. - Starting with Ansible version 2.7, all new modules must support Python 2.7+ and Python 3.5+. If this is an issue, please contact us (see the "Speak to us" section later in this document to learn how). - Have a look at the existing modules and how they've been named in the All modules index, especially in the same functional area (such as cloud, networking, databases). - Shared code can be placed into lib/ansible/module_utils/. - Shared documentation (for example describing common arguments) can be placed in lib/ansible/plugins/doc_fragments/. - With great power comes great responsibility: Ansible module maintainers have a duty to help keep modules up to date. As with all successful community projects, module maintainers should keep a watchful eye for reported issues and contributions. - Although not required, unit and/or integration tests are strongly recommended. Unit tests are especially valuable when external resources (such as cloud or network devices) are required. For more information see Testing Ansible and the Testing Working Group. * Starting with Ansible 2.4 all Network modules MUST have unit tests. Naming convention¶ As you may have noticed when looking under lib/ansible/modules/ we support up to two directories deep (but no deeper), e.g. databases/mysql. This is used to group files on disk as well as to group related modules into categories and topics in the Module Index, for example: Database modules. The directory name should represent the product or OS name, not the company name. Each module should have the above (or a similar) prefix; see the existing All modules index for examples. Note: - File and directory names are always in lower case - Words are separated with an underscore ( _ ) character - Module names should be in the singular, rather than plural, e.g. command, not commands Speak to us¶ Circulating your ideas before coding is a good way to help you set off in the right direction. After reading the "Before you start coding" section you will hopefully have a reasonable idea of the structure of your modules. We've found that writing a list of your proposed module names and a one or two line description of what they will achieve, and having that reviewed by Ansible, is a great way to ensure the modules fit the way people have used Ansible modules before, and therefore make them easier to use. Where to get support¶ Ansible has a thriving and knowledgeable community of module developers that is a great resource for getting your questions answered.
In the Ansible Community Guide you can find how to: - Subscribe to the Mailing Lists - We suggest the "Ansible Development List" (for code freeze info) and the "Ansible Announce list". - IRC - We have found that the #ansible-devel channel on FreeNode's IRC network works best for module developers, so we can have an interactive dialogue. - IRC meetings - Join the various weekly IRC meetings (see the meeting schedule and agenda page). Your first pull request¶ Now that you've reviewed this document, you should be ready to open your first pull request. The first PR is slightly different to the rest because it: - defines the namespace - provides a basis for detailed review that will help shape your future PRs - may include shared documentation (doc_fragments) that multiple modules require - may include shared code (module_utils) that multiple modules require The first PR should include the following files: lib/ansible/modules/$category/$topic/__init__.py - An empty file to initialize the namespace and allow Python to import the files. Required new file. lib/ansible/modules/$category/$topic/$yourfirstmodule.py - A single module. Required new file. lib/ansible/plugins/doc_fragments/$topic.py - Code documentation, such as details regarding common arguments. Optional new file. lib/ansible/module_utils/$topic.py - Code shared between more than one module, such as common arguments. Optional new file. And that's it (a minimal module sketch is shown at the end of this section). Before pushing your PR to GitHub it's a good idea to review Contributing your module to Ansible again. After publishing your PR, a Shippable CI test should run within a few minutes. Check the results (at the end of the PR page) to ensure that it's passing (green). If it's not passing, inspect each of the results. Most of the errors should be self-explanatory and are often related to badly formatted documentation (see YAML Syntax) or code that isn't valid Python 2.6 or valid Python 3.5 (see Ansible and Python 3). If you aren't sure what a Shippable test message means, copy it into the PR along with a comment and we will review. If you need further advice, consider joining the #ansible-devel IRC channel (see the "Where to get support" section). We have an ansibullbot helper that comments on GitHub Issues and PRs, which should highlight important information. Subsequent PRs¶ By this point your first PR that defined the module namespace should have been merged. You can take the lessons learned from the first PR and apply them to the rest of the modules. Raise exactly one PR per module for the remaining modules. Over the years we've experimented with different sized module PRs, ranging from one module to many tens of modules, and during that time we've found the following: - A PR with a single file gets a higher quality review - PRs with multiple modules are harder for the creator to ensure all feedback has been applied - PRs with many modules take a lot more work to review, and tend to get passed over for easier-to-review PRs. You can raise up to five PRs at once (5 PRs = 5 new modules) after your first PR has been merged. We've found this is a good batch size to keep the review process flowing. Maintaining your modules¶ Now that your modules are integrated there are a few bits of housekeeping to be done. Bot Meta - Update Ansibullbot (BOTMETA.yml) so it knows who to notify if/when bugs or PRs are raised against your modules. If there are multiple people that can be notified, please list them. That avoids waiting on a single person who may be unavailable for any reason.
Note that in BOTMETA.yml you can take ownership of an entire directory. Review Module web docs - Review the autogenerated module documentation for each of your modules, found in Module Docs, to ensure it is correctly formatted. If there are any issues, please fix them by raising a single PR. If the module documentation hasn't been published live yet, please let a member of the Ansible Core Team know in the #ansible-devel IRC channel. New to git or GitHub¶ We realize this may be your first use of Git or GitHub. The following guides may be of use: - How to create a fork of ansible/ansible - How to sync (update) your fork - How to create a Pull Request (PR) Please note that in the Ansible Git repo the main branch is called devel rather than master, which is used in the official GitHub documentation. After your first PR has been merged, ensure you "sync your fork" with ansible/ansible so that you've pulled in the directory structure and any shared code or documentation previously created. As stated in the GitHub documentation, always use feature branches for your PRs; never commit directly into devel.
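To make the file layout described above concrete, here is a minimal sketch of what $yourfirstmodule.py might contain, using a hypothetical mydb_user module (the module name, its options, and the database/mydb category are illustrative placeholders, and the required DOCUMENTATION, EXAMPLES and RETURN strings are omitted for brevity):

#!/usr/bin/python
# Hypothetical first module: lib/ansible/modules/database/mydb/mydb_user.py
from ansible.module_utils.basic import AnsibleModule


def main():
    # Declare the module's options; AnsibleModule parses and validates them.
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            state=dict(type='str', default='present', choices=['present', 'absent']),
        ),
        supports_check_mode=True,
    )

    result = dict(changed=False, name=module.params['name'])

    # Real logic (talking to the product's API) would go here and set
    # result['changed'] accordingly; shared helpers belong in module_utils.
    module.exit_json(**result)


if __name__ == '__main__':
    main()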
https://docs.ansible.com/ansible/2.8/dev_guide/developing_modules_in_groups.html
2021-01-15T18:00:03
CC-MAIN-2021-04
1610703495936.3
[]
docs.ansible.com
Now that you've successfully trained the model, you may want to test its performance before using it in the production environment. The Model Evaluation tool allows you to perform a cross validation on a specified model version. Once the evaluation is complete, you'll be able to view various metrics that describe the model's performance. Model Evaluation performs a K-split cross validation on the data you used to train your custom model. In the cross validation process, it will: 1. Set aside a random 1/K subset of the training data and designate it as a test set, 2. Train a new model with the remaining training data, 3. Pass the test set data through this new model to make predictions, 4. Compare the predictions against the test set's actual labels, and 5. Repeat steps 1) through 4) across K splits to average out the evaluation results. To run the evaluation on your custom model, it will need to meet the following criteria: a custom-trained model version with at least 2 concepts and at least 10 training inputs per concept (at least 50 inputs per concept is recommended). You can run the evaluation on a specific model version of your custom model in the Portal. Go to your Application, click on your model of interest, and select the Versions tab. Simply click on the Evaluate button for the specific model version. The evaluation may take up to 30 minutes. Once it is complete, the Evaluate button will become a View button. Click on the View button to see the evaluation results. Note that the evaluation may result in an error if the model version doesn't satisfy the requirements above. For more information on how to interpret the evaluation results and how to improve your model, check out the Evaluation corner under the "Advanced" section below.
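The K-split procedure above can be illustrated with a short, generic sketch; this uses scikit-learn and synthetic data purely to show the mechanics and is not the Clarifai implementation or API:

# Generic illustration of the K-split cross validation described above:
# hold out 1/K of the data, train on the rest, predict, compare against
# the held-out labels, and average the results over the K splits.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])  # step 2
    preds = model.predict(X[test_idx])                                          # step 3
    scores.append(accuracy_score(y[test_idx], preds))                           # step 4

print(sum(scores) / len(scores))  # step 5: averaged evaluation result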
https://docs.clarifai.com/api-guide/model/evaluate
2021-01-15T18:24:34
CC-MAIN-2021-04
1610703495936.3
[]
docs.clarifai.com
What is it? The platform consists of three web applications: - Storefront (or SF for short), which is the customer-facing web application, - RESTful services for the storefront, and - Mission control app (a.k.a. Admin app), used across the platform. Be sure to give kudos to us in the website footer in the following format: For the Bill of Materials please refer to the documentation of the specific version. For advanced modules, professional services and support, or if you get stuck and need some advice, you can always contact us using the form on the official site. We also offer an Enterprise flavour of the platform for customers with more demanding performance criteria and customers wishing to engage in B2B operations. Please contact us if you require more information, using the form on the official site. What is the project status at the moment? The project is in active development. The current version is 3.6.3 GA, released in Q4 2019. Planned releases: the current version release roadmap can be found here. If you would like to contact us directly please use the contact form on the official site. The project is in active development and our team continues to bring amazing new features and improvements. You can review the current state of events in our public Jira. All feature requests are welcome, but please leave enough details for them to be considered. Where to start? If you have not yet explored the demo or the enterprise demo, those are good places to start to get acquainted with the storefront. You can also request access to the admin app via the contact form. Then you can simply get the project from GitHub and have a play with it. A guide for basic installation can be found in the documentation for the specific version (see the documentation section below). Decided that our platform is the right choice and need a competent team of professionals to make your dream store come true? Don't hesitate to contact us. Have questions or would like to contribute? Since we moved to GitHub it is super easy to contribute to the project. All you need is to fork the repository, implement your desired features and then send us a pull request. We will review your feature and make it part of the official release. If you have any questions you can get in touch with us using the contact form on the official site or post a question to our Google group.
https://docs.inspire-software.com/docs/pages/diffpagesbyversion.action?pageId=1343696&selectedPageVersions=10&selectedPageVersions=11
2021-01-15T16:46:14
CC-MAIN-2021-04
1610703495936.3
[]
docs.inspire-software.com
Error Handling This chapter describes the mechanisms Caché Basic provides for handling application errors. On Error Goto The On Error Goto statement lets you define an action to take should a runtime error occur:
On Error Goto MyError
x = 1 / 0 'induce an error
PrintLn "We will never get here..."
' ...
MyError:
PrintLn "Error: " & Err.Description
When a runtime error occurs, execution will jump to the local label specified by the On Error Goto statement. The Err object (see below) will contain information about the error. The Err Object The Err object is a built-in object that contains information about the current error. Typically you use this within an On Error handler. For more information, refer to the Err Object reference page. The System.Status Object Many of the methods in the Caché class library return success or failure information via the %Status data type. For example, the %Save method, used to save an instance of a %Persistent object to the database, returns a %Status value indicating whether or not the object was saved. With Caché Basic you can use the System.Status object to inspect and manipulate %Status values. For example, the following code tries to save an object that has missing required values:
person = New Sample.Person()
person.Name = "Nobody"
person.SSN = "" ' required!
' Save this object
status = person.%Save()
If (System.Status.IsError(status)) Then
    System.Status.DisplayError(status)
End If
For more information, refer to the System Object reference page as well as the %SYSTEM.Status class.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GBAS_ERROR
2021-01-15T18:23:34
CC-MAIN-2021-04
1610703495936.3
[]
docs.intersystems.com
RepublishOaiPmhWorkflowOperation Description The Republish OAI-PMH workflow operation will update metadata in your OAI-PMH repositories. If the media has not been published before, this operation will be skipped. Otherwise, all elements matching the flavors and tags will be replaced. If elements are missing from the media package, the corresponding published elements will be removed as well. Parameter Table Operation Example
<operation id="republish-oaipmh" description="Update recording metadata in default OAI-PMH repository">
  <configurations>
    <configuration key="source-flavors">dublincore/*,security/*</configuration>
    <configuration key="repository">default</configuration>
  </configurations>
</operation>
https://docs.opencast.org/r/5.x/admin/workflowoperationhandlers/republish-oaipmh-woh/
2021-01-15T18:17:52
CC-MAIN-2021-04
1610703495936.3
[]
docs.opencast.org
FatturaPA FatturaPA is a Splynx add-on that allows exporting Splynx invoices in XML format, which can then be imported into the electronic invoicing system. The add-on is configured under Config -> Integrations -> Modules list -> splynx_fatturapa, using the edit action. This process is illustrated in the images below: Select a partner to configure. Then configure the partner parameters: ProgressivoInvio - No particular criteria are established; how this field is filled in (it is intended to contain an alphanumeric identifier of the transmitted file) is left to the user's judgement according to requirements, but in compliance with the characteristics established by the XSD schema. FormatoTrasmissione - Takes a fixed value equal to "SDI11". CodiceDestinatario - The field must contain the 6-character code, present on IndexPA among the information related to the electronic invoicing service, associated with the office which, within the recipient administration, performs the function of receiving (and possibly processing) the invoice. Alternatively, you can fill the field with the "Central" office code or with the default value "999999", when the conditions provided for by the interpretative circular of the Ministry of Economy and Finance no. 1 of 31 March 2014 apply. RegimeFiscale - The field must contain one of the codes provided in the associated value list; the code identifies, on the basis of the commercial sector or the income situation, the tax regime in which the seller/lender operates. NumeroREA - The field must contain the number with which the seller/lender is registered in the business register. CapitaleSociale - The field must contain the amount of the capital actually paid in as per the last financial statement; a numerical value consisting of an integer part and two decimals is expected; the decimals, separated from the integer part by the dot character ("."), must always be indicated even if zero (e.g.: 28000000.00). SocioUnico - The field must contain the value "SU" in the case of a single-member company, or "SM" in the case of a multi-member company. StatoLiquidazione - The field must contain the value "LS" in the case of companies in liquidation, or "LN" in the case of companies not in liquidation. EsigibilitaIVA - The field can be filled with "I" for VAT with immediate collectability, "D" for VAT with deferred collectability, or "S" for split payment. CondizioniPagamento - In this field, "TP01" should be indicated in the case of payment in installments, "TP02" in the case of total payment in a single instalment, and "TP03" in the case of payment of an advance. ModalitaPagamento - The field must contain one of the encoded values present in the associated list. You also have to configure the partner settings under Config / System / Company Information: Company name - required; Street - required; ZIP Code - required; City - required; Phone - required; VAT number - required (format must be like - IT11111111111); VAT % - required; Bank Account - required; Bank name - required. Thereafter, you need to set up the customer additional fields for the customers whose invoices you will export, as depicted below. Note: the Street, ZIP code and City fields in the customer's information must be set!
IdPaese - required (format must be like - IT) IdCodice - required (format must be like - 00071090303) Provincia - required (format must be like - CB) Nazione - required (format must be like - IT) Rif.Ufficio - required (if Category = Private person, then Rif.Ufficio = CodiceDestinatario (CodiceDestinatario can be viewed in Config / Integrations / Modules list / Splynx Add-on Fatturapa); if Category = Company, then set your Office code here) Once you've installed and configured the splynx-fatturapa add-on, you can navigate to Administration / Reports / Fatturapa Export and export invoices in XML format, as depicted below: Click the following button. If the process completes successfully, you will see a new record in the Fatturapa table with the option to download an archive with the invoices in XML format. If an error occurred, you will see a new record in the table with an exclamation mark in the "Actions" column. Click the following button
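As a rough illustration of how the transmission-data fields described above fit together in the exported XML, here is a small Python sketch that builds just a DatiTrasmissione fragment. The element names follow the FatturaPA standard, but the helper function and the sample values are hypothetical and are not part of the Splynx add-on:

# Illustrative only: build the DatiTrasmissione block of a FatturaPA XML file
# using the fields discussed above. Values are placeholders.
import xml.etree.ElementTree as ET

def build_dati_trasmissione(id_paese, id_codice, progressivo, codice_destinatario):
    dati = ET.Element('DatiTrasmissione')
    id_trasmittente = ET.SubElement(dati, 'IdTrasmittente')
    ET.SubElement(id_trasmittente, 'IdPaese').text = id_paese             # e.g. "IT"
    ET.SubElement(id_trasmittente, 'IdCodice').text = id_codice           # e.g. "00071090303"
    ET.SubElement(dati, 'ProgressivoInvio').text = progressivo            # free-form file identifier
    ET.SubElement(dati, 'FormatoTrasmissione').text = 'SDI11'             # fixed value per the add-on docs
    ET.SubElement(dati, 'CodiceDestinatario').text = codice_destinatario  # 6-char office code or "999999"
    return dati

fragment = build_dati_trasmissione('IT', '00071090303', '00001', '999999')
print(ET.tostring(fragment, encoding='unicode'))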
https://docs.splynx.com/addons_modules/FatturaPA/FatturaPA.md
2021-01-15T18:34:01
CC-MAIN-2021-04
1610703495936.3
[array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F0.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F1.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F4.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F2.png', 'edit'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F5.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F6.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F7.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F8.1.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F9.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F10.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F11.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F12.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F16.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F13.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F14.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F15.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F17.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F18.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2FFatturaPA%2F19.png', '1.png'], dtype=object) ]
docs.splynx.com
How to Prevent Shapes from Being Connected to Themselves Environment Description How to prevent a RadDiagramShape from connecting to itself when you single-click on one of its connector points. Solution To prevent this, use one of the following two solutions: Solution #1: Set the ReflexiveRouter of RadDiagram to null.
this.diagram.RoutingService.ReflexiveRouter = null;
Solution #2: Handle the ConnectionManipulationCompleted event of RadDiagram, and mark it as handled if the connection source is the same as the Shape of the event arguments.
private void RadDiagram_ConnectionManipulationCompleted(object sender, Telerik.Windows.Controls.Diagrams.ManipulationRoutedEventArgs e)
{
    if (e.Connection.Source == e.Shape)
    {
        e.Handled = true;
    }
}
https://docs.telerik.com/devtools/wpf/knowledge-base/kb-diagrams-prevent-shape-from-being-contected-to-itself
2021-01-15T18:30:27
CC-MAIN-2021-04
1610703495936.3
[]
docs.telerik.com
When you manually install guest operating systems and applications on a virtual machine, you introduce a risk of misconfiguration. By using a template to capture a hardened base operating system image with no applications installed, you can ensure that all virtual machines are created with a known baseline level of security. You can use templates that can contain a hardened, patched, and properly configured operating system to create other, application-specific templates, or you can use the application template to deploy virtual machines. Procedure - Provide templates for virtual machine creation that contain hardened, patched, and properly configured operating system deployments. If possible, deploy applications in templates as well. Ensure that the applications do not depend on information specific to the virtual machine to be deployed. What to do next For more information about templates, see the vSphere Virtual Machine Administration documentation.
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-3399BC47-45E8-494B-9B57-E498DD294A47.html
2021-01-15T17:31:28
CC-MAIN-2021-04
1610703495936.3
[]
docs.vmware.com
This template is a great starting point for any post-implementation review. It's designed to help you keep track of what has been done, what needs to be done, and where you need more information in order to complete your review. The ITIL Post Implementation Review Template is a document that can be used to review the implementation of an IT Service Management (ITSM) initiative. It guides stakeholders through the process of understanding what was done and how well it went, identifies areas for improvement, and provides recommendations on where to go next. Template Details : Format: MS Excel The template includes the following sections - Purpose Of the PIR - Scope of the PIR - Objectives of the Change Requests - Post Implementation Review Details - PIR Assessment Questionnaire - Benefits Realization - Customer Requirement - Lessons Learned and Recommendations
https://www.itil-docs.com/products/post-implementation-review
2021-09-16T21:48:51
CC-MAIN-2021-39
1631780053759.24
[]
www.itil-docs.com
Layer Management (Available in all TurboCAD Variants) Default UI Menu: Format/Layers Ribbon UI Menu: Note: Layers are not related to how objects are stacked in relation to their order of creation. If you change an object's layer, it does not affect its position in the object stack. Setting Up Layers There are two primary ways of creating and editing layers; you can perform essentially the same operations with either. Open the Layer Manager by selecting Format / Layers, or by clicking the Layers icon on the toolbar. This dialog is used to create layers, to assign properties to each layer, and to organize layers. It is divided into two frames. The first frame shows the tree containing the layer filters and layer templates. The second frame shows the layers, which are controlled by or derived from the item selected in the tree. The top node of the tree controls and shows all layers in the drawing. Layer 0 is the default layer, and all objects are placed here unless otherwise specified, or unless another layer is created and made active. Layer $CONSTRUCTION is created when construction geometry is created. Layer 0, the default layer, cannot be deleted, but you can change its properties. Layer Set: Located on the property toolbar. Color: Click on the color box to open the color dialog, then select a color. Warning: The Draw Order commands will not function as you expect if objects are on different layers and the layers have different Order values. Pen Width: Sets the line width. Objects will have the layer width if their width is set to By Layer. Print Style: Specifies the Print Style to be used by objects on that layer. VP Columns in Layers window: In addition to the main columns in the Layers window there are five columns with a VP prefix. VP stands for Viewport, and these columns are used to control how objects will appear within a selected viewport. The columns are: VP Visible, VP Color, VP Line Style, Pen Width, Print Style. The VP columns operate in the same way as the main columns, except that the settings affect only the selected viewports. To use the VP column settings: - Open the Layer Manager. - Go to Paper Space. - Select the Viewports you wish to configure. - Apply the settings in the VP columns of the Layer Manager. Note: Remember that settings which control object appearance via the Layer Manager (e.g. VP Color) only affect the properties of objects which are set to By Layer. Layer Manager Toolbar: Deletes the layer. Creating a New Layer - Open the Layer Manager by selecting Format / Layers, or by clicking the Layers icon on the toolbar. A dialog opens. In the dialog, right-click on Layer and select New Layer. Then assign a name for the layer in the Layer column (or accept the default name). - Adjust the various layer settings, such as color and line style. Deleting a Layer You may delete any layer except Layer 0. Layers can be deleted even if they contain objects. If the layer to be deleted is set as the default for a tool (in the General page of a tool's Properties window), you will receive a warning message before the layer is deleted. - Select Layers, and select the layer to be deleted. - Click Delete Layer. If the layer contains objects, the objects will be deleted. This action can be undone, in case you delete objects inadvertently. Deleting Construction and Constraints Layer: (Available in Platinum and Professional) Construction objects are placed on layer "$CONSTRUCTION". You can change construction geometry color and line styles via the layer manager.
You can now also delete the $Construction layer and $Constraints layer by selecting them in the Layer palette. Layer Templates Layer templates allow you to create and save alternate configurations for layers. Layer templates store how layers are set up, but do not store the layers themselves. Layer templates are saved in a *.lrs file which can be stored anywhere in your system directory. Layer Sets Default UI Menu: Format/Layers Ribbon UI Menu: A layer set is a group of layers which can be displayed as a group. This is useful for displaying certain aspects of a drawing without changing visibility settings of each layer individually. The default layer set is "All Layers," which appears in the Format menu. Creating and Manipulating Layer Sets - In the Layers window (Format / Layers), click Edit Layer Sets. - When the Layer Set dialog opens, select New. 3. Assign a name to the set, or accept the default name. The name appears on the Layer Set list. 4. On the list of layers, check the visibility of each layer you want to include in the layer set. 5. To display a layer set, open the Format menu. Note: While a layer set is displayed, the properties of each layer are not editable. To delete a layer set, select it from the Layer Set list and click Delete. To change the layers that appear in a layer set, select it from the Layer Set list and change the visibility settings. Layers of Groups and Blocks Layer Sorting You can use the Layer Manager or the Design Director to sort layers into a desired order. To sort the layers, click the icon at the top of any column, e.g. Name. Clicking that icon again will reverse the order of layers by that category. When layers are sorted, the order of layers will be the same throughout the application. This means that the order of layers shown in drop-down boxes will be the same as that in the Layer Manager or Design Director. Manipulating Layers and Properties
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Drawing-Aids/Layer-Management/
2021-09-16T21:16:48
CC-MAIN-2021-39
1631780053759.24
[array(['../../Storage/turbocad-2019-user-guide-publication/layer-management-2019-02-11.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0005.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/layer-management-2019-02-11-1.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0006.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0007.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/3-6-layer-management-img0008.png', 'img'], dtype=object) ]
docs.imsidesign.com
What's New in Version 8.2 July 2018 Version 8.2 of Alloy Navigator Express introduces online product activation and online help and provides many other features and enhancements over previous releases. User interface Map your relationships graphically Understanding the relationships between your critical IT infrastructure and the service you provide to your customers unlocks invaluable information, as the important links are shown in a graphical, network-like map. No more mistakes We've extended spell checking to HTML fields so you can ensure messages you send to customers and Knowledge Base articles you write are free of those pesky spelling errors. Easier working with views Switching between views and searching in grids is more intuitive now. The drop-down list of views has been moved to a more expected place: the upper left corner of the Module menu, immediately above the grid. Documentation Faster, relevant, anywhere help We've moved our already powerful context-sensitive help system online to ensure up-to-the-minute information and speedy search you can access anytime, anywhere. IT Assets Virtual relationships Virtual machines and their hosts are now automatically associated in their own dedicated area so you can leverage these critical relationships. Drive solid state data Now not only can you understand the utilization of hard drive space, but you can also easily recognize which computers are using solid state technology. Web Portal and Self Service Portal Create anywhere The Web Portal now enables you to create any item from anywhere, saving you time and effort when working with multiple product areas. Nifty slide outs When opening items in the Web Portal, a quick slide-out panel gives you immediate access to the information you need, making navigation speedy and efficient. Compact navigation We've condensed the Web Portal side bar navigation into a sleek new mobile approach that will save you screen real estate. Better self-navigation We've streamlined the Self-Service Portal interface for the Knowledge Base and Service Catalog to ensure your customers can more easily find what they need. Discovery and audit OS recognition improvements Recognizing operating systems has never been faster, which means you'll see decreased auditing time and faster turnaround in getting you the critical information you need. Device detection improvements Not only can you now detect a wider variety of printers and NAS devices, but you can expect more detailed information such as the hardware's manufacturer. Details to the switch port Now you can get complete end-to-end port mapping information for discovered switches, ensuring you understand not only what switches you have, but also which devices are connected to which port. Administration Reduced management, increased security Previous versions required multiple SQL Server accounts with elevated permissions, but now you'll only need a single account, reducing administration and providing greater security. One-click activation Upgrading your license has never been easier, whether you're upgrading or adding new technicians. No more fiddling with license files, just activate your product over the internet. Maximize technician usage The account utilization chart will help you quickly understand how your technicians work, resulting in a solid strategy for leveraging concurrent licensing and reducing IT costs. Audit your deleted data The mystery of deleted data has now been solved with new logging capabilities, so you'll know exactly who removed what and when they did it.
Switching account types Converting a technician account to an SSP customer account or vice versa no longer requires recreating accounts. Just open the account and choose the account type you want. Workflow management Workflow connectivity Now when you rename Macros Placeholder quick access You'll no longer need to back out of a deep workflow, because now you can right-click any placeholder to bring you directly to the source. Business logic Automation for the Person workflow The workflow for Person records is more flexible. Now you can control which status Person records will have when their user accounts are disabled in Active Directory: Inactive or Retired. You can also set the number of days before Persons in Inactive status are marked as Retired, or disable the automatic retirement. Different types of Tickets in the Self Service Portal Now you can customize the Self Service Portal workflow to allow your customers to create different types of Tickets. Rapid message flow For those of you finding yourselves in critical, time-sensitive industries where you cannot wait for communications to go out, outgoing e-mail can now be sent instantaneously.
https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/releasenotes/820.htm
2021-09-16T21:27:01
CC-MAIN-2021-39
1631780053759.24
[]
docs.alloysoftware.com
UpdateEndpointWeightsAndCapacities. EndpointArn pattern: arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:endpoint/.*
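UpdateEndpointWeightsAndCapacities can be invoked through the AWS SDK for Python; a minimal boto3 sketch is shown below. The endpoint and variant names are placeholders, and the weights and instance counts are purely illustrative:

# Illustrative boto3 call for UpdateEndpointWeightsAndCapacities.
# Endpoint and variant names below are placeholders.
import boto3

sagemaker = boto3.client('sagemaker')

response = sagemaker.update_endpoint_weights_and_capacities(
    EndpointName='my-endpoint',
    DesiredWeightsAndCapacities=[
        {
            'VariantName': 'variant-a',
            'DesiredWeight': 0.7,       # shift most traffic to this variant
            'DesiredInstanceCount': 2,
        },
        {
            'VariantName': 'variant-b',
            'DesiredWeight': 0.3,
            'DesiredInstanceCount': 1,
        },
    ],
)
print(response['EndpointArn'])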
https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateEndpointWeightsAndCapacities.html
2021-09-16T22:10:44
CC-MAIN-2021-39
1631780053759.24
[]
docs.amazonaws.cn
Visualize your data in real time at critical moments like game launch and during promotions. The Livestream page samples your Analytics data as it happens without any other data processing. Note that other areas of the Analytics Dashboard do not show data until processing is complete, introducing a delay of several hours. Livestream requires a Unity Plus or Pro subscription. The Livestream page has these sections: The Live Metrics section shows charts of incoming metrics data. The numbers in the display show the current day’s cumulative totals (reset at midnight GMT). The chart portions show activity over the last 5 minutes, starting when you load the Livestream page. The Activity Map marks the geographic locations of incoming Analytics metrics events. The Top Country Metrics section shows metrics from the most active countries. The Top Custom Events section shows the most common Standard and Custom Events dispatched while players use your game. The event counts are cumulative since you loaded the Livestream page. Note that the events shown on the Livestream page are sampled for efficiency. This means that not every event is individually counted and, if you open Livestream in two different pages, the numbers and events shown can be slightly different.
https://docs.unity3d.com/kr/2018.3/Manual/UnityAnalyticsLivestream.html
2021-09-16T23:29:47
CC-MAIN-2021-39
1631780053759.24
[]
docs.unity3d.com
Unity has a rich and sophisticated animation system (sometimes referred to as 'Mecanim'). Unity's animation system is based on the concept of Animation Clips, which are organized and managed by an Animator Controller. The Animator Controller acts as a "State Machine" which keeps track of which clip should currently be playing, and when the animations should change or blend. These special features are enabled by Unity's Avatar system, where humanoid characters are mapped to a common internal format. Each of these pieces - the Animation Clips, the Animator Controller, and the Avatar - is brought together on a GameObject via the Animator Component.
https://docs.unity3d.com/ru/2021.1/Manual/AnimationOverview.html
2021-09-16T23:12:44
CC-MAIN-2021-39
1631780053759.24
[]
docs.unity3d.com
Quick Start with garage¶ Table of Content What is garage?¶ garage is a reinforcement learning (RL) toolkit for developing and evaluating algorithms. The garage library also provides a collection of state-of-the-art implementations of RL algorithms. The toolkit provides a wide range of modular tools for implementing RL algorithms, including: Composable neural network models Replay buffers High-performance samplers An expressive experiment definition interface Tools for reproducibility (e.g. set a global random seed which all components respect) Logging to many outputs, including TensorBoard Reliable experiment checkpointing and resuming Environment interfaces for many popular benchmark suites Support for running garage in diverse environments, including always up-to-date Docker containers Why garage?¶ garage aims to provide both researchers and developers: a flexible and structured tool for developing algorithms to solve a variety of RL problems, a standardized and reproducible environment for experimenting with and evaluating RL algorithms, a collection of benchmarks and examples of RL algorithms. Kick Start garage¶ This quickstart will show how to quickly get started with garage in 5 minutes. import garage Algorithms¶ An array of algorithms are available in garage. They are organized in the GitHub repository as:
└── garage
    ├── envs
    ├── experiment
    ├── misc
    ├── np
    ├── plotter
    ├── replay_buffer
    ├── sampler
    ├── tf
    └── torch
Note: clickable links represent the directories of algorithms. A simple PyTorch example that imports the TRPO algorithm, as well as the GaussianMLPPolicy policy, the GaussianMLPValueFunction value function and the LocalSampler sampler in garage, is shown below:
import gym
import torch

from garage.envs import GarageEnv, normalize


def trpo_garage_pytorch():
    env = GarageEnv(normalize(gym.make(env_id)))  # specify env_id
    policy = PyTorch_GMP(env.spec, hidden_sizes=[32, 32])
    # ... the TRPO algorithm is then constructed with this policy, a value
    # function and a sampler, using discount=0.99 and gae_lambda=0.97
The full code can be found here. To know more about implementing new algorithms, see this guide. Running Examples¶ Garage ships with example files to help you get started. To get a list of examples, run: garage examples This prints a list of examples along with their fully qualified name, such as: tf/dqn_cartpole.py (garage.examples.tf.dqn_cartpole.py) To get the source of an example, run: garage examples tf/dqn_cartpole.py This will print the source on your console, which you can write to a file as follows: garage examples tf/dqn_cartpole.py > tf_dqn_cartpole.py You can also directly run an example by passing the fully qualified name to python -m, as follows: python -m garage.examples.tf.dqn_cartpole.py You can also access the examples for a specific version on GitHub by visiting the tag corresponding to that version and then navigating to src/garage/examples. Running Experiments¶ In garage, experiments are run using the "experiment launcher" wrap_experiment, a decorator for Python functions, which can be imported directly from the garage package. from garage import wrap_experiment Moreover, objects such as the trainer, environment, policy, sampler, etc., are commonly used when constructing experiments in garage.
"""A regression test for automatic benchmarking garage-PyTorch-TRPO.""" import torch from garage import wrap_experiment from garage.envs import GymEnv, normalize from garage.experiment import deterministic from garage.trainer import Trainer hyper_parameters = { 'hidden_sizes': [32, 32], 'max_kl': 0.01, 'gae_lambda': 0.97, 'discount': 0.99, 'n_epochs': 999, 'batch_size': 1024, } @wrap_experiment def trpo_garage_pytorch(ctxt, env_id, seed): """Create garage PyTorch TRPO model and training. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. env_id (str): Environment id of the task. seed (int): Random positive integer for the trial. """ deterministic.set_seed(seed) trainer = Trainer(ctxt) env = normalize(GymEnv(env_id)) policy = PyTorch_GMP(env.spec, hidden_sizes=hyper_parameters['hidden_sizes'],=hyper_parameters['discount'], gae_lambda=hyper_parameters['gae_lambda']) trainer.setup(algo, env) trainer.train(n_epochs=hyper_parameters['n_epochs'], batch_size=hyper_parameters['batch_size']) This page will give you more insight into running experiments. Plotting results¶ In garage, we use TensorBoard for plotting experiment results. This guide will provide details how to set up tensorboard when running experiments in garage. Experiment outputs¶ Localrunner is a state manager of experiments in garage, It is set up to create, save and restore the state, also known as snapshot object, upon/ during an experiment. The snapshot object includes the hyperparameter configuration, training progress, a pickled object of algorithm(s) and environment(s), tensorboard event file etc. Experiment results will, by default, output to the same directory as the garage package in the relative directory data/local/experiment. The output directory is generally organized as the following: └── data └── local └── experiment └── your_experiment_name ├── progress.csv ├── debug.log ├── variant.json ├── metadata.json ├── launch_archive.tar.xz └── events.out.tfevents.xxx wrap_experiment can be invoked with arguments to support actions like modifying default output directory, changing snapshot modes, controlling snapshot gap etc. For example, to modify the default output directory and change the snapshot mode from last (only last iteration will be saved) to all, we can do this: @wrap_experiment(log_dir='./your_log_dir', snapshot_mode='all') def my_experiment(ctxt, seed, lr=0.5): ... During an experiment, garage extensively use logger from Dowel for logging outputs to StdOutput, and/ or TextOutput, and/or CsvOutput. For details, you can check this. Open Source Support¶ Since October 2018, garage is active in the open-source community contributing to RL researches and developments. Any contributions from the community is more than welcomed. Resources¶ If you are interested in a more in-depth and specific capabilities of garage, you can find many other guides in this website such as, but not limited to, the followings: This page was authored by Iris Liu (@irisliucy).
https://garage.readthedocs.io/en/latest/user/get_started.html
2021-09-16T21:30:42
CC-MAIN-2021-39
1631780053759.24
[]
garage.readthedocs.io
Plutus scripts Cardano uses scripts to validate actions. These scripts, which are pieces of code, implement pure functions with True or False outputs. Script validation is the process of invoking the script interpreter to run a given script on appropriate arguments. What are scripts? A script is a program that decides whether or not the transaction that spends the output is authorized to do so. Such a script is called a validator script, because it validates whether the spending is allowed. A simple validator script would check whether the spending transaction was signed by a particular key – this would exactly replicate the behavior of simple pay-to-pubkey outputs. However, with a bit of careful extension, we can use scripts to express useful logic on the chain. The way the EUTXO model works is that validator scripts are passed three arguments: - Datum: this is a piece of data attached to the output that the script is locking (strictly, again, just the hash is present). This is typically used to carry state. - Redeemer: this is a piece of data attached to the spending input. This is typically used to provide an input to the script from the spender. - Context: this is a piece of data that represents information about the spending transaction. This is used to make assertions about the way the output is being sent (such as "Bob signed it"). Intuitive example For example, a kid wants to go on a Ferris wheel, but before getting on, they must be taller than the safety sign. We could express that idea in pseudocode, like:
def isTallEnough(attraction, passenger):
    return passenger["height"] >= attraction["minimumHeight"]

def getOnFerrisWheel():
    print("Get on the Ferris wheel")

ferrisWheel = {"minimumHeight": 120}
michael = {"height": 135}

if isTallEnough(attraction=ferrisWheel, passenger=michael):
    getOnFerrisWheel()
In this example the following applies: - The datum is the information about this transaction: michael.height. - The context is the state of the world, at that point meaning: ferrisWheel.minimumHeight. - The redeemer is the action to perform: getOnFerrisWheel() The validator script is the function that uses all that information: isTallEnough. DeFi example Now let's look at an example from the DeFi domain. We could implement an atomic swap, as follows: - The datum contains the keys of the two parties in the swap, and a description of what they are swapping - The redeemer is unused. - The context contains a representation of the transaction. The logic of the validator script is as follows: does the transaction make a payment from the second party to the first party, containing the value that they are supposed to send? If so, then they may spend this output and send it where they want (or we could insist that they send it to their key, but we might as well let them do what they like with it). Code examples You can find real code examples of validator scripts in every smart contract, for example: - Plutus transaction tutorial: this validator always succeeds. - Plutus Hello World: this validator succeeds if the datum is equal to 'Hello' converted to an integer. - Plutus Pioneers English Auction: on this line the validator makes sure that the new bid (datum) is higher than the previous one, until time is up. Cost model parameters The cost model for Plutus Core scripts has a number of parameters, which are part of the Cardano protocol parameters. Developers can adjust those parameters individually. See the following for more details:
https://docs.cardano.org/plutus/plutus-validator-scripts/
2021-09-16T22:19:32
CC-MAIN-2021-39
1631780053759.24
[]
docs.cardano.org
It is possible to associate atoms in reactants to products with atom-to-atom mapping. Unlike atom indexes, map labels are constant and cannot be changed when the molecule is altered. The various mapping tools that are available through Structure > Mapping can be used to map a drawn reaction either manually or automatically. {info} To see the map labels, the View > Advanced > Atom Mapping option must be turned on. It is possible to assign the same free map number to two atoms by selecting Insert Reaction Arrow from the Tools toolbar, then drawing the reaction arrow from the first atom to the second one. For more information about alternative manual mapping methods, see the following sections: If the first atom does not have an atom map number, but the second atom has one, both atoms have to be numbered with the smallest integer which is bigger than zero and does not yet belong to any atom on the canvas. Map numbers of the selected atoms can be removed by clicking Structure > Mapping > Unmap Atoms, or by typing m0 for the selected atoms. It is also possible to map a reaction automatically by using the automap function, which is available through Structure > Mapping. The following automatic mapping methods are available in MarvinSketch:
https://docs.chemaxon.com/display/docs/mapping-reactions
2021-09-16T21:33:38
CC-MAIN-2021-39
1631780053759.24
[]
docs.chemaxon.com
Date: Sun, 8 Apr 2012 04:51:36 +0100 From: Tradus <[email protected]> To: [email protected] Subject: Your Free shopping vouchers worth Rs. 500 are waiting to be claimed. Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help Hi, We have observed that you have not yet accepted your friend, Rakesh kumar's invitation to join Tradus.in. Tradus is an exclusive online marketplace, where you can choose and compare from a wide variety of products available. It brings you the latest books, electronic & fashion products at amazingly low prices. To start off your journey on Tradus, you have been gifted Shopping Vouchers worth Rs 500^* ^* Shopping Vouchers would be sent to you via an email after you have registered. You are subscribed to Tradus Non Buyers - 2011 as [email protected]. If you do not wish to receive any further communications, please click here. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=18393+0+archive/2012/freebsd-questions/20120415.freebsd-questions
2021-09-16T22:41:07
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
Date: Wed, 9 Feb 2011 17:14:32 -0800 From: StumbleUpon <[email protected]> To: [email protected] Subject: Reminder - aurinete wants you on StumbleUpon Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help [StumbleUpon] aurinete click here (c) StumbleUpon 2001 - 2011 Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=527222+0+archive/2011/freebsd-questions/20110213.freebsd-questions
2021-09-16T21:14:07
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
Date: Sun, 13 Oct 2013 23:01:02 [email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help. >> I have one directory of data that I want to keep. > > You should still make a backup, because "I want to keep" does > imply exactly that in regards of an OS installation. :-) Absolutely. With no backup, only one tiny thing can go wrong and the data is gone. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=99342+0+archive/2013/freebsd-questions/20131020.freebsd-questions
2021-09-16T22:13:18
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
Talking to the compiler (the :meta mechanism) In some circumstances, one might wish to provide hints or instructions that a given block of code has special properties: you might always want to inline it, or you might want to turn on special compiler optimization passes. Starting with version 0.4, Julia has a convention that these instructions can be placed inside a :meta expression, which is typically (but not necessarily) the first expression in the body of a function. :meta expressions are created with macros. As an example, consider the implementation of the @inline macro: macro inline(ex) esc(isa(ex, Expr) ? pushmeta!(ex, :inline) : ex) end Here, ex is expected to be an expression defining a function. A statement like this: @inline function myfunction(x) x*(x+3) end gets turned into an expression like this: quote function myfunction(x) Expr(:meta, :inline) x*(x+3) end end Base.pushmeta!(ex, :symbol, args...) appends :symbol to the end of the :meta expression, creating a new :meta expression if necessary. If args is specified, a nested expression containing :symbol and these arguments is appended instead, which can be used to specify additional information. To use the metadata, you have to parse these :meta expressions. If your implementation can be performed within Julia, Base.popmeta! is very handy: Base.popmeta!(body, :symbol) will scan a function body expression (one without the function signature) for the first :meta expression containing :symbol, extract any arguments, and return a tuple (found::Bool, args::Array{Any}). If the metadata did not have any arguments, or :symbol was not found, the args array will be empty. Not yet provided is a convenient infrastructure for parsing :meta expressions from C++.
https://docs.julialang.org/en/v1.0/devdocs/meta/
2021-09-16T21:31:44
CC-MAIN-2021-39
1631780053759.24
[]
docs.julialang.org
This article explains which validations are done throughout the session and what you, as a merchant, need to consider to ensure you adhere to these standards. This will be described call-by-call, following the image below. During create session, Klarna Payments (KP) validates that the properties of the session, as defined in the payload, are correct. This includes: Any updates of the session are validated in the same way as a new session; see the section above for what these validations include. Load and Authorize calls both trigger risk and fraud assessments on Klarna's side, meaning that Klarna does an internal check on whether credit can be given to this user or not. To achieve this, a stricter address check is done. In this validation the address must fulfill all standards of the country, and the address must exist in the address register of the specific country. If any of these standards are broken, the validation will fail. If any address field - in either billing or shipping - is incorrect, an error message will be returned specifying which field it is. All other validations of data quality throughout the Klarna Payments session are done using a static set of rules, defined by fixed definitions. However, for the place order and create customer token calls, the validation is instead done against the most recent payload sent in by you, as a merchant. The reason for this is to ensure that the session handling of Klarna and the merchant are aligned, and that the definition of the billing address stays aligned throughout the session, so that the right person is billed for the purchase. The merchant should always use the same payload in the place order/create customer token call as the most recent one sent in. The following use cases describe the most common ways the validation works. All cases include billing and shipping address, but the same logic applies to e.g. order lines. 1. User details added in create session - same in Place Order Outcome - Success Address details of the payload in place order match those in create session. Since no other details have been added in between, the validation will be OK. 2. User details added in Authorize - same in Place Order. Outcome - Success Address details of the payload in place order match those in authorize. Since no other details have been added in between, the validation will be OK. 3. No user details added by merchant - address module used. NOTE: In the example above, no billing details are added. However, email must be sent in. Outcome - Success No address details have been added in any payload up until place order. An empty payload in place order will thus match the most recent payload and the validation will be OK. 4. Details changed in Authorize. Initial payload used in place order Outcome - Failure Address details are initially added in create session. These are updated with a new payload in Authorize. The initial payload is used in the place order call, resulting in a failure because the most recent one is different. NOTE: This change could be major (a completely new address) or minor (one field changed). It would still fail the validation.
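To illustrate the guidance above - always sending the most recent payload again in the place order call - here is a small, hypothetical Python sketch. The helper functions, the recorded-payload mechanism and the field names are illustrative only and are not the Klarna Payments API definition:

# Hypothetical sketch: remember the most recent payload sent to Klarna
# Payments and reuse exactly that payload when placing the order.
last_payload = None

def update_session(session_id, payload):
    global last_payload
    # ... call KP's update-session endpoint with `payload` here ...
    last_payload = payload  # remember the most recent payload

def place_order(authorization_token):
    # Reuse the most recent payload so billing address, order lines, etc.
    # match what Klarna validated during authorize.
    assert last_payload is not None, "no session payload recorded yet"
    # ... call KP's place-order endpoint with `last_payload` here ...
    return {"status": "submitted", "payload_used": last_payload}

update_session("sess-123", {
    "billing_address": {"given_name": "Jane", "family_name": "Doe",
                        "email": "jane@example.com", "postal_code": "12345",
                        "city": "Stockholm", "country": "SE"},
})
order = place_order("auth-token-abc")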
https://docs.klarna.com/klarna-payments/in-depth-knowledge/validations-in-kp/
2021-09-16T20:51:12
CC-MAIN-2021-39
1631780053759.24
[]
docs.klarna.com
Following are the release details:

In this release, LFX Insights delivers support for the new Trends dashboards, providing comprehensive analytics of project performance, in the form of metrics, for all projects and project groups. These data visualizations help project teams better understand project performance and monitor the health of the project.

Global Trends: Shows aggregated performance data of all projects onboarded to Insights.
Project Group Trends: Shows aggregated performance data of all sub-projects under the project group or foundation.
Individual Project Trends: Shows aggregated performance data of the individual project.

The LFX Insights Jun 2021 Release delivers support for a new data source - Social Media Metrics. In this release, Insights supports only Twitter to track and visualize Twitter data. In upcoming releases, Insights will provide support for Facebook and LinkedIn. To know how the social media data are calculated, refer to Insights FAQs. For details about the social media metrics dashboards, see Social Media Metrics.

The LFX Insights May 2021 Release delivers support for new data sources and metrics - GitHub Reviews and Changeset Reviews metrics as source control systems, CircleCI as a build system, and Google Groups as an email system to visualize project-related communication activities. The Gerrit Changeset Approval and GitHub PR Efficiency dashboards are enhanced for better clarity of project data.

The following new features are added in the May 2021 release of LFX Insights. For more information on the added features, see the sections below.

A new GitHub Reviews Dashboard and an improved GitHub Efficiency Dashboard are added to provide more clarity around pull request merge times. The GitHub Efficiency Dashboard is redesigned to help project maintainers set goals around PR merge times. For details about the new visualizations of GitHub PRs, see Reviews and Efficiency.

The Google Groups addition expands Insights email coverage. It provides richer context around what the community is talking about, and helps project community managers better engage and acknowledge their community members. To know more about how Google Groups data is onboarded, see How does Insights collect Google Groups Mailing List Data? For details about visualizations, see Google Groups.

CircleCI Dashboards: LFX Insights supports a new build system - CircleCI - providing various build-related metrics right on the Insights dashboard for your project, helping you monitor your project's build pipeline and improve workflow efficiency. For details about visualizations, see CircleCI.

Gerrit Changesets Dashboards are redesigned to help project maintainers analyze and set goals around changeset approvals and merge times. For details about the added/enhanced visualizations, see Gerrit Changeset.
https://docs.linuxfoundation.org/lfx/insights/releases
2021-09-16T22:23:24
CC-MAIN-2021-39
1631780053759.24
[]
docs.linuxfoundation.org
Major Problem Report Template

ITIL Major Problem Report Template

8 Steps to Resolve an ITIL Problem

Whenever the service desk encounters a major problem within its systems, there are eight steps that it needs to perform in order to resolve the problem.

Detection
The first step is to identify the root cause of the problem and not just the lone incident, which is only a symptom of the problem. This requires identifying incidents throughout the organization, either proactively or by receiving multiple service requests.

Recording
Since a service desk usually has more than one employee, it is important to share all the incidents. This will help in identifying the root cause of the problem and will serve as a log for lessons learned and future improvements.

Categorizing
Since each user fills in a service request a bit differently, it is important to categorize them as they come in. This will help in modeling the incidents, which will allow the service desk to collate as many incidents as possible into one major problem and solve it. It is possible to have more than one category. For example, the main category could be "Communication", whilst the second one is "Desk Phone".

Prioritizing
Once all the requests are recorded and categorized, it is important to prioritize them in order of urgency and importance. Ideally, the more important ones will be solved first, although most organizations solve the urgent ones first.

Diagnosing
Now comes the hardest part: figuring out what the problem is and deciding on how to solve it. This step requires the most time, and it shouldn't be rushed. Once the problem has been diagnosed and the root problem is detected, the service desk needs to suggest the best solution for rooting out the problem.

Workaround
If the suggestion in the previous step was approved, the team now needs to implement the solution across the organization. This step requires the whole team to understand the problem and the solution, and to implement it in the same manner.

Documentation
After the problem has been solved, documenting the entire process (from step #1 to #6) will assist in resolving future issues. This should be done in a set template, which will help in familiarizing the team with the process.

Lessons Learned
The final step is more often than not skipped, since most service desk employees don't see an immediate benefit in doing so. This task usually falls on the team leader of the service desk and is very important for the organization to grow and learn from its mistakes.

How it fits into the ITIL methodology
This process is crucial for the long-term success of the organization and is the core foundation of a robust service desk entity. The problem-resolving process collates many different incident reports, and is internal as well as external facing.
https://www.itil-docs.com/blogs/problem-management/major-problem-report-template
2021-09-16T21:53:21
CC-MAIN-2021-39
1631780053759.24
[array(['https://cdn.shopify.com/s/files/1/0576/7063/1573/files/ITIL-Major-Problem-Report-Template_1024x1024.jpg?v=1625106577', 'ITIL Major Problem Report Template'], dtype=object) ]
www.itil-docs.com
How to configure auto-prioritization of Tickets Alloy Navigator Express’ default workflow provides a set of rules (Ticket Prioritization Business Policy) for prioritizing Tickets. These rules apply the necessary calculation every time a Ticket is created or modified. You can use these configuration as is, or customize it as needed. Alloy Navigator Express uses the default Ticket Prioritization policy to populate the Due Date of the Ticket as displayed below. This default policy does not have any conditions (i.e. have “any” in every row under When ticket) and applies to all Tickets. ANX calculates the Due Date when the Ticket is created and re-calculates the Due Date when the Ticket Priority or Submit Date is changed. Result: Ticket’s Due Date is populated in accordance with the its priority. However, some categories, departments, or VIPs may require higher priority than others. You can create several policies with different conditions and different rules for calculating the Ticket’s Due Date. Each policy will be triggered when all its conditions are true. In addition, you can expand the default Ticket Priority lookup list with additional items and specify prioritization rules for your custom Priority values. Note that the Ticket Prioritization dialog will show your custom Priority values at the bottom. Custom rules for calculating the Due Date for Tickets:
https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/howtos/topics/how-to-configure-auto-prioritization-anx.htm
2021-09-16T22:17:12
CC-MAIN-2021-39
1631780053759.24
[array(['../Resources/Images/ticket-prioritization-x_544x478.png', None], dtype=object) array(['../Resources/Images/ticket-prioritization-custom-x_544x478.png', None], dtype=object) ]
docs.alloysoftware.com
ListFeatureGroups List FeatureGroups based on given filter and order. Request Syntax { "CreationTimeAfter": number, "CreationTimeBefore": number, "FeatureGroupStatusEquals": " string", "MaxResults": number, "NameContains": " string", "NextToken": " string", "OfflineStoreStatusEquals": " string", "SortBy": " string", "SortOrder": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - CreationTimeAfter Use this parameter to search for FeatureGroupss created after a specific date and time. Type: Timestamp Required: No - CreationTimeBefore Use this parameter to search for FeatureGroupss created before a specific date and time. Type: Timestamp Required: No - FeatureGroupStatusEquals A FeatureGroupstatus. Filters by FeatureGroupstatus. Type: String Valid Values: Creating | Created | CreateFailed | Deleting | DeleteFailed Required: No - MaxResults The maximum number of results returned by ListFeatureGroups. Type: Integer Valid Range: Minimum value of 1. Maximum value of 100. Required: No - NameContains A string that partially matches one or more FeatureGroups names. Filters FeatureGroups by name. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Required: No - NextToken A token to resume pagination of ListFeatureGroupsresults. Type: String Length Constraints: Maximum length of 8192. Pattern: .* Required: No - OfflineStoreStatusEquals An OfflineStorestatus. Filters by OfflineStorestatus. Type: String Valid Values: Active | Blocked | Disabled Required: No - SortBy The value on which the feature group list is sorted. Type: String Valid Values: Name | FeatureGroupStatus | OfflineStoreStatus | CreationTime Required: No - SortOrder The order in which feature groups are listed. Type: String Valid Values: Ascending | Descending Required: No Response Syntax { "FeatureGroupSummaries": [ { "CreationTime": number, "FeatureGroupArn": "string", "FeatureGroupName": "string", "FeatureGroupStatus": "string", "OfflineStoreStatus": { "BlockedReason": "string", "Status": "string" } } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - FeatureGroupSummaries A summary of feature groups. Type: Array of FeatureGroupSummary objects - NextToken A token to resume pagination of ListFeatureGroupsresults. Type: String Length Constraints: Maximum length of 8192. Pattern: .* Errors For information about the errors that are common to all actions, see Common Errors. See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_ListFeatureGroups.html
2021-09-16T21:31:46
CC-MAIN-2021-39
1631780053759.24
[]
docs.amazonaws.cn
Jenkins¶

You can use Jenkins CI both for:

- Building and testing your project, which manages dependencies with Conan, and probably a conanfile.txt file
- Building and testing conan binary packages for a given conan package recipe (with a conanfile.py) and uploading to a conan remote (Artifactory or conan_server)

There is no need for any special setup for it, just install conan and your build tools in the Jenkins machine and call the needed conan commands.

Artifactory and Jenkins integration¶

If you are using Artifactory you can take advantage of the Jenkins Artifactory Plugin. Check here how to install the plugin and here you can check the full documentation about the DSL.

The Artifactory Jenkins plugin provides a powerful DSL language to call conan, connect with your Artifactory instance, upload and download your packages from Artifactory and manage your build information.

Example: Test your project getting requirements from Artifactory¶

This is a template to use Jenkins with the Artifactory plugin and Conan to retrieve your package from the Artifactory server and publish the build information about the downloaded packages to Artifactory. In this script we assume that we already have all our dependencies in the Artifactory server, and we are building our project that uses Boost and Poco libraries.

Create a new Jenkins Pipeline task using this script:

//Adjust your artifactory instance name/repository and your source code repository
def artifactory_name = "artifactory"
def artifactory_repo = "conan-local"
def repo_url = ''
def repo_branch = 'master'

node {
    def server = Artifactory.server artifactory_name
    def client = Artifactory.newConanClient()

    stage("Get project"){
        git branch: repo_branch, url: repo_url
    }

    stage("Get dependencies and publish build info"){
        sh "mkdir -p build"
        dir ('build') {
            def b = client.run(command: "install ..")
            server.publishBuildInfo b
        }
    }

    stage("Build/Test project"){
        dir ('build') {
            sh "cmake ../ && cmake --build ."
        }
    }
}

Example: Build a conan package and upload it to Artifactory¶

In this example we will call the conan test package command to create a binary package and then upload it to Artifactory. We also upload the build information:

def artifactory_name = "artifactory"
def artifactory_repo = "conan-local"
def repo_url = ''
def repo_branch = "release/1.2.11"

node {
    def server = Artifactory.server artifactory_name
    def client = Artifactory.newConanClient()
    def serverName = client.remote.add server: server, repo: artifactory_repo

    stage("Get recipe"){
        git branch: repo_branch, url: repo_url
    }

    stage("Test recipe"){
        client.run(command: "create")
    }

    stage("Upload packages"){
        String command = "upload * --all -r ${serverName} --confirm"
        def b = client.run(command: command)
        server.publishBuildInfo b
    }
}
https://docs.conan.io/en/1.3/integrations/jenkins.html
2021-09-16T21:46:11
CC-MAIN-2021-39
1631780053759.24
[array(['../_images/jenkins_stages.png', 'jenkins_stages'], dtype=object) array(['../_images/jenkins_stages_creator.png', 'jenkins_stages_creator'], dtype=object) ]
docs.conan.io
To have additional features like the full preview, you need to install some external components. LibreOffice - Please download and install LibreOffice in your system: - Setup the application following the instructions at - Create a symbolic link from the command line (Terminal server) using the instructions below $ cd /Applications/LibreOffice.app/Contents/MacOS $ ln -s soffice soffice.bin XCode Please download and install XCode in your system: After the installation of XCode you also need to install the optional package 'Command Line Tools', so proceed as follows: - Launch XCode - Open the menu XCode->Preferences->Downloads - Select the 'Components' tab - Install 'Command Line Tools' item Once you have your command line tools installed you can quit XCode. Homebrew Please download and install Homebrew in your system: You can install Homebrew by executing this command: $ ruby -e "$(curl -fsSL)" ImageMagick You have to install and configure ImageMagick (rel 6.6 or greater). LogicalDOC uses ImageMagick to manipulate images for previewing. To install it on MAC, execute this command: $ brew install imagemagick GhostScript LogicalDOC needs to print documents to a virtual device sometimes when performing barcode recognition. In general in GhostScript is a package installed by default. To install it on MAC, execute this command: $ brew install ghostscript Pdftohtml This is a converter from Pdf to HTML format, LogicalDOC makes use of this utility to prepare the documents for annotations. Without this you will not be able to insert annotations inside the content of the document. To install it on MAC, execute this command: $ brew install pdftohtml. However, to install it on MAC, execute this command: $ brew install tesseract OpenSSL OpenSSL is the most known Open Source SSL implementation. This package is required to sign documents server-side. To install it on MAC, execute this command: $ brew install openssl Antivirus ClamAV LogicalDOC is integrated with the ClamAV antivirus to check if a submitted document is infected, the the best way is to install ClamXav, a free graphical front end that includes the clamav software:. Once installed execute it and it will guide to in the installation of the ClamAV antivirus, at the end check to have the command clamscan installed in your system, probably in /usr/local/clamXav/bin/clamscan AcmeCadConverter This utility is used to manage AutoCAD preview and conversion. The LogicalDOC distribution cannot include a licensed version of this utility so the preview will contain a watermark, to remove this watermark you will have to purchase a license from here: Once you have a valid license key please do the following in LogicalDOC: - Stop LogicalDOC - Open the text file conf/context.properties - Locate the property acmecad.key and put here your license key - Save and start LogicalDOC As this is a Windows application, you need to install wine in your MAC to use it so follow this procedure: 1. Go to the XQuartz homepage, download XQuartz, and install it. 2. Install wine by executing these commands: $ brew install wine $ sudo ln -s /usr/local/Cellar/wine/1.6.2/bin/wine /usr/local/bin/wine The path /usr/local/Cellar/wine/1.6.2/bin/wine is where your wine was installed, it may be different in your system To check if all is ok, execute this command: $ wine /LogicalDOC/acmecad/AcmeCADConverter.exe
https://docs.logicaldoc.com/en/installation/install-on-macos/install-third-party-software-macos
2021-09-16T22:34:22
CC-MAIN-2021-39
1631780053759.24
[array(['/images/stories/en/command_line_tools.jpg', None], dtype=object)]
docs.logicaldoc.com
Each LogicalDOC installation has it's own license that contains the list of available features and various operating parameters like the maximum number of users and documents. Your license is identified by a unique User Number also known as Activation Code or License Number. In order to check the details of your license and take actions on it, you can enter LogicalDOC with the admin account and then go to Administration > System > License If you cannot enter LogicalDOC,. Registration data Each time you activate the license, your registration data will be sent to identify you, so it is important to make sure they are correct and current. You control your registration data by pressing on the button Show registration data. License Activation The “License Activation” is the process of successfully installing in your device a valid license file for your copy of the LogicalDOC application. When you install LogicalDOC, you are required to input your User Number and in general the activation process is carried on by the installer itself. After that, in the future you may be required to activate again for a variety of reasons like the followings: - the installer was unable to activate your license during the installation - you have to install a different license(eg you installed a trial and now you want to activate the regular license) - you have to update the license to unlock aspects you purchased Go to the Activation Procedure Limited number of Activations Each User Number can be activated up to 3 times, further activations can be granted but you have to ask to the support service explaining your reasons. License Unbind The "License Unbind" is the procedure to detach the license from the current hardware so it can be activated in another computer. Typically you unbind a license when you want to migrate your installation to a new server. Go to the Unbind Procedure
https://docs.logicaldoc.com/en/license-management
2021-09-16T22:42:51
CC-MAIN-2021-39
1631780053759.24
[array(['/images/stories/en/license/registration.gif', None], dtype=object)]
docs.logicaldoc.com
Windows Intune Subscriptions Available Now to Manage PCs in the Cloud Windows Intune, Microsoft’s new cloud-based PC management and security solution, launched yesterday at the Microsoft Management Summit (MMS) in Las Vegas. Intune is for organizations of all sizes, allowing IT to manage PCs over the Internet at a low monthly cost. You can try it before you buy it with the free 30-day trial. And the best news is that Intune subscribers will also receive upgrade rights to Windows 7 Enterprise and future versions of Windows, helping you lower support costs by standardizing on a single version of Windows while giving your users the best Windows experience. You’ll also have the option of adding Microsoft Desktop Optimization Pack (MDOP) tools, which will include two new updates announced today: Microsoft BitLocker Administration and Monitoring, and Diagnostics and Recovery Toolset. Thanks for reading, Mitch
https://docs.microsoft.com/en-us/archive/blogs/technet_flash_feed/windows-intune-subscriptions-available-now-to-manage-pcs-in-the-cloud
2021-09-16T20:52:00
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
The CompositeRepository class¶

class ferenda.CompositeRepository(config=None, **kwargs)[source]¶

Acts as a proxy for a list of sub-repositories.

Calls the download() method for each of the included subrepos. Parse calls each subrepo's parse() method in order until one succeeds, unless config.failfast is True. In that case, any error from the first subrepo is re-raised.

documentstore_class¶
alias of CompositeStore

config¶
The LayeredConfig object that contains the current configuration for this docrepo instance. You can read or write individual properties of this object, or replace it with a new LayeredConfig object entirely.

download(basefile)¶

parse(basefile)¶
https://ferenda.readthedocs.io/en/latest/api/compositerepository.html
2021-09-16T21:58:58
CC-MAIN-2021-39
1631780053759.24
[]
ferenda.readthedocs.io
Findeo comes with some plugins bundled. They are installed during Setup Wizard process, and here’s an explaination what is what: - Realteo – it’s our real estate listings plugin that helps you create, manage and categorise your properties. It comes with front-end Submit Property page, Bookmarks, Paid Properties Option and many more. - WPBakery Page Builder – The Visual Composer is used to create page layouts with drag and drop interface. This plugin is included in theme on developers/extended license, which means I bought it for higher price to be able to include it in my theme. You as end user can use it only with this theme, and do not get the license key that is used to auto-updates. Updates of Visual Composer come with the updates of the theme (we always include new version in theme few days after update release). You can of course buy your own copy of VC if you want it faster or need support from plugin authors. - Revolution Slider – is an innovative, responsive WordPress Slider Plugin that displays your content the beautiful way. It’s included on the same rules as WPBakery Page Builder - Findeo Shortcodes – all visual elements from the theme are provided by this plugin. It’s required for the theme. - Findeo VC Bridge – plugin that adds Findeo shortcodes as elements in Visual Composer. It’s required for the theme. - Contact Form 7 and Contact Form 7 – Dynamic Text Extension – The Contact Form 7 plugin is used to display contact forms in the theme, and also used as a contact form to agents on the listing pages. It requires some additional configuration, that is described here
https://www.docs.purethemes.net/findeo/knowledge-base/included-plugins/
2021-09-16T21:00:30
CC-MAIN-2021-39
1631780053759.24
[]
www.docs.purethemes.net
Using Testnet¶

Last updated for testnet

DREP Chain has officially launched 4 Testnets:¶
- DREP Testnet 1.0 Darwin core performance summary:
- DREP Testnet 2.0 Riemann core performance summary:
- DREP Testnet 3.0 Euler core performance summary:
- DREP Testnet 4.0 Planck core performance summary:

Why Use Testnet?¶

The testnet is a wonderful place where you can experiment with the Drep applications without worrying that a mistake will cost you real money. It is actually recommended that people use the testnet to learn the basics of the Drep software and any new features. Drep is currently on its fourth testnet. Testnets are periodically reset to help keep a manageable blockchain file size.

How to Run a Testnet Node¶

Running a testnet node is incredibly easy. Your application of choice will need to download the testnet blockchain, and you will need to create a new account for testnet use. Your mainnet blockchain and account will remain untouched. Switching between the two is incredibly easy.

Command-Line Suite¶

To launch drep on testnet, first get the testnet configuration file and put it into the system default directory (configuration files are in *.zip).

- Linux
Download the executable from here, select drep-linux-amd64-v1.0.0.zip and decompress it. Place it under the directory /usr/local/bin. Place the configuration file config.json (in drep-linux-amd64-v1.0.0/testnet_config/) under the directory ~/.drep. Start the node using the command: drep console

- Windows
Download the executable from here, select drep-win-amd64-v1.0.0.zip and decompress it. Place the configuration file (drep-win-amd64-v1.0.0/configFile/test-net) under the directory %LOCALAPPDATA%\Drep. Open the command-line interface (cmd.exe) and navigate into the directory at which the executable is placed. Start the node using the command: drep.exe console

- mac
Download the executable from here, select drep-darwin-amd64-v1.0.0.zip and decompress it. Place the configuration file config.json (in drep-darwin-amd64-v1.0.0/configFile/test-net) under the directory ~/Library/Application Support/Drep. Place drep under the directory /usr/bin. Start the node using the command: drep console

Acquiring Testnet Coins¶

You can acquire coins through the Drep Testnet Faucet. Please return any coins to the address listed at the bottom of that page when you're done playing with the testnet.
http://docs.drep.org/advanced/using-testnet/
2021-09-16T21:42:33
CC-MAIN-2021-39
1631780053759.24
[]
docs.drep.org
How to schedule uploading audit results via FTP on macOS

Introduced in 8.6

Network Inventory offers the ability to deliver audit snapshots from remote sites over FTP, FTPS, or SFTP. However, the feature is supported only for Windows computers. This article describes how to set up the audit of macOS computers so that it regularly runs and delivers audit snapshots via FTP. Here are the steps you should take:

Prepare an audit package

First, launch Network Inventory, set up an audit source and prepare a deployable audit package for macOS audit.

If you already have an FTP Audit Source in Network Inventory, you can use your existing source that already receives audit snapshots from remote Windows computers or sites. That source can also receive audit results from macOS computers. If you don't have such a source, or you want different settings for receiving macOS snapshots, create a new audit source as follows:

Open the properties of a site and click New > FTP under Audit Sources.

On the General tab, type in a source name.
TIP: Keep the Audit Profile setting at the default value, because macOS audit does not apply any audit profile settings anyway.

Switch to the FTP tab and specify FTP settings under Incoming Server for audit snapshots. Provide the credentials for an account that has read access to the specified FTP resource, and test your connection.
TIP: In theory, you don't need to fill out the Outgoing Server for the Audit Agent section, because you will tell the audit agent which outgoing FTP server to use later, in a bash script. But since the Outgoing Server for the Audit Agent settings are mandatory, you must still specify them, and they will automatically appear under Incoming Server for audit snapshots as long as the Use same settings as my Outgoing Server check box is selected.

To enable the AutomationServer to check the FTP server for new snapshots automatically, keep the Check for new snapshots every check box selected and specify the frequency for checking the FTP server. That frequency is also called the upload interval.

Click OK to close the dialog box. Your audit source is ready.
INFO: For detailed instructions on creating FTP audit sources, see Adding FTP Audit Sources.

In Network Inventory, create a deployable audit package. The Remote Audit with the FTP delivery method is not intended to audit macOS computers. That is why the package that could be created from the FTP audit source would not contain the audit agent for macOS, whose code name is ina_mac. To create an audit package for macOS, create an additional audit source.

In your site's window, click New > Portable under Audit Sources. Type in any source name and click Apply to apply your changes. Click Create Package and create an audit package in any folder on your local computer.
INFO: For detailed instructions on creating portable audit sources, see Adding Portable Audit Sources.

Get to your destination folder and locate these two items:
- AuditData - the folder in which audit snapshots are stored before loading or sending them to the database,
- ina_mac - the audit agent for macOS (also called the Mac Inventory Analyzer).

Copy the AuditData folder and the ina_mac file to one of your macOS machines, for example here: /usr/local/bin/. On the macOS machine, in the Terminal window, run chmod +x ina_mac. This will make ina_mac executable.

Create a bash script

Second, create a bash script that launches the Alloy audit agent and uploads the audit results via FTP.
It could require a bit of programming skill should you decide to modify it in any way.

Create a new file named run-ina-mac. Open the run-ina-mac file you created in a text editor and copy and paste the script code, where:
- /usr/local/bin/ - the path to the audit package,
- {user} and {password} - the credentials of a user having Write access to your FTP server,
- {ip-address} - the IP address or name of your FTP server.
IMPORTANT: The placeholders for the FTP credentials and the FTP server address must be replaced with your actual values.

Save the file and then make it executable by running chmod +x run-ina-mac in the Terminal window.

Schedule the audit

Finally, schedule the audit using the launchd daemon, an advanced system process manager.
INFO: For additional information about using launchd, see Creating MacOS startup jobs with launchd.

You can use a plist file as a template, where:
- /usr/local/bin/ - the path to the run-ina-mac file,
- <key>Hour</key>, <integer>11</integer>, <key>Minute</key>, <integer>30</integer> - the schedule to start the audit every day at 11:30 AM.
NOTE: You may need to replace the sample values with your actual ones.

Receive the audit results

Once you have set up the FTP audit source and automated the audit of macOS machines using the launchd daemon, new audit snapshots will be delivered to the FTP server and received automatically at the configured upload interval.

If you have disabled auto-upload in the audit source, or when you do not want to wait until the current upload interval ends, you can check the audit source manually and immediately receive all pending audit data as follows. In Network Inventory, select Audit > Receive Snapshots from the main menu. The Receive Snapshots dialog box opens. Select the source to check and click OK. As soon as the AutomationServer instance finishes the new "Check" task, new snapshots (if any were available at the moment) will appear in Network Inventory.
INFO: For details on receiving snapshots, see Network Inventory User's Guide: Checking Audit Sources for New Snapshots.
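The run-ina-mac script itself is not reproduced in this article. As a rough illustration of the run-then-upload logic it performs, here is a minimal Python sketch; the paths, account name, password, and server address are placeholders, and the article's actual approach uses a bash script with your real values instead.

# Illustrative sketch only: run the macOS audit agent, then upload the
# resulting snapshots to the FTP server used by the FTP audit source.
# All paths and credentials below are placeholders.
import ftplib
import pathlib
import subprocess

AUDIT_DIR = pathlib.Path("/usr/local/bin")   # where ina_mac and AuditData live
FTP_HOST = "ftp.example.com"                  # {ip-address} placeholder
FTP_USER = "user"                             # {user} placeholder
FTP_PASSWORD = "password"                     # {password} placeholder

# 1. Run the audit agent so it writes a snapshot into AuditData.
subprocess.run([str(AUDIT_DIR / "ina_mac")], check=True)

# 2. Upload every snapshot file from AuditData to the FTP server.
with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASSWORD) as ftp:
    for snapshot in (AUDIT_DIR / "AuditData").iterdir():
        if snapshot.is_file():
            with open(snapshot, "rb") as fh:
                ftp.storbinary(f"STOR {snapshot.name}", fh)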
https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/howtos/topics/schedule-ftp-audit-macos.htm
2021-09-16T20:52:53
CC-MAIN-2021-39
1631780053759.24
[]
docs.alloysoftware.com
What's New in Version 8.2.3 November 2018 Version 8.2.3 of Alloy Navigator Express is a maintenance release. It updates Alloy Audit Tools and resolves some issues reported in previous versions. Audit Tools Alloy Audit Tools, used to audit computers and display audit results, have been updated to version 6.1.4. Resolved issues Administrative Settings When the Registration Confirmation notification for self-registered SSP customers contains an invalid placeholder, the Administrative Settings console now displays an error message that clearly identifies the issue. Pending accounts cannot be approved until the administrator resolves the issue. Entering the size directly into the font size box no longer causes an access violation in E-mail Notifications. The text on the Configuration Settings dialog box is no longer getting cut off when importing workflow packs for object classes with long names, such as Knowledge Base Articles. Import Wizard The Import Wizard now works correctly when launched from the Main Console (Tools > Administrative > Import Wizard). The Mail Connector no longer adds blank lines to the beginning of the Description field when creating Tickets from HTML email messages. The Mail Connector now correctly processes email messages in right-to-left languages such as Arabic. Main Console The All IT Assets data grid now displays correct serial numbers. This fix also affects displaying asset names, organizations, locations, and owners. Web Portal for technicians Now Actions in the Web Portal work correctly regardless of the choice of the default language for the SQL Server login used as the Database Account. Previously, attempts to perform an Action failed if the default language was set to anything other than English. Now Web Portal users can manage attachments (for example, delete attached files) right from the Attachments section on Action Forms. Adding or removing columns no longer affects view's grouping. In earlier versions, views could revert to a grouped state after ungrouping and then customizing their columns. The Advanced Filter for customizing views in the Web Portal no longer throws the "Object reference not set to an instance of an object" error when attempting to navigate to the desired field. Mobile Portal Resolved issues with signing in under Windows accounts. Self Service Portal The Self Service Portal now correctly applies filters added to the Related CI field on Action Forms for submitting Service Requests. Reports Scheduled Reports now apply the user's locale when calculating Date Range report parameters. This resolves the issue with unexpected results for "This Week" reports in countries where Monday is not the first day of the week. The issue with custom legacy reports not working in the previous version has been resolved. The layout of the Customer Satisfaction Rating by Technician reports has been corrected. Now all columns are properly aligned. Processing errors In earlier versions, the user could receive an ambiguous error message "Logon Failure: the user has not been granted the requested logon type at this computer." This happened when the Database Account did not have appropriate rights for the local computer. The error message has been improved to provide specific instructions on resolving the issue.
https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/releasenotes/823.htm
2021-09-16T20:58:58
CC-MAIN-2021-39
1631780053759.24
[]
docs.alloysoftware.com
If you’re interested in sending back in stock alerts on Facebook Messenger, you will now need to now provide us permission from your Facebook page. To do this, please apply for the ‘One-Time Notification’ permission within the ‘Advanced Messaging’ section of your Page Settings. You will then need to agree to the beta terms to get permission from Facebook. Can’t locate the permissions setting or need help setting up notifications on Facebook Messenger? Please reach out to us for support.
https://docs.appikon.com/en/articles/4717851-how-to-give-permission-to-back-in-stock-app-to-send-restock-alerts-on-facebook-messenger
2021-09-16T21:38:37
CC-MAIN-2021-39
1631780053759.24
[array(['https://downloads.intercomcdn.com/i/o/211637143/1baef1715e264b2e79b5345f/one+time+notifications+fb+.png', None], dtype=object) ]
docs.appikon.com
The user must provide a username and password in order to log in: After a successful login, the user is landing on the Dashboard page. The menu bar has four items: Registration, Upload, Staging and Search. A Quick search is also available on the top line, next to the logged in username. {primary} Session expiration Warning dialog appears 5 minutes before the session would expire. After clicking "Ok" in the warning dialog, the session will be refreshed. The next warning dialog should appear again in 25 minutes. If no action is taken the dialog stays on the screen for 5 minutes, than the system will reset the application state by navigating to the login screen.
https://docs.chemaxon.com/display/lts-gallium/login
2021-09-16T21:34:22
CC-MAIN-2021-39
1631780053759.24
[]
docs.chemaxon.com
Understanding co-located and external clusters The Kafka clusters that Streams Replication Manager (SRM) connects to can be categorized into two groups. They can be co-located with or external to SRM. The category dictates how you configure the SRM service and the srm-control tool. SRM connects to and replicates data between Kafka clusters, which consist of one or more Kafka brokers, deployed on clusters. These Kafka clusters that SRM connects to can be categorized into two groups. They can either be co-located with or external to SRM. Which category a specific cluster falls into is decided based on the relation between that cluster and the SRM service. A co-located Kafka cluster is the Kafka cluster that is running in the same cluster as the SRM service. Any other Kafka cluster that is remote to SRM either logically or geographically is considered external. For example, consider the following deployment: This deployment has two clusters, and both clusters have a Kafka cluster. However, only Cluster East has SRM deployed on it. From the perspective of SRM East, Kafka Cluster East is co-located, while Kafka Cluster West is external. In a more advanced deployment with multiple SRM services, a single Kafka cluster will fall into both categories. From the perspective of a specific SRM service a cluster will be co-located, while for others it will be external. For example, consider the following deployment: In this example, both clusters have a Kafka cluster as well as SRM. From the perspective of SRM East, Kafka Cluster East is co-located, Kafka Cluster West is external. From the perspective of SRM West, Kafka Cluster West is co-located, Kafka Cluster East is external. It is also possible to not have co-located Kafka clusters. For example, consider the following deployment: In this example, the clusters that have Kafka deployed on them do not have SRM. Instead, data is replicated by an SRM instance deployed on a separate cluster. From the perspective of SRM South, both Kafka Cluster East and West are external, there is no co-located cluster. In a scenario like this, configuration tasks related to the co-located cluster do not need to be completed. Being able to correctly identify what category a Kafka cluster falls into is important as the category dictates how you configure each SRM service and the srm-control tool. In general, a co-located Kafka cluster requires less configuration than external Kafka clusters. This is because Cloudera Manager is able to automatically pass certain configuration properties about the co-located Kafka cluster to SRM. External Kafka clusters on the other hand must be fully configured and specified manually. For more information on how to configure and set up SRM, review any of the configuration examples available in Using Streams Replication Manager in CDP Public Cloud overview or Configuration examples.
https://docs.cloudera.com/runtime/7.2.11/srm-overview/topics/srm-understanding-colocated-external-clusters.html
2021-09-16T22:32:05
CC-MAIN-2021-39
1631780053759.24
[]
docs.cloudera.com
1 Introduction In this how-to you will learn how to set up a GitHub repository. The repository will contain your development content and can be shared with others, in order to contribute to the application. 2 Preparation - Make sure you have a GitHub account - Make sure you have a Mendix account - Read the GitHub - Create a repo guide 3 Creating Your Repo First of all, your repo needs a name. We advise you use the same name that it is/will be published under in the Marketplace, and then using UpperCamelCase to replace spaces. For example: “My first app” would be “Mendix/MyFirstApp” on GitHub. Same as with the description. It should say what the App does, so it would be easiest to keep this in line with the App on the Marketplace. (Add the Mendix .gitignore to make sure you keep your repo clean.) 4 Folder Structure When making a new widget, we suggest you use the App Store Widget Boilerplate, available on GitHub. It’s a set-up with everything you need to get started developing a Mendix widget. 5 Releases If you want to make a new release for the Marketplace, we advise you start off with a new tag on the appropriate commit on the master or release branch. From these tags, you can create a new Release in GitHub. In this release you can set your release notes (which you can then use for the Marketplace release as well) and give it a more official name. If you add the .mpk as a binary file to the release tag (see image blow) the Marketplace will automatically sync the .mpk to your new draft. We suggest also linking this to the upcoming Marketplace release by mentioning that release number in the description.
https://docs.mendix.com/howto7/collaboration-requirements-management/starting-your-own-repository
2021-09-16T21:35:26
CC-MAIN-2021-39
1631780053759.24
[array(['attachments/18448643/18580533.png', None], dtype=object)]
docs.mendix.com
Regional and Language URLs First thing you have to know, this module does not add any visible enhancements at your storefront. Its goal is to improve targeting of site content to a specific country. And second, you get any benefits from this module only when your store is multilingual and/or multi-regional. Regional URLs module helps you to tell search engines (e.g. Google) that you have multiple versions of a page for different languages or regions. And search engine point users to the most appropriate version of your page by language or region. You can also add locale as subdirectory into store base url with this modules. We generate correct and valid URLs for every store view your Magento has. There are no redirects with this URLs and no 404 pages. Search bots gonna love them. Module has simple and pretty self-explanatory settings. But in case you need some info you can find it here - module configuration. Extension works perfectly with: - Home page and CMS Pages; - Category Pages (with and without layered navigation filters); - Product Pages. Regional URLs modules is a part of SEO Suite toolkit. And we do not provide it as independent Magento 2 extension.
https://docs.swissuplabs.com/m2/extensions/hreflang/
2021-09-16T22:56:20
CC-MAIN-2021-39
1631780053759.24
[array(['/images/m2/hreflang/example.png', 'rel="alternate" hreflang'], dtype=object) ]
docs.swissuplabs.com
db_condition($conjunction) Returns a new DatabaseCondition, set to the specified conjunction. Internal API function call. The db_and(), db_or(), and db_xor() functions are preferred. string $conjunction: The conjunction to use for query conditions (AND, OR or XOR). \Drupal\Core\Database\Query\Condition A new Condition object, set to the specified conjunction. as of Drupal 8.0.x, will be removed in Drupal 9.0.0. Create a \Drupal\Core\Database\Query\Condition object, specifying the desired conjunction: new Condition($conjunctin); \Drupal\Core\Database\Query\Condition function db_condition($conjunction) { return new Condition($conjunction); } © 2001–2016 by the original authors Licensed under the GNU General Public License, version 2 and later. Drupal is a registered trademark of Dries Buytaert.
https://docs.w3cub.com/drupal~8/core-includes-database.inc/function/db_condition/8.1.x
2021-09-16T22:12:12
CC-MAIN-2021-39
1631780053759.24
[]
docs.w3cub.com
The following JSON is an example of the current defaults:

{
    "amqp": "amqp://localhost",
    "apiServerAddress": "172.31.128.1",
    "apiServerPort": 9080,
    "dhcpPollerActive": false,
    "dhcpGateway": "172.31.128.1",
    "dhcpProxyBindAddress": "172.31.128.1",
    "dhcpProxyBindPort": 4011,
    "dhcpSubnetMask": "255.255.240.0",
    "gatewayaddr": "172.31.128.1",
    "trustedProxy": false,
    "httpEndpoints": [
        {
            "address": "0.0.0.0",
            "port": 8080,
            "httpsEnabled": false,
            "proxiesEnabled": true,
            "authEnabled": false,
            "routers": "northbound-api-router"
        },
        {
            "address": "172.31.128.1",
            "port": 9080,
            "httpsEnabled": false,
            "proxiesEnabled": true,
            "authEnabled": false,
            "routers": "southbound-api-router"
        }
    ],
    "httpDocsRoot": "./build/apidoc",
    "httpFileServiceRoot": "./static/files",
    "httpFileServiceType": "FileSystem",
    "fileServerAddress": "172.31.128.2",
    "fileServerPort": 3000,
    "fileServerPath": "/",
    "httpProxies": [
        {
            "localPath": "/coreos",
            "server": "",
            "remotePath": "/amd64-usr/current/"
        }
    ],
    "httpStaticRoot": "/opt/monorail/static/http",
    "authTokenSecret": "RackHDRocks!",
    "authTokenExpireIn": 86400,
    "mongo": "mongodb://localhost/pxe",
    "sharedKey": "qxfO2D3tIJsZACu7UA6Fbw0avowo8r79ALzn+WeuC8M=",
    "statsd": "127.0.0.1:8125",
    "syslogBindAddress": "172.31.128.1",
    "syslogBindPort": 514,
    "tftpBindAddress": "172.31.128.1",
    "tftpBindPort": 69,
    "tftpRoot": "./static/tftp",
    "minLogLevel": 2,
    "logColorEnable": false,
    "enableUPnP": true,
    "ssdpBindAddress": "0.0.0.0",
    "heartbeatIntervalSec": 10,
    "wssBindAddress": "0.0.0.0",
    "wssBindPort": 9100
}

The following table describes the configuration parameters in config.json:

The log levels for filtering are defined at

These configurations can also be overridden by setting environment variables in the process that's running each application, or on the command line when running node directly. For example, to override the value of amqp for the configuration, you could use:

export amqp=amqp://another_host:5763

prior to running the relevant application.

To use TLS, a private RSA key and X.509 certificate must be provided. On Ubuntu and Mac OS X, the openssl command line tool can be used to generate keys and certificates. For internal development purposes, a self-signed certificate can be used. When using a self-signed certificate, clients must manually include a rule to trust the certificate's authenticity. By default, the application uses a self-signed certificate issued by Monorail which requires no configuration. Custom certificates can also be used with some configuration.

Parameters
See the table in Configuration Parameters for information about HTTP/HTTPS configuration parameters. These parameters begin with HTTP and HTTPS.

A node gets discovered and the BMC IPMI comes up with a default username/password. Users can automatically set IPMI OBM settings using a default user name ('__rackhd__') and an auto-generated password in RackHD by adding the following to RackHD config.json:

"autoCreateObm": "true"

If a user wants to change the BMC credentials later in time, when the node has already been discovered and the database updated, a separate workflow located at on-taskgraph/lib/graphs/bootstrap-bmc-credentials-setup-graph.js can be posted using Postman or a curl command.

Add the below content in the JSON body for the payload (example node identifier, username, and password shown below):

{
  "name": "Graph.Bootstrap.With.BMC.Credentials.Setup",
  "options": {
    "defaults": {
      "graphOptions": {
        "target": "56e967f5b7a4085407da7898",
        "generate-pass": {
          "user": "7",
          "password": "7"
        }
      },
      "nodeId": "56e967f5b7a4085407da7898"
    }
  }
}

By running this workflow, a boot-graph runs to bootstrap an ubuntu image on the node again and set-bmc-credentials-graph runs the required tasks to update the BMC credentials. Below is a snippet of the 'Bootstrap-And-Set-Credentials' graph; when the graph is posted, the node reboots and starts the discovery process.

module.exports = {
    friendlyName: 'Bootstrap And Set Credentials',
    injectableName: 'Graph.Bootstrap.With.BMC.Credentials.Setup',
    options: {
        defaults: {
            graphOptions: {
                target: null
            },
            nodeId: null
        }
    },
    tasks: [
        {
            label: 'boot-graph',
            taskDefinition: {
                friendlyName: 'Boot Graph',
                injectableName: 'Task.Graph.Run.Boot',
                implementsTask: 'Task.Base.Graph.Run',
                options: {
                    graphName: 'Graph.BootstrapUbuntu',
                    defaults : {
                        graphOptions: { }
                    }
                },
                properties: {}
            }
        },
        {
            label: 'set-bmc-credentials-graph',
            taskDefinition: {
                friendlyName: 'Run BMC Credential Graph',
                injectableName: 'Task.Graph.Run.Bmc',
                implementsTask: 'Task.Base.Graph.Run',
                options: {
                    graphName: 'Graph.Set.Bmc.Credentials',
                    defaults : {
                        graphOptions: { }
                    }
                },
                properties: {}
            },
            waitOn: {
                'boot-graph': 'finished'
            }
        },
        {
            label: 'finish-bootstrap-trigger',
            taskName: 'Task.Trigger.Send.Finish',
            waitOn: {
                'set-bmc-credentials-graph': 'finished'
            }
        }
    ]
};

To remove the BMC credentials, users can run the following workflow located at on-taskgraph/lib/graphs/bootstrap-bmc-credentials-remove-graph.js, which can be posted using Postman or a curl command.

Add the below content in the JSON body for the payload (example node identifier, username, and password shown below):

{
  "name": "Graph.Bootstrap.With.BMC.Credentials.Remove",
  "options": {
    "defaults": {
      "graphOptions": {
        "target": "56e967f5b7a4085407da7898",
        "remove-bmc-credentials": {
          "users": ["7","8"]
        }
      },
      "nodeId": "56e967f5b7a4085407da7898"
    }
  }
}

This section describes how to generate and install a self-signed certificate to use for testing. If you already have a key and certificate, skip down to the Installing Certificates section.

First, generate a new RSA key:

openssl genrsa -out privkey.pem 2048

The file is output to privkey.pem. Keep this private key secret. If it is compromised, any corresponding certificate should be considered invalid.

The next step is to generate a self-signed certificate using the private key:

openssl req -new -x509 -key privkey.pem -out cacert.pem -days 9999

The days value is the number of days until the certificate expires. When you run this command, OpenSSL prompts you for some metadata to associate with the new certificate. The generated certificate contains the corresponding public key.

Once you have your private key and certificate, you'll need to let the application know where to find them. It is suggested that you move them into the /opt/monorail/data folder.

mv privkey.pem /opt/monorail/data/mykey.pem
mv cacert.pem /opt/monorail/data/mycert.pem

Then configure the paths by editing httpsCert and httpKey in /opt/monorail/config.json. (See the Configuration Parameters section above.)

If using a self-signed certificate, add a security exception to your client of choice. Verify the certificate by restarting on-http and visiting https://<host>/api/current/versions.

Note: For information about OpenSSL, see the OpenSSL documentation.

This section describes how to set up HTTP/HTTPS endpoints in RackHD. An endpoint is an instance of an HTTP or HTTPS server that serves a group of APIs. Users can choose to enable authentication or enable HTTPS for each endpoint. There are currently two API groups defined in RackHD:

[
    {
        "address": "0.0.0.0",
        "port": 8443,
        "httpsEnabled": true,
        "httpsCert": "data/dev-cert.pem",
        "httpsKey": "data/dev-key.pem",
        "httpsPfx": null,
        "proxiesEnabled": false,
        "authEnabled": false,
        "routers": "northbound-api-router"
    },
    {
        "address": "172.31.128.1",
        "port": 9080,
        "httpsEnabled": false,
        "proxiesEnabled": true,
        "authEnabled": false,
        "routers": "southbound-api-router"
    }
]
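Returning to the configuration override mechanism described at the top of this section, the following Python sketch models the precedence rule (an environment variable with the same name as a config.json key wins over the file value). It is an illustration of the behavior only, not RackHD code.

# Illustration of the precedence rule described above: an environment variable
# with the same name as a config.json key overrides the file value.
# This models the behavior; it is not RackHD code.
import json
import os

def load_config(path="/opt/monorail/config.json"):
    with open(path) as fh:
        config = json.load(fh)
    # Environment variables win over file values, e.g.
    #   export amqp=amqp://another_host:5763
    for key in config:
        if key in os.environ:
            config[key] = os.environ[key]
    return config

config = load_config()
print(config["amqp"])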
http://rackhd.readthedocs.io/en/latest/rackhd/configuration.html
2017-03-23T08:14:45
CC-MAIN-2017-13
1490218186841.66
[]
rackhd.readthedocs.io
Advanced Topics¶

These documents cover more advanced topics within Scrapy Cluster in no particular order.

- Upgrade Scrapy Cluster - How to update an older version of Scrapy Cluster to the latest
- Integration with ELK - Visualizing your cluster with the ELK stack gives you new insight into your cluster
- Docker - Use docker to provision and scale your Scrapy Cluster
- Crawling Responsibly - Responsible Crawling with Scrapy Cluster
- Production Setup - Thoughts on Production Scale Deployments
- DNS Cache - DNS Caching is bad for long lived spiders
- Response Time - How the production setup influences cluster response times
- Kafka Topics - The Kafka Topics generated when typically running the cluster
- Redis Keys - The keys generated when running a Scrapy Cluster in production
- Other Distributed Scrapy Projects - A comparison with other Scrapy projects that are distributed in nature
http://scrapy-cluster.readthedocs.io/en/dev/topics/advanced/index.html
2017-03-23T08:06:17
CC-MAIN-2017-13
1490218186841.66
[]
scrapy-cluster.readthedocs.io
The Tag API provides functionality to automatically categorize nodes into groups based on data present in a node’s catalogs or by manually assigning a tag to a node. When done automatically, tag matching is done using a series of rules. If all rules of a given tag match the latest version of a node’s catalog set, then that tag will be assigned to the node. A node may be assigned many tags, both automatically through rules matching or manually by the user. Upon discovering a node, the tag will be assigned based on all existing tag definitions in the system. Tags for all nodes will be re-generated whenever a tag definition is added. Tags that are currently assigned to a node are not automatically removed from nodes when the rules backing a tag are deleted. Example With a node that has the following catalog fields: { "source": "dmi", "data": { "Base Board Information": { "Manufacturer": "Intel Corporation" } }, "memory": { "total": "32946864kB" "free": "31682528kB" } /* ... */ } We could match against these fields with this tag definition: { "name": "Intel 32GB RAM", "rules": [ { "path": "dmi.Base Board Information.Manufacturer", "contains": "Intel" }, { "path": "dmi.memory.total", "equals": "32946864kB" } ] } In both cases, the “path” string starts with “dmi” to signify that the rule should apply to the catalog with a “source” value of “dmi”. This example makes use of the “contains” and “equals” rules. See the table at the bottom of this document for a list of additional validation rules that can be applied. When running the on-http process, these are some common API commands you can send. If you want to view or manipulate tags directly on nodes, please see the API notes at Node Tags. Create a New tag POST /api/current/tags { "name": "Intel-32GB-RAM", "rules": [ { "path": "dmi.Base Board Information.Manufacturer", "contains": "Intel" }, { "path": "ohai.dmi.memory.total", "equals": "32946864kB" } ] } Get List of tags GET /api/current/tags curl <server>/api/current/tags Get Definition for a Single tag GET /api/current/tags/:tagname curl <server>/api/current/tags/<tagname> Delete a Single tag DELETE /api/current/tags/:tagname curl -X DELETE <server>/api/current/tags/<tagname> List nodes with a tag GET /api/current/tags/:tagname/nodes curl <server>/api/current/tags/<tagname>/nodes curl -H "Content-Type: application/json" -X POST -d @options.json <server>/api/current/tags/<tagname>/nodes/workflows
http://rackhd.readthedocs.io/en/latest/rackhd/tags.html
2017-03-23T08:13:19
CC-MAIN-2017-13
1490218186841.66
[]
rackhd.readthedocs.io
Online membership sign up

This chapter explains how to allow visitors to your website to sign up as members of your organisation. It looks at the steps necessary to create membership sign up pages, some things to consider when doing so (including testing your membership pages), and the ways in which you can integrate membership sign up pages into your website. Before reading this chapter, you may wish to read the chapter Defining memberships which gives useful background to many concepts (like membership types, membership statuses, and so on).

About membership sign up pages

Membership sign up pages are created in the same way as online contribution pages. In essence, you create an online contribution page and then add to this page the ability to sign up for a membership. The reason we do this is because the actual work flow for becoming a member of an organisation is normally quite similar to the work flow for contributing money to an organisation, and tying them together allows membership pages to take advantage of a lot of the functionality that contribution pages have, for example, we can offer premiums as part of the membership sign up. Note that even if your membership is free, you should still use a contribution page for online sign up (see free memberships below for instructions on how to do this).

Contribution pages are very powerful and have a lot of options that are grouped together into tabs. Once you have given your contribution page a name, these tabs are displayed at the top of the page as you work through the rest of the set up process. In this chapter, we concentrate on the tabs and options of contribution pages that are most useful for memberships. A couple of tabs that are worth highlighting include the Memberships tab, which contains the bulk of the membership configuration, and the Profiles tab, which allows you to collect information about the people or organisations that are filling out your membership form. We recommend you also review the chapters on creating online contribution pages which will give you a better understanding of all the tools you have at your disposal when creating membership pages.

The Title tab

The title tab (which is also the first page that you see when you create a new membership page) allows you to set the title for the membership page and set some basic information like the Financial Type etc. that will be recorded for memberships that are made through this page. This tab also has space for you to include an introductory message to be displayed on your membership page. You can include images and other simple HTML in this introductory text.

Organisational memberships

The title tab contains a check box to allow people to become members on behalf of an organisation, which is the recommended way to offer organisational memberships. When enabled, you are prompted to select a profile (see the profiles chapter for more information) that will be used to collect organisational information. Organisational sign up can either be optional or required.

The Amounts tab

The amounts tab allows you to set various financial options, including the payment processor that is used on the page. Note that you can select more than one payment processor, which will give people who are signing up a choice. For more information on setting up payment processors, and things to consider when choosing a payment processor, see the chapter Payment processors.
Note that the amounts tab is not the place where membership fees are configured - they are configured on the Memberships tab. If you want to use this page for collecting membership amounts and do not want to solicit extra contributions, leave the Contribution Amounts section enabled checkbox unticked. If you do want to solicit contributions on top of membership fees, then tick the box and either add some suggested contribution options or configure a contribution price set. Free memberships If you are offering free memberships, and do not want any monetary donations, you should leave the 'Execute real-time monetary transactions' box unticked. The Memberships tab Since we are using this contribution page for membership sign up and renewal, we need to check the Membership Section Enabled to use this contribution page for memberships. The first few boxes allow you add text that will be displayed when this page is used for initial membership sign up and for renewals. When a logged in user with a current or expired membership views the membership sign up page, CiviCRM automatically replaces the membership sign up page with a membership renewal page which contains the text from the renewals box. After the text boxes, are a few options that you can use to configure the membership types available on the membership form. Looking at the simple use cases first, you select which membership types should be available on the page, which should be the default, and which can be auto-renewed (you'll need to have set up your membership types as auto-renew and have a payment processor that supports automatic recurring payments). If you enable auto-renew for a membership then on the web page users will see "Please renew my membership automatically. (Your initial membership fee will be processed once you complete the confirmation step. You will be able to cancel automatic renewals at any time by logging in to your account or contacting us.)" Membership payment receipt emails will include a link for the member to cancel the auto-renewal. If you want, you can make membership signup optional. This is often useful if you have a contribution page on which you want to offer the ability to become a member, but not require it (you will need to check the box for "Contribution Amounts section enabled" on the Amounts tab). You can decide whether such payments are recorded separately from membership fee payments. If you cannot accomplish what you need using the Membership Types table (for example if you want to offer sign up to two memberships at the same time, or offer sign ups with multiple membership terms), then you should use a membership price set (which is covered in its own chapter Membership price sets). Some of the things you can do with price sets include: - allow users to sign up for multiple classes of membership (e.g. "national" and "local" memberships) at the same time - let people sign up for multiple membership terms at the same time - offer other options such as a paid subscription in addition to membership signup. The Receipt tab After the site visitor completes the membership signup or renewal form, they will be redirected to a thank-you page and can have an email receipt generated and sent to them. This fourth step in the wizard allows you to configure those options. You can customise the message that gets added to the membership receipt on this page, and the email address that the receipt will come from. 
If you want to further customise the receipt email template you can do so using the Mailings > Message templates screen. You may also want to CC or BCC every membership receipt to a staff member so they are alerted immediately every time someone becomes a member. The Tell-A-Friend tab CiviCRM allows you to add a tell-a-friend feature to the thank-you page. The page lets your members share details about your organization with their friends by emailing them a link and information. Those friends that are told about the membership sign up will also be added to CiviCRM if they do not already exist and their source field will show that they were added via tell-a-friend. Collecting information as part of membership sign up (the Profiles tab) You can use profiles to collect information about your members as they fill in the sign up form. By default, contribution pages will include only an email field. Adding a profile to the contribution form will add a collection of fields that CiviCRM will display as part of the membership sign up form. You can use profiles to collect extra information about the contact, for example their address, their interests, etc. If the person signing up to become a member is logged in, their profile fields will be populated with data from CiviCRM where available. Don't add a membership profile. Collection of that information occurs automatically during the online membership sign up process. The profiles tab allows you to select an already existing profile to include on your membership page, and if you have permission, to edit an existing profile or create a new profile to be included on this page. WARNING: If you edit an existing profile here, it will be changed in all places where that profile is used. Premiums tab Premiums are thank you gifts and incentives offered to people that donate to your organisation. They are most commonly associated with tiered donation levels (e.g. donate $50 to receive a T-shirt) but can also be used in conjunction with your membership pages. Before including premiums on a contribution page, you must configure them through Contributions > Premiums (Thank-you Gifts). The Premiums tab of the contribution page wizard controls the introductory text, contact information, and other premium-related details. Testing membership sign up pages Once you finish configuring and setting up your membership page, you are advised to test drive the process to make sure everything is working according to your expectations. Test functionality is available on Contributions > Manage Contribution Pages, click Links next to your membership sign-up/renewal page and click Test-drive. Any membership data you send through the form in test mode will be added to CiviCRM as test data and not be included in any membership stats or when searching for members, etc. If you want to find and delete test memberships, you can do so by clicking the 'is test' check box in Find Memberships. Try and put yourself in the eyes of someone who wants to become a member of your organisation and go through the process a number of times, with different combinations of fields each time. Make sure that the data all appears as you would expect in CiviCRM. Once you've tested the process and have made any necessary changes, get other members of staff or friends from outside your organisation to test the process. 
When using the Test-drive Registration option, you see the same registration pages as a regular user, but the online payment isn't really debited from your card (see Payment processors for more information on dummy processors and card details you can use for test transactions). It is worthwhile periodically testing and reviewing your membership process to make sure that it is as smooth as possible. You will receive indirect feedback from your members as they use the form. If they are not entering data in the way you intended then you will need to make some changes. From time to time, you may want to solicit direct feedback from people who have recently become members to see how easy it was for them to become a member and ask their opinions on ways in which you could improve your form. Adding membership sign up pages to your website Once you've made your contribution page, you need to make it visible on your website. The method for this depends on the CMS. Instructions for each CMS are below. Membership sign up pages are built to inherit your website theme and should look reasonably nice out of the box. You can include images on the HTML text areas in the page to make them look more attractive. Many organisations want to spend time improving the look and feel of their membership pages in order to increase membership sign up rates. The methods for changing the design of these pages are outside the scope of this book but a website designer who is familiar with your CMS and CiviCRM will be able to help. In Drupal Go to Contributions > Manage Contribution Pages > click Links next to your membership sign-up/renewal page > click Live Page to view the finished page. You can then copy the URL and include it in a content page or assign it to a menu item. In Wordpress Go to Contributions > Manage Contribution Pages > click Links next to your membership sign-up/renewal page > click Live Page. Copy the URL and insert it into an HTML link or menu. Or use a plugin such as Page Links To create a URL 'slug'. Or click the Wordpress shortcode icon to insert a form into any page or post. In Joomla! The most direct way to expose your membership signup/renewal page to the front of your website in Joomla. Permissions needed for online membership sign up/renewal. Anonymous and Authenticated roles need the following CMS permissions to be able to join or renew online: - Profile listings and forms: This will let you collect core contact information ( name, address etc.) when people sign up. - Access all custom data: You must enable this permission to collect custom data. - Make online contributions: This permission must be granted unless your memberships are free and you have no interest in accepting donations when people sign up or renew.
https://docs.civicrm.org/user/en/4.6/membership/online-membership-sign-up/
2017-03-23T08:17:17
CC-MAIN-2017-13
1490218186841.66
[array(['../../img/membership-tabs.png', None], dtype=object) array(['../../img/Title%20settings%201%20.jpg', None], dtype=object) array(['../../img/contribution%20amounts.jpg', None], dtype=object) array(['../../img/membership%20signup%201.jpg', None], dtype=object) array(['../../img/membership%20page%20receipt%201.jpg', None], dtype=object) array(['../../img/membership%20page%20receipt%202.jpg', None], dtype=object) array(['../../img/tell%20a%20friend.jpg', None], dtype=object) array(['../../img/membership-profiles.png', None], dtype=object) array(['../../img/membership-profiles.png', None], dtype=object) array(['../../img/Wordpress-Shortcodes-small.png', None], dtype=object)]
docs.civicrm.org
This should be a memorable name to identify the certificate. For example, use the same value as in the Subject field, but without the CN= prefix. You can use placeholders for user data or device properties. The value that you enter (with placeholders replaced by the actual data) must be a valid X.500 name. For information on available placeholders, see Placeholders in profiles and policies.
http://docs.sophos.com/esg/smc/7-0/admin/en-us/webhelp/references/ConfigurationSCEPAndroid.htm
2017-03-23T08:09:18
CC-MAIN-2017-13
1490218186841.66
[]
docs.sophos.com
Category:Email Pages relating to email services on Webarchitects and Ecohost servers. Pages in category ‘Email’ The following 17 pages are in this category, out of 17 total. - We'd like details on setting up an email account hosted on ethical servers that we have more control over, and maybe for hosting a blog - We're moving our domain hosting to you. Do you have a record of our MX records? - What can I do about forged emails in my name? I'm getting notification of undelivered ones - What happens when my email is up and running?
https://docs.webarch.net/wiki/Category:Email
2017-03-23T08:06:17
CC-MAIN-2017-13
1490218186841.66
[]
docs.webarch.net
Actions and Condition Context Keys for Amazon SimpleDB Amazon SimpleDB provides the following service-specific actions and condition context keys for use in IAM policies. Actions for Amazon SimpleDB For information about using the following Amazon SimpleDB API actions in an IAM policy, see Amazon SimpleDB Actions in the Amazon SimpleDB Developer Guide. Condition context keys for Amazon SimpleDB For information about using condition keys in an IAM policy to control access to Amazon SimpleDB, see Amazon SimpleDB Keys in the Amazon SimpleDB Developer Guide. Amazon SimpleDB has no service-specific context keys that can be used in an IAM policy. For the list of the global condition context keys that are available to all services, see Global Condition Keys in the IAM Policy Elements Reference.
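To illustrate how such a policy might look (this example is not taken from the AWS guide; the action names sdb:Select and sdb:GetAttributes and the ARN format are believed correct but should be verified, and the account ID and domain name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sdb:Select", "sdb:GetAttributes"],
      "Resource": "arn:aws:sdb:us-east-1:123456789012:domain/my-domain"
    }
  ]
}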
http://docs.aws.amazon.com/IAM/latest/UserGuide/list_sdb.html
2017-03-23T08:12:53
CC-MAIN-2017-13
1490218186841.66
[]
docs.aws.amazon.com
With the Corporate Documents configuration you define settings for the Corporate Documents feature of the Sophos Secure Workspace app. For each storage provider you can define the following settings separately:
http://docs.sophos.com/esg/smc/7-0/admin/en-us/webhelp/references/ConfigurationCorporateDocumentsSophosContainerAndroid.htm
2017-03-23T08:16:31
CC-MAIN-2017-13
1490218186841.66
[]
docs.sophos.com
Multicast DNS (RFC 6762) Responder. This is roughly the mDNS equivalent of Dns_server, in that it accepts mDNS query packets and responds with the matching records from a zone file. The simplest usage is with shared resource records only, which requires the following steps: The use of unique resource records requires alternative steps: As per RFC 6762 section 9, if at any time the responder observes a response that conflicts with a record that was previously already confirmed as unique, it restarts the probing sequence. Therefore, it is necessary to invoke the "stop_probe" function to shut down the responder. type ip_endpoint = Ipaddr.V4.t * int An endpoint address consisting of an IPv4 address and a UDP port number. Encapsulates the dependencies that the responder requires for performing I/O.
http://docs.mirage.io/dns/Mdns_responder/index.html
2017-03-23T08:20:24
CC-MAIN-2017-13
1490218186841.66
[]
docs.mirage.io
Append the inst.vnc option to the end of the command line. You can also add the inst.vncpassword= boot option. Replace PASSWORD with the password you want to use for the installation. The VNC password must be between 6 and 8 characters long. Use a temporary password for the inst.vncpassword= option. It should not be a real or root password you use on any system. Once the installation begins, the installer displays a message such as: 13:14:47 Please manually connect your VNC viewer to 192.168.100.131:5901 to begin the install. Connect your VNC viewer to the address and display shown (for example, 192.168.100.131:5901).
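For illustration, a boot line with both options appended might look like the following; the existing arguments and the password value are placeholders, not values taken from this guide:

vmlinuz initrd=initrd.img inst.vnc inst.vncpassword=qwerty12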
https://docs.fedoraproject.org/en-US/Fedora/25/html/Installation_Guide/sect-vnc-installations-direct-mode.html
2017-03-23T08:16:44
CC-MAIN-2017-13
1490218186841.66
[]
docs.fedoraproject.org
File Transfer Protocol (FTP) is a standard protocol used for transmitting files between computers on the Internet over TCP/IP. FTP allows energy data in .CSV file format to be pushed automatically and periodically to the Wattics FTP server. Most head end systems, gateways, PLCs, BMS, DCIMs, etc. natively support FTP data push, making it an easy choice for data integration to Wattics. Step 1: Request your Wattics FTP credentials Wattics will provide you with FTP credentials within 24 hours: Host: ftp-collector.wattics.com Username: wattics_123 Password: ADE94bgK Folder: source Step 2: Configure your system to push .CSV files to our FTP servers Use the FTP credentials provided to configure your data system. We will be happy to help you figure it out if it proves complicated. - What data intervals do we support: We support anything from 5mn, 15mn.
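How the push is configured depends entirely on your metering system. As a hedged sketch for a system that can run Python, the standard library's ftplib can upload a readings file using the example credentials above (replace them with the credentials Wattics issues to you; the file name is illustrative):

import ftplib

# Connect with the FTP credentials provided by Wattics (example values shown above).
ftp = ftplib.FTP("ftp-collector.wattics.com")
ftp.login(user="wattics_123", passwd="ADE94bgK")

# Upload the latest readings into the agreed folder.
ftp.cwd("source")
with open("meter_readings.csv", "rb") as f:
    ftp.storbinary("STOR meter_readings.csv", f)

ftp.quit()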
http://docs.wattics.com/2017/03/06/get-your-energy-data-to-wattics-via-ftp/
2017-03-23T08:11:44
CC-MAIN-2017-13
1490218186841.66
[array(['/wp-content/uploads/2017/03/CSVviaFTP-Wattics.jpg', 'ftp'], dtype=object) ]
docs.wattics.com
Introduction¶ Mypy is a static type checker for Python. If you sprinkle your code with type annotations, mypy can type check your code and find common bugs. As mypy is a static analyzer, or a lint-like tool, your code’s type annotations are just hints and don’t interfere when running your program. You run your program with a standard Python interpreter, and the annotations are treated primarily as comments. Using the Python 3 function annotation syntax (using the PEP 484 notation) or a comment-based annotation syntax for Python 2 code, you will be able to efficiently annotate your code and use mypy to check the code for common errors. Mypy has a powerful, easy-to-use, type system with modern features such as type inference, generics, function types, tuple types and union types. As a developer, you decide how to use mypy in your workflow. You can always escape to dynamic typing as mypy’s approach to static typing doesn’t restrict what you can do in your programs. Using mypy will make your programs easier to debug, maintain, and understand. This documentation provides a short introduction to mypy. It will help you get started writing statically typed code. Knowledge of Python and a statically typed object-oriented language, such as Java, are assumed. Note Mypy is still experimental. There will be changes that break backward compatibility.
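As a brief illustration (not part of the original introduction), the following annotated function is the kind of code mypy checks; the second call would be flagged as an error without ever running the program:

def greeting(name: str) -> str:
    # mypy verifies that callers pass a str and that a str is returned.
    return "Hello, " + name

greeting("World")  # OK
greeting(42)       # mypy: Argument 1 to "greeting" has incompatible type "int"; expected "str"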
https://mypy.readthedocs.io/en/latest/introduction.html
2017-03-23T08:13:31
CC-MAIN-2017-13
1490218186841.66
[]
mypy.readthedocs.io
Release Notes¶ Scikit-monaco v0.2 release notes¶ - Add MISER algorithm for recursive stratified sampling. - All integration routines now respond to KeyboardInterrupt. - Additional benchmarks for importance sampling. Scikit-monaco v0.1.5 release notes¶ Version 0.1.5 provides an important bugfix. Seeds were not being properly generated, such that repeated calls to mcquad or mcimport that came within the same value of int(time.time()) had the same seed. Seeding is now left to the (more capable) hands of the RNG, which just gets passed seed(None) if the user hasn't specified a seed.
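As a hedged sketch of the behaviour the fix addresses (the exact call signature should be checked against the scikit-monaco documentation), passing an explicit seed to mcquad makes a result reproducible, while omitting it now yields independently seeded calls even within the same second:

from skmonaco import mcquad

# Monte Carlo estimate of the integral of x*y over the unit square.
# With an explicit seed the estimate is reproducible; without one,
# each call is now seeded independently by the RNG.
result, error = mcquad(lambda xy: xy[0] * xy[1],
                       npoints=100000, xl=[0., 0.], xu=[1., 1.],
                       seed=12345)
print(result, error)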
http://scikit-monaco.readthedocs.io/en/latest/release_notes.html
2017-03-23T08:17:33
CC-MAIN-2017-13
1490218186841.66
[]
scikit-monaco.readthedocs.io
Neuroscientists need to manage and integrate data from anatomy, physiology, behavior and simulation data on multiple spatial and temporal scales and across modalities, individuals and species. Large amounts of data with complex data types are to be produced in the coming decades - and viable solutions for databasing, data sharing and interoperability of software tools are needed. “Hierarchical Data Format (HDF5) is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.” NeuroHDF is an effort to combine the flexibility and efficiency of HDF5 for neuroscience datasets through the specification of a simple layout for different data types with minimal Metadata. The NeuroHDF Interest Group consists of the members of this group.
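To make the idea concrete (this is an illustrative sketch, not a layout defined by NeuroHDF), HDF5 files can hold heterogeneous neuroscience data as datasets with metadata attached as attributes, for example using the h5py library in Python:

import h5py
import numpy as np

# Store a voltage trace and a spike-time list in one HDF5 file,
# each with minimal metadata attached as HDF5 attributes.
with h5py.File("recording.h5", "w") as f:
    trace = f.create_dataset("physiology/voltage", data=np.random.randn(10000))
    trace.attrs["sampling_rate_hz"] = 20000.0
    spikes = f.create_dataset("physiology/spike_times", data=np.array([0.012, 0.034, 0.051]))
    spikes.attrs["units"] = "seconds"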
http://neurohdf.readthedocs.io/en/latest/
2017-03-23T08:06:21
CC-MAIN-2017-13
1490218186841.66
[]
neurohdf.readthedocs.io
Input Mask Dialog The Input Mask dialog lets you create and modify the masks that are the values of the RadMaskedTextBox control's Mask and DisplayMask properties. You can display the Input Mask dialog in two ways: From the RadMaskedTextBox Smart Tag, choose the SetMask link. When you bring up the Input Mask dialog in this way, the mask you create or choose is assigned to the Mask property, which controls the mask that is used when the user can edit the value. Click the ellipsis button next to the Mask or DisplayMask property in the properties pane. The DisplayMask property is an optional override to the Mask property, for formatting the value of the masked text box when it does not have focus. At the top of the dialog is a table of pre-defined masks that you can choose for common input tasks, along with sample values. To choose a pre-defined mask, select its row in the table. The mask automatically appears in the Mask text box, with a preview to show the prompts and literals below it. To specify a mask that is not pre-defined, choose from the table of pre-defined masks. When you choose custom, the last mask that was selected in the table remains in the Mask text box for you to use as a starting point in entering a new mask. The preview updates as you edit the mask. For complicated masks, you may choose to use the MaskPart Collection Editor instead.
https://docs.telerik.com/devtools/aspnet-ajax/controls/maskedtextbox/design-time/input-mask-dialog
2017-11-18T02:54:17
CC-MAIN-2017-47
1510934804518.38
[]
docs.telerik.com
Overview Jet-Magento Integration is an extension developed by CedCommerce that establishes synchronization of inventory, price, and other details for product creation and management between the Magento store and Jet.com. This extension interacts with the Jet Marketplace to integrate the synchronized product listing between the Magento and Jet.com retailers. After installing the extension, the merchant can create the Jet Categories and the dependent attributes on the Magento store. The process enables the merchant to configure the desired product category in Magento for automatic submission of the selected product to the same category on Jet.com. The Jet Magento Integration extension provides the following features: - Profile based product upload - Easy Jet Category and Attribute mapping - Manage Jet Product and Upload Product (directly from grid and bulk upload all products) - Product Synchronization - Automatic process on each product edit - Manual synchronization process - Review Product Upload Errors - Automated Order Import and Acknowledgement - Shipment and Cancellation of Orders - Automated Shipment with Shipworks and Shipstation - Multiple Shipment of an Order - Fetch and Submit Return - Create Refund - Upload Configurable Product - Upload simple Product - Archive Selected Product - Archive Products in Bulk - Unarchive Selected Product - Unarchive Products in Bulk - Shipping Exception - Return Exception - CRON Facility - Knowledge Base - Map Magento Global variant attributes Caution: The extension is heavily dependent on crons for running various automated processes, so make sure that the Cron Job is properly configured and working on the server.
https://docs.cedcommerce.com/magento/jet-magento-integration-guide-0-3-4-2
2017-11-18T02:46:38
CC-MAIN-2017-47
1510934804518.38
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', None], dtype=object) ]
docs.cedcommerce.com
The View Connection Server upgrade process has specific requirements and limitations. View Connection Server requires a valid license key for this latest release. The domain user account that you use to install the new version of View Connection Server must have administrative privileges on the View Connection Server host. The View Connection Server administrator must have administrative credentials for vCenter Server. When you run the installer,. When you back up View Connection Server, the View LDAP configuration is exported as encrypted LDIF data. To restore the encrypted backup View configuration, you must provide the data recovery password. The password must contain between 1 and 128 characters. Security-Related Requirements View Connection Server requires an SSL certificate that is signed by a CA (certificate authority) and that your clients can validate. Although a default self-signed certificate is generated in the absence of a CA-signed certificate when you install View Connection Server, you must replace the default self-signed certificate as soon as possible. Self-signed certificates are shown as invalid in View Administrator. Also, updated clients expect information about the server's certificate to be communicated as part of the SSL handshake between client and server. Often updated clients do not trust self-signed certificates. For complete information about security certificate requirements, see "Configuring SSL Certificates for View Servers" in the View Installation guide. Also see the Scenarios for Setting Up SSL Connections to View document, which describes setting up intermediate servers that perform tasks such as load balancing and off-loading SSL connections.Note: If your original servers already have SSL certificates signed by a CA, during the upgrade, View imports your existing CA-signed certificate into the Windows Server certificate store. Certificates for vCenter Server, View Composer, and View servers must include certificate revocation lists (CRLs). For more information, see "Configuring Certificate Revocation Checking on Server Certificates" in the View Installation guide.Important: If your company uses proxy settings for Internet access, you might have to configure your View Connection Server hosts to use the proxy. This step ensures that servers can access certificate revocation checking sites on the Internet. You can use Microsoft Netshell commands to import the proxy settings to View Connection Server. For more information, see "Troubleshooting View Server Certificate Revocation Checking" in the View Administration guide. firewall between a security server and a View Connection Server instance, you must configure the firewall to support IPsec. See the View Installation document. If you plan to perform fresh installations of View Connection Server instances on additional physical or virtual machines, see the complete list of installation requirements in the View Installation document.
https://docs.vmware.com/en/VMware-Horizon-6/6.1/com.vmware.horizon-view.upgrade.doc/GUID-79F50098-CCB3-4688-8732-9832413C6454.html
2017-11-18T02:52:44
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
. Before you begin Your Active Directory administrator must create a View Composer user for AD operations. This domain user must have permission to add and remove virtual machines from the Active Directory domain that contains your linked clones. For information about the required permissions for this user, see Create a User Account for View Composer AD Operations. In View Administrator, verify that you completed the vCenter Server Information and View Composer Settings pages in the Add vCenter Server wizard. View Storage Accelerator for View.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon-view.installation.doc/GUID-35A6A7F3-6C4E-4BE0-8A9B-77B7D8FA1B2D.html
2017-11-18T02:52:21
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Smart Contract Framework The smart contract framework provides consistent APIs across several programming languages that programmers can use to manipulate state on the blockchain. The smart contract framework currently supports the .NET Framework; the Java framework and its respective compiler will be released within a few months. In the future, we will also support Python, C/C++, and Go.
http://docs.neo.org/en-us/sc/fw.html
2017-11-18T02:52:15
CC-MAIN-2017-47
1510934804518.38
[]
docs.neo.org
Using Earth Files¶ An Earth File is an XML description of a map. Creating an earth file is the easiest way to configure a map and get up and running quickly. In the osgEarth repository you will find dozens of sample earth files in the tests folder, covering various topics and demonstrating various features. We encourage you to explore and try them out! Also see: Earth File Reference Contents of an Earth File¶ osgEarth uses an XML based file format called an Earth File to specify exactly how source data turns into an OSG scene graph. An Earth File has a .earth extension, but it is XML. Fundamentally the Earth File allows you to specify: - The type of map to create (geocentric or projected) - The image, elevation, vector and model sources to use - Where the data will be cached A Simple Earth File¶ Here is a very simple example that reads data from a GeoTIFF file on the local file system and renders it as a geocentric round Earth scene: <map name="MyMap" type="geocentric" version="2"> <image name="bluemarble" driver="gdal"> <url>world.tif</url> </image> </map> This Earth File creates a geocentric Map named MyMap with a single GeoTIFF image source called bluemarble. The driver attribute tells osgEarth which of its plugins to use to use to load the image. (osgEarth uses a plug-in framework to load different types of data from different sources.) Some of the sub-elements (under image) are particular to the selected driver. To learn more about drivers and how to configure each one, please refer to the Driver Reference Guide. Note: the ``version`` number is required! Multiple Image Layers¶ osgEarth supports maps with multiple image sources. This allows you to create maps such as base layer with a transportation overlay or provide high resolution insets for specific areas that sit atop a lower resolution base map. To add multiple images to a Earth File, simply add multiple “image” blocks to your Earth File: <map name="Transportation" type="geocentric" version="2"> <!--Add a base map of the blue marble data--> <image name="bluemarble" driver="gdal"> <url>c:/data/bluemarble.tif</url> </image> <!--Add a high resolution inset of Washington, DC--> <image name="dc" driver="gdal"> <url>c:/data/dc_high_res.tif</url> </image> </map> The above map provides two images from local data sources using the GDAL driver. Order is important when defining multiple image sources: osgEarth renders them in the order in which they appear in the Earth File. Tip: relative paths within an Earth File are interpreted as being relative to the Earth File itself. Adding Elevation Data¶ Adding elevation data (sometimes called “terrain data”) to an Earth File is very similar to adding images. Use an elevation block like so: <map name="Elevation" type="geocentric" version="2"> <!--Add a base map of the blue marble data--> <image name="bluemarble" driver="gdal"> <url>c:/data/bluemarble.tif</url> </image> <!--Add SRTM data--> <elevation name="srtm" driver="gdal"> <url>c:/data/SRTM.tif</url> </elevation> </map> This Earth File has a base bluemarble image as well as a elevation grid that is loaded from a local GeoTIFF file. You can add as many elevation layers as you like; osgEarth will combine them into a single mesh. As with images, order is important - For example, if you have a base elevation data source with low-resolution coverage of the entire world and a high-resolution inset of a city, you need specify the base data FIRST, followed by the high-resolution inset. 
Some osgEarth drivers can generate elevation grids as well as imagery. Note: osgEarth only supports single-channel 16-bit integer or 32-bit floating point data for use in elevation layers. Caching¶ Since osgEarth renders data on demand, it sometimes needs to do some work in order to prepare a tile for display. The cache exists so that osgEarth can save the results of this work for next time, instead of processing the tile anew each time. This increases performance and avoids multiple downloads of the same data. Here’s an example cache setup: <map name="TMS Example" type="geocentric" version="2"> <image name="metacarta blue marble" driver="tms"> <url></url> </image> <options> <!--Specify where to cache the data--> <cache type="filesystem"> <path>c:/osgearth_cache</path> </cache> </options> </map> This Earth File shows the most basic way to specify a cache for osgEarth. This tells osgEarth to enable caching and to cache to the folder c:/osgearth_cache. The cache path can be relative or absolute; relative paths are relative to the Earth File itself. There are many ways to configure caching; please refer to the section on Caching for more details.
http://osgearth.readthedocs.io/en/latest/user/earthfiles.html
2017-12-11T02:11:22
CC-MAIN-2017-51
1512948512054.0
[]
osgearth.readthedocs.io
You cannot reprint elements from a PBS KIDS page in a newsletter or booklet without permission from PBS. I have a technical problem with a PBS KIDS Web site. First, try restarting your device: Hold down the power button until the screen reads «slide to power off.» Power off the device by moving the slider. See cool humanoid robots, robotic fish, robots used by the army, demonstrations, exhibitions and more. Java: Some PBS KIDS games use the Java plug-in.
http://wp-docs.ru/2016/09/14/10647-%D1%81%D0%BA%D0%B0%D1%87%D0%B0%D1%82%D1%8C-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD-new-life-cinema
2017-12-11T01:56:36
CC-MAIN-2017-51
1512948512054.0
[array(['http://8dle.ru/uploads/posts/2014-07/1404653433_1401205728_fullstory.jpg', None], dtype=object) ]
wp-docs.ru
- Run an execution profile - App-V 5.0 Sequencer execution profile - App-V 4.6 SP1 Sequencer execution profile - Edit an execution profile You can use the App-V 4.6 SP1 Sequencer execution profile with Install Capture, Self-Provisioning, or Forward Path to package applications for deployment using the App-V Client 4.6 SP1. .sft file is automatically imported. Because the .sft 4.6 SP1 Sequencer execution profile, perform the following additional setup on the capture machine: For general instructions: As mentioned earlier, by default this execution profile generally installs the application on the capture machine twice. To suppress the installation outside of the sequencer, give the ImportSft replaceable a value of 1. 4.6 SP1 Sequencer execution profile.
https://docs.citrix.com/de-de/dna/7-6/dna-configure/dna-install-capture-config/dna-execution-profiles/dna-execution-profile-appv-46sp1.html
2017-12-11T01:51:55
CC-MAIN-2017-51
1512948512054.0
[]
docs.citrix.com
Use UI policy instead of a client script When possible, consider using a UI policy instead of a client script. UI policies provide these benefits over client scripts: UI policies have an Order field to allow full control over the order in which client-side operations take place. UI policies do not require scripting to make a field mandatory, read-only, or visible. Note: UI policies apply after client scripts.
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/app-store/good_practices/client_script/concept/c_UseUIPolicy.html
2017-12-11T02:22:29
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
Security Incident Reconnaissance workflow template Reconnaissance is usually a preliminary step toward a further attack seeking to exploit a device or system. The Security Incident - Reconnaissance - Template allows you to perform a series of tasks designed to handle reconnaissance on your network. Before you begin Role required: sn_si.write About this task The workflow is triggered when the Category in a security incident is set to Reconnaissance activity. This action causes a response task to be created for the first activity in the workflow. Procedure Open the security incident for this potential attack, or create a new security incident. In Category, select Reconnaissance activity. Save the record. Scroll down and open the Response Tasks related list. The first of a series of response tasks appears. Each time the record is saved, your response to the previous task either causes the next response task to be created or the workflow to end.Table 1. Response tasks in Reconnaissance Template Response task Action Results Reconnaissance activity verified? Determine whether any observed reconnaissance has been verified.In the task, select Yes or No in Outcome. If you select Yes, the Identify impacted systems task is created. If you select No, the workflow ends. Identify impacted systems Determine the systems impacted by the reconnaissance. When this task is complete, the Allow reconnaissancefor law enforcement analysis? task is created. Allow reconnaissance for law enforcement analysis? Determine whether you want the reconnaissance to be analyzed by law enforcement agencies. In the task, select Yes or No in Outcome. If you select Yes, the Law enforcement process task is created.If you select No, the Update system(s) to prevent reconnaissance task is created. Law enforcement process Perform the law enforcement process as defined by your company. When this task is complete, the Update system(s) to prevent reconnaissance task is created. Update system(s) to prevent reconnaissance Perform the steps necessary to update the systems affected by the reconnaissance. reconnaissance incident. Update the State field in the task as appropriate. When this task is complete, the workflow ends. Related TasksSecurity Incident Confidential Data Exposure workflow templateSecurity Incident Denial of Service workflow templateSecurity Incident Lost Equipment workflow templateSecurity Incident Malicious Software workflow templateSecurity Incident Phishing workflow templateSecurity Incident Policy Violation workflow templateSecurity Incident Rogue Server or Service workflow templateSecurity Incident Spam workflow templateSecurity Incident Unauthorized Access workflow templateSecurity Incident Web/BBS Defacement workflow template
https://docs.servicenow.com/bundle/jakarta-security-management/page/product/security-incident-response-orchestration/task/si-recon-wf-template.html
2017-12-11T02:19:48
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
Your First Folder Introduction Folders are a great way to group similar forms or create a process for your employees. You can create as many folders as you need. Folders can be setup to capture the following information: - Scheduled dates - Location - Description - Status (open or closed) Add a Folder Web Instructions - Click Foldersin the main menu. - Click Add Folder. - Select the Folder Type for the new Folder. - Search for the Account you want to add the folder to. - Click Create Folder. Mobile Instructions - Tap Foldersfrom the Main Menu. - Tap +. - Fill in the New Folder details as desired: - Folder ID: If you do not enter a specific ID, we will generate one for you. - Folder Type: Required, Select the type of folder you'd like to create. - Account: Required. - Location - Description - Schedule Details: Select a start and end time to schedule your folder. You can also add a note for the person(s) you share the folder with. - Share Folder With: Select people to share the folder with. The folder will display in their schedule as well as yours. - Tap Create Folder. Learn more about the amazing power of Folders!
http://docs.inspectall.com/article/7-your-first-folder
2017-12-11T02:22:12
CC-MAIN-2017-51
1512948512054.0
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/564a7e7890336002f86de0be/images/564a8eab90336002f86de0fc/file-Cb3Byeny2T.jpg', 'Add a Folder'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/564a7e7890336002f86de0be/images/564a8ecd90336002f86de0fd/file-5msAw6tuaM.gif', 'Add a Folder Mobile'], dtype=object) ]
docs.inspectall.com
Migrate customizations to cart layouts Move customization to cart layout widgets before you. Related TasksEnable the cart layoutsRelated ConceptsReview of cart layout settings
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/service-catalog-management/task/t_MoveCustomizationsToCartLayouts.html
2017-12-11T02:18:12
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
Add IoCs and observables to an existing case You can add IoCs and observables to existing cases. After the security incidents have been added to cases, you can use Security Case Management to analyze the data. Before you beginThe Threat Intelligence plugin must be activated to use Security Case Management.Role required: sn_ti.case_user_write Procedure Navigate to the artifacts (IoCs or observables) you want to add to existing cases. To add IoCs to one or more cases, navigate to Threat Intelligence > IoC Repository > Indicators. To add observables to one or more cases, navigate to Threat Intelligence > IoC Repository > Observables. In the list, select the artifact records you want added to existing cases. Note: If you select multiple cases, the selected IoCs or observables are added to each of the selected cases. From the Actions on selected items drop-down list, select Add to Security Case. The Add to Security Case dialog box opens. If you already have cases assigned to you, they display in the list. Select the cases into which you want to add the selected IoCs or observables. Click Add. A message indicates that the selected records have been added to the cases, along with a link to the cases in Security Case Management. Related TasksCreate a case from IoCs or observablesCreate an observable from a caseRun a local sightings search on observables in a case
https://docs.servicenow.com/bundle/jakarta-security-management/page/product/threat-intelligence-case-management/task/add-records-to-cases-threat.html
2017-12-11T02:18:08
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
Polymers are composed of multiply repeated subunits. The repeating units are enclosed by polymer brackets. Polymers can be represented as structure-based or source-based polymers, depending on how much structural detail is known. This section consists of the following subsections: Structure-based Representation of Polymers Structural Repeating Unit (SRU) Polymers Repeating Units with Repetition Ranges - Frequency Variation Source-based Representation of Polymers
https://docs.chemaxon.com/display/lts-gallium/polymers.md
2021-09-17T04:25:21
CC-MAIN-2021-39
1631780054023.35
[]
docs.chemaxon.com
Contact forms made easy A contact form is an essential and crucial tool on any website, especially when dealing with products, services, and customers in general. Learn how to create a contact form, easily, without too much coding using our contact center software as the backend tool. If you don't know how to create a form, or you simply don't want to bother creating everything from the ground up, feel free to download, tweak, and use this one. The form is already integrated with ContactJS, just modify the parameters following the step 2 instructions! On your website, create a form element with ID fh-contactjs, and add the following fields: If you want, you can reduce the form by eliminating any optional field. To show each field validation error, simply add a div element with the ID of the field followed by "field-error" suffix. For example: <div id="first-name-field-error"></div> With the form in place, it's time to add the ContactJs library — a small tool we have created that converts forms submissions into Full Help conversations that you can manage and reply directly from your help desk. <script src="[email protected]/dist/FullHelp.ContactJs.min.js"></script> <script> (function () { var contactjs = new FullHelp.ContactJs({ account: 1, host: '', source: 'Form @ ' + window.location.host }); })(); </script> Copy the code and paste it on your website, right before the </body> or </head> closing tags. Then, update the account and host parameters with your help desk corresponding values. To get the id of your account, simply go to your help desk instance, click over your username (top-right corner), then click on Account.
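The field list above depends on your setup; as a minimal hedged sketch (field names such as first-name, email, and message are assumptions based on the error-div example, not a definitive list), the form markup could look like this:

<form id="fh-contactjs">
  <!-- Field names below are illustrative; adjust them to the fields your form actually uses. -->
  <input type="text" name="first-name" placeholder="First name">
  <div id="first-name-field-error"></div>

  <input type="email" name="email" placeholder="Email address">
  <div id="email-field-error"></div>

  <textarea name="message" placeholder="How can we help?"></textarea>
  <div id="message-field-error"></div>

  <button type="submit">Send</button>
</form>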
https://docs.fullhelp.com/en/article/32/how-to-create-a-contact-form
2021-09-17T04:33:02
CC-MAIN-2021-39
1631780054023.35
[]
docs.fullhelp.com
Use the following procedure steps to install the Data ONTAP SMI-S Agent in order to successfully monitor NetApp FAS series storage devices using WhatsUp Gold. The Agent is required for monitoring volume statistical data and must be installed on a machine that can communicate with both WhatsUp Gold and the storage device or devices being monitored. smis cimserver status cimuser -a -u <username> -w <password> Note: The user created using this command must match an existing local Windows user account. Additionally, when creating the credential in WhatsUp Gold, enter the password created using this command rather than the password for the local Windows user account. CACHE_REFRESH_SEC Note: The Data ONTAP SMI-S Agent uses a default collection interval of 5 minutes. Ipswitch recommends setting the cache refresh rate interval to match the interval set for disk utilization data collection in WhatsUp Gold. Refer to Windows documentation for information on creating variables. Ensure the vsadmin user is present and unlocked, and that sshd and ontapi are enabled. smis add <SVM IP address> vsadmin smis addsecure <SVM IP address> vsadmin to configure SMI-S to use HTTPS instead of HTTP. smis list
https://docs.ipswitch.com/NM/WhatsUpGold2018/03_Help/1033/43171.htm
2021-09-17T02:51:17
CC-MAIN-2021-39
1631780054023.35
[]
docs.ipswitch.com
When creating custom device roles, browse and begin with a Default device role that you can learn from and modify. In most cases you will want to re-use the following from the default role that you clone: WhatsUp Gold Provides a role for discovering and monitoring APC brand UPS devices. This example uses this role as a starting point for discovering and monitoring other UPS brands. WhatsUp Gold adds a new sub role to the library named Copy of UPS.
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/44409.htm
2021-09-17T04:08:22
CC-MAIN-2021-39
1631780054023.35
[]
docs.ipswitch.com
SAP JCo Adapter The SAP JCo adapter is used to access, retrieve, and modify data in on-premise SAP systems from RunMyProcess DigitalSuite by means of the SAP Java Connector (JCo), version 3.x. SAP JCo is a development library provided by SAP that enables Java applications to communicate with on-premise SAP systems by Business API (BAPI) and Remote Function Calls (RFC) on SAP's RFC protocol. - SAP JCo Adapter - Overview - Prerequisites - Installing the Adapter - Configuring the Adapter - Starting the Adapter - Using the Adapter - Considerations Overview The following illustration shows the components involved in the communication between RunMyProcess DigitalSuite and SAP as well as the request and response format conversions between them: The SAP JCo adapter supports two types of connections with SAP: "with pool" and "without pool". For establishing a connection with SAP, two sets of parameters are required: input parameters and configuration parameters. The input parameters, SAP authorization and request information, are sent through the body of the RunMyProcess requests. The configuration parameters, SAP server information/destination parameters, are maintained in the SAP JCo adapter's configuration file. SAP JCo and the SAP JCo adapter are generic, carrying out conversions, re-formatting, and re-arrangements as required. They support: - Different BAPIs of SAP: standard BAPIs, modified BAPIs, custom BAPIs, BAPIs with or without parameters. The only requirement is that the BAPI is accessible through SAP JCo and that BAPI metadata is available in the JCo repository. - All types of BAPI parameters: import parameters, export parameters, table parameters, changing parameters - Any number of parameters - 14 ABAP data types - All parameter field types: field, structure, table The SAP JCo adapter can send back its responses to RunMyProcess DigitalSuite in JSON or XML format. Except for errors that occurred, the output is encoded in Base64. Prerequisites The following prerequisites must be fulfilled to install and run the SAP JCo adapter: The adapter can be installed for several SAP installations; the identifier of the adapter (the protocol setting in the handler.config configuration file) must be unique for each of the installations. SAP JCo, Version 3.x, must be installed on the system where you want to run the adapter. It must be installed as a stand-alone component, not as a version integrated with "SAP Business Connector" or "AS Java". SAP JCo is provided and maintained by SAP. As an SAP customer, you can download SAP JCo from. Install SAP JCo as described in the installation instructions on the download site and in the archive. The installation folder will be referred to as [sap-jco] in the following descriptions. To check whether SAP JCo is installed correctly, you can start its "About" dialog, for example, by calling: java -jar [sap-jco]/sapjco3.jar on Windows, or java -jar [sap-jco]/sapjco3.jar -stdout on Linux The SAP installation you want to work with (SAP application, message, and gateway servers, or the SAP router) must be accessible by SAP JCo from the system where it is installed together with the DigitalSuite EnterpriseConnect Agent. Installing the Adapter Copy the SAP JCo adapter files to the sap folder under [parent-folder], where [parent-folder] is a folder of your choice. If the EnterpriseConnect Agent is installed on the same machine, use its installation folder as the [parent-folder], for example, C:\Program Files (x86)\dsec-agent.
Copy the configuration files for the SAP JCo adapter, JCO3.config and handler.config, from the configFiles\sap.reference subfolder to the configFiles folder. Overwrite existing files in the configFiles folder. Depending on the operating system, copy the following files from your SAP JCo installation to the lib folder: Microsoft Windows: Linux: If desired, delete obsolete files. Only the following folders and files are required in the sap folder to use the adapter: In addition, we recommend you keep the following: runAdapter.bat batch file for starting the adapter on Microsoft Windows sap.reference subfolder in the configFiles folder for reference purposes Configuring the Adapter Configuration settings for the SAP JCo adapter are maintained in the JCO3.config and handler.config configuration files. JCO3.config The JCO3.config file contains specific settings for the adapter: The settings have the following meaning: JCO_ASHOST: The host name or IP address of the SAP application server to work with. JCO_SYSNR: The two-digit system number, e.g. 01. JCO_CLIENT: The three-digit number identifying the SAP client to work with in the SAP system, e.g. 321. A client is a self-contained unit in an SAP system with separate master records and its own set of tables. JCO_LANG: The two-character ISO language code specifying the logon language, e.g. DE. JCO_POOL_CAPACITY: The maximum number of idle connections to keep open. JCO_PEAK_LIMIT: The maximum number of active connections that can simultaneously be created. Starting the Adapter The adapter needs to be running to be able to work with the SAP installation. Before you start the adapter, make sure that the DigitalSuite EnterpriseConnect Agent is running. To start the adapter: Change to the sap folder and run the start script, for example the runAdapter.bat batch file on Microsoft Windows. Using the Adapter You can execute BAPIs in SAP from RunMyProcess DigitalSuite through the EnterpriseConnect Agent and the SAP JCo adapter. Request: POST on http://[agent-host]:[port]/, where [agent-host] and [port] are the IP address and port of the EnterpriseConnect Agent. Content Type: application/json Accept: application/json A 200 OK status in the result indicates that information was sent and received by the SAP JCo adapter. Except for errors that occurred, the response is encoded in Base64. You can use the RunMyProcess Freemarker functions or JavaScript SDK to decode the information. Content (examples): The message body is a JSON (JavaScript Object Notation) object whose structure depends on the operation to be executed. Each operation requires a nested JSON object with the outer object specifying the protocol, and the inner object defining the operation-specific parameters, for example: Usually, you will need to specify authentication information for SAP, like SAPUser and SAPPassword in the example. The BAPI BAPI_USER_GET_DETAIL returns details about the user specified as the import parameter. More details on handling specific cases and issues are provided in the following subsections.
For nested structures, the JSON objects can be nested accordingly. Example: Table (relational, record structure) in BAPI: The JSON value must be a JSON array ([]). For nested tables, the JSON arrays can be nested accordingly. They may include simple fields as well as structures. Example: Using Parameter Types In the JSON requests, you can specify all types of BAPI parameters: import parameters, export parameters, table parameters, and changing parameters. It is not mandatory to pass export parameters in the JSON input. If a BAPI has export parameters you do not need, just pass the ones you are interested in with an empty value (empty string "") in the exportParameters of the JSON input. The SAP JCo adapter will return only these parameters and exclude the other ones. It is also not mandatory to pass importParameters or tableParameters. A table parameter can have 1-n table rows, which in turn can have 1-n row fields. Example: Retrieving BAPI Metadata You can request metadata on all known BAPIs from SAP. This is useful, for example, if you are not sure whether a specific BAPI is exposed by SAP or which parameters are required, or if there is no documentation for custom BAPIs. To obtain the metadata for a BAPI, include the getMetaData parameter with value true in the JSON input. There is no need to specify other parameters. The SAP JCo adapter returns the parameter structure of the BAPI as well as all available metadata for each parameter, encoded in Base64. For table parameters, the metadata for each table row and field is returned. Example: Obtaining XML Output By default, the SAP JCo adapter returns its output in JSON format, encoded in Base64. If you want to obtain the result in XML format, include the responseType parameter with the value XML in the JSON input. Specify all the other parameters as required. The SAP JCo adapter returns the output as provided by SAP in XML format with Base64 encoding. Example: Considerations The following sections provide some hints on possibilities and limitations you should consider when working with SAP through the SAP JCo adapter. Supported BAPIs The SAP JCo adapter can handle standard, customized, and custom-built BAPIs. The only requirement is that the BAPI is accessible through SAP JCo and BAPI metadata is available in the JCo repository. The following types of BAPI are supported: - BAPIs for reading data - BAPIs for creating / changing data - BAPIs for mass processing The SAP JCo adapter cannot handle BAPIs which require data entry from a GUI during execution. Before attempting to call a specific BAPI, you should thus make sure that it does not expect such input. Database Modifying BAPIs and Commit/Rollback BAPIs of the "create or modify transaction/master data" nature are referred to as "database modifying BAPIs". As a prudent measure, SAP expects a separate commit or rollback after the execution of such operations. The commit or rollback needs to be carried out in the same session, and usually specific validations are necessary before. Some BAPIs handle a commit and/or rollback by themselves. Executing such BAPIs from RunMyProcess DigitalSuite is straightforward. Technically, the SAP JCo adapter can execute a commit or rollback by means of the following BAPIs. BAPI_TRANSACTION_COMMIT - Execute external commit BAPI_TRANSACTION_ROLLBACK - Execute external rollback However, this is not advisable and must be avoided.
If a specific BAPI requires an explicit commit/rollback, the best practice is to write a wrapper with additional logic around the BAPI in SAP and publish it as custom BAPI. Such a custom BAPI can be called safely through the SAP JCo adapter or by any other external application. Execution of Multiple BAPIs As a conscious decision, the SAP JCo adapter does not support the execution of multiple BAPIs in one API call. The reason for this is that DigitalSuite EnterpriseConnect as a whole has a predefined time limit for getting responses to its requests. When this limit is exceeded, the connection is reset and the session lost. Executing multiple BAPIs would increase the probability of such timeouts. If you intend to execute a sequence of multiple BAPIs, we recommend you enclose them in a wrapper in SAP and call them as a single, custom BAPI. Simultaneous Calls by Multiple Users The SAP JCo adapter and DigitalSuite EnterpriseConnect as a whole can handle simultaneous API calls by multiple users, ensuring data consistency and integrity.
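The inline examples referenced above are not reproduced in this page. As a hedged sketch assembled from the parameter names that are mentioned here (SAPUser, SAPPassword, importParameters, exportParameters, getMetaData, responseType), a request body for BAPI_USER_GET_DETAIL might look roughly like the following; the outer protocol key "SAPJCO31", the "BAPIName" key, and the USERNAME import parameter are assumptions, not names confirmed by this documentation:

{
  "SAPJCO31": {
    "SAPUser": "myuser",
    "SAPPassword": "secret",
    "BAPIName": "BAPI_USER_GET_DETAIL",
    "responseType": "JSON",
    "importParameters": {
      "USERNAME": "JDOE"
    },
    "exportParameters": {
      "ADDRESS": ""
    }
  }
}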
https://docs.runmyprocess.com/Components/EnterpriseConnect/Adapters/SAP_JCo_Adapter/
2021-09-17T04:17:08
CC-MAIN-2021-39
1631780054023.35
[array(['/images/Components/EnterpriseConnect/jco-communication.png', 'communication'], dtype=object) ]
docs.runmyprocess.com
There will be situations where you will want to verify the policies that have been enforced on the iOS devices in your Organization. In such situations follow the steps given below to identify the policies that have been enforced on an iOS device as opposed to getting the policy details from device details page in the device management console. - Tap Settings > General > Profiles or Profiles & Device Management. - Tap the respective profile you configured to run with the WSO2 IoT Server that is under MOBILE DEVICE MANAGEMENT. Example: In this scenario tap WSO2QA Company Mobile Device Management. Tap Restrictions to view the restrictions that have been enforced on the device via the WSO2 IoT Server. The restrictions that have been enforced on the device via WSO2 IoT Server will be shown to you. In this example, the restrictions policy has been enforced on the device. For more information on the WSO2 IoT Server policy enforcement criteria, see Key concepts.
https://docs.wso2.com/display/IOTS331/Verifying+Policies+Applied+on+an+iOS+Device
2021-09-17T04:43:47
CC-MAIN-2021-39
1631780054023.35
[]
docs.wso2.com
MessageSecurityException Class Definition

Important: Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Represents an exception that occurred when there is something wrong with the security applied on a message.

public ref class MessageSecurityException : System::ServiceModel::CommunicationException

public class MessageSecurityException : System.ServiceModel.CommunicationException

[System.Serializable]
public class MessageSecurityException : System.ServiceModel.CommunicationException

type MessageSecurityException = class
    inherit CommunicationException

[<System.Serializable>]
type MessageSecurityException = class
    inherit CommunicationException

Public Class MessageSecurityException
Inherits CommunicationException

Inheritance: CommunicationException → MessageSecurityException
Attributes: Serializable

Remarks

An example where this exception happens is if signature verification fails. This exception usually happens during application message exchange (when security context is fully established). In addition, it can occur while establishing a security session on top of the initial security context.
https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.security.messagesecurityexception?view=dotnet-plat-ext-5.0
2021-09-17T05:21:10
CC-MAIN-2021-39
1631780054023.35
[]
docs.microsoft.com
RSKderiveseapressure.m

Input

-Required- RSK

-Optional- patm: atmospheric pressure for calculating the sea pressure, default is 10.1325 dbar.

Output

RSK

This function calculates sea pressure from pressure and atmospheric pressure and then adds it to the data field in the RSK structure. This function requires an RSK structure that contains pressure data. The patm argument is the atmospheric pressure used to calculate the sea pressure. A custom value can be used; otherwise, the default is to retrieve the value stored in the parameters field, or to assume it is 10.1325 dbar if the parameters field is unavailable. Starting from v3.0.0, the function also supports a variable patm input as a vector; in that case, the input RSK must not have a profile structure, and the input vector must have the same length as the RSK samples. If there is already a sea pressure channel present, the function replaces the values with the new calculation of sea pressure based on the values currently in the pressure column.
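The underlying arithmetic is simply total pressure minus atmospheric pressure. RSKtools itself is a MATLAB toolbox; the following is only a language-neutral sketch of that calculation in Python (function and variable names are made up for illustration), including the vector patm case described above.

import numpy as np

def derive_sea_pressure(pressure_dbar, patm=10.1325):
    # patm may be a scalar or a vector with one value per pressure sample.
    pressure = np.asarray(pressure_dbar, dtype=float)
    return pressure - np.asarray(patm, dtype=float)

print(derive_sea_pressure([20.0, 21.5, 23.0]))            # default atmospheric pressure
print(derive_sea_pressure([20.0, 21.5], [10.10, 10.15]))  # per-sample atmospheric pressure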
https://docs.rbr-global.com/rsktools/process/calculators/rskderiveseapressure-m
2021-09-17T02:59:11
CC-MAIN-2021-39
1631780054023.35
[]
docs.rbr-global.com
RSKreadprofiles.m Input -Required- RSK -Optional- profile: [ ] (all profiles, default). direction: down, up or both. Output RSK This function retrieves data from the rsk file and segmented into casts and stores it in the data field. This function requires an RSK structure that contains a profiles field with profiling metadata. The data field elements are segmented based on the metadata in the profiles field; this field is organized by upcast and downcast. Each cast direction contains vectors, tstart and tend, which are the start time and end time of each profile. From v2.2.0, the segmented data field also adds rsk.data.direction and rsk.data.profilenumber field to indicate cast direction and profile sequence. If the direction argument is up or down, the data field contains an element with all the samples between each set of tstart to tend in the upcast or downcast field. When direction = both, the data field elements are populated in chronological order from tstart to tend from both upcast and downcast, with the data elements alternating between downcasts and upcasts. This function also adds an order field which describes the cast direction order and originalindex field which links between the metadata index in the profiles field and data field element. If you select the first five profiles ( profile = 1:5) of upcasts, it will use the first five sets of tstart and tend from the RSK.profiles.upcast entries, as shown below. >> rsk = RSKopen('file.rsk'); >> datestr(rsk.profiles.upcast.tstart(1:5)) ans = 02-Jan-2000 18:27:12 02-Jan-2000 18:35:05 02-Jan-2000 18:42:31 02-Jan-2000 18:49:00 02-Jan-2000 18:54:33 >> datestr(rsk.profiles.upcast.tend(1:5)) ans = 02-Jan-2000 18:33:18 02-Jan-2000 18:41:17 02-Jan-2000 18:42:57 02-Jan-2000 18:53:16 02-Jan-2000 18:59:16 >> rsk = RSKreadprofiles(rsk, 'profile', [1:5], 'direction', 'up') rsk = dbInfo: [1x1 struct] instrumentChannels: [3x1 struct] channels: [3x1 struct] epochs: [1x1 struct] schedules: [1x1 struct] deployments: [1x1 struct] instruments: [1x1 struct] appSettings: [1x1 struct] ranging: [3x1 struct] parameters: [1x1 struct] regionCast: [28x1 struct] region: [42x1 struct] profiles: [1x1 struct] data: [1x5 struct] If users prefer to split time series data in existing Matlab rsk structure into profiles, instead of directly reading profiles from the disk. RSKtimeseries2profiles is recommended.
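As a conceptual illustration of the segmentation described above (not the MATLAB implementation itself), the following Python sketch shows how samples could be assigned to casts from per-cast tstart/tend pairs; names and structures here are illustrative only.

import numpy as np

def split_into_casts(tstamp, values, tstart, tend):
    # tstamp/values: the continuous time series; tstart/tend: one entry per cast.
    casts = []
    for t0, t1 in zip(tstart, tend):
        mask = (tstamp >= t0) & (tstamp <= t1)
        casts.append({"tstamp": tstamp[mask], "values": values[mask]})
    return casts

t = np.linspace(0.0, 100.0, 1001)
v = np.sin(t)
casts = split_into_casts(t, v, tstart=[10.0, 40.0], tend=[20.0, 55.0])
print(len(casts), len(casts[0]["tstamp"]))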
https://docs.rbr-global.com/rsktools/read/rskreadprofiles-m
2021-09-17T04:54:23
CC-MAIN-2021-39
1631780054023.35
[array(['/rsktools/files/6554547/10092702/17/1593461212984/Profiles+structure+Copy+Copy.png', 'Profiles structure Copy Copy'], dtype=object) ]
docs.rbr-global.com
Activity Stream Handy when: The Activity Stream-module is perfect for showing what is happening in this space. The results can be limited by setting a label-filter. Displays created and edited content in a space in chronological order. Using the Activity Stream-module To add the Activity Stream-module to a space homepage: - Click to edit the Space Home or the Content Layout Macro on a page. - Click to add a module and choose Activity Stream. - Set the parameters to your liking and save. - Save the space homepage. Preview: Parameters Parameters are options that you can set to control the content or format of the macro output.
https://docs.refined.com/display/RTCC/Activity+Stream
2021-09-17T03:49:25
CC-MAIN-2021-39
1631780054023.35
[]
docs.refined.com
The Custom expression allows you to write custom HLSL shader code operating on an arbitrary number of inputs and outputting the result of the operation. Add as many inputs as you need to the Inputs array, and name them. You can then write code in the Code property. You can type either a full function body with return statements as shown in the example, or a simple expression such as Input.bgr. You must also specify the output data type in OutputType. Here is the code that was used above so that you can try out the Custom node for yourself.
https://docs.unrealengine.com/4.26/en-US/RenderingAndGraphics/Materials/ExpressionReference/Custom/
2021-09-17T05:13:47
CC-MAIN-2021-39
1631780054023.35
[]
docs.unrealengine.com
DeleteAssociation

Disassociates the specified Amazon Systems Manager document (SSM document) from the specified instance. If you created the association by using the Targets parameter, then you must delete the association by using the association ID. When you disassociate a document from an instance, it doesn't change the configuration of the instance. To change the configuration state of an instance after you disassociate a document, you must create a new document with the desired configuration and associate it with the instance. For more information about using this API in one of the language-specific Amazon SDKs, see the following:
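As a quick sketch, the operation can be called from Python with boto3 (the AWS SDK for Python); the association ID, document name, instance ID, and region below are placeholders, not values from this page.

import boto3

ssm = boto3.client("ssm", region_name="cn-north-1")

# Delete by association ID (required when the association was created with the Targets parameter) ...
ssm.delete_association(AssociationId="11111111-2222-3333-4444-555555555555")

# ... or by document name and instance ID.
ssm.delete_association(Name="My-SSM-Document", InstanceId="i-0abcd1234efgh5678")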
https://docs.amazonaws.cn/systems-manager/latest/APIReference/API_DeleteAssociation.html
2021-09-17T03:04:24
CC-MAIN-2021-39
1631780054023.35
[]
docs.amazonaws.cn
Verifying the root of trust This topic is intended for users who are using a third-party key management service, and need to build their own attestation document validation processes. This topic provides a detailed overview of the entire Nitro Enclaves attestation flow. It also discusses what is generated by the AWS Nitro system when an attestation document is requested, and explains how a key management service should process an attestation document. Topics Attestation in the Nitro Enclaves world The purpose of attestation is to prove that an enclave is a trustworthy entity, based on the code and configuration that is running within a particular enclave. The root of trust for the enclave resides within the AWS Nitro system, which provides attestation documents to the enclave. The root of trust component for the attestation is the Nitro Hypervisor, which contains information about the enclave, such as its platform configuration registers (PCRs). The Nitro Hypervisor is able to produce an attestation document that contains details of the enclave, including the enclave signing key, a hash of the enclave image, a hash of the parent instance ID, and a hash of the ARN of the attached IAM role. Attestation documents are signed by the AWS Nitro Attestation Public Key Infrastructure (PKI), which includes a published certificate authority that can be incorporated into any service. The attestation document An enclave can request an attestation document from the Nitro hypervisor that it can use to verify its identify with an external service. The attestation document that is generated by the Nitro system is encoded in Concise Binary Object Representation (CBOR), and it is signed using CBOR Object Signing and Encryption (COSE). Attestation document specification The following shows the structure of an attestation document. AttestationDocument = { module_id: text, ; issuing Nitro hypervisor module ID timestamp: uint .size 8, ; UTC time when document was created, in ; milliseconds since UNIX epoch digest: digest, ; the digest function used for calculating the ; register values pcrs: { + index => pcr }, ; map of all locked PCRs at the moment the ; attestation document was generated certificate: cert, ; the infrastucture certificate used to sign this ; document, DER encoded cabundle: [* cert], ; issuing CA bundle for infrastructure certificate ? public_key: user_data, ; an optional DER-encoded key the attestation ; consumer can use to encrypt data with ? user_data: user_data, ; additional signed user data, defined by protocol ? nonce: user_data, ; an optional cryptographic nonce provided by the ; attestation consumer as a proof of authenticity } cert = bytes .size (1..1024) ; DER encoded certificate user_data = bytes .size (0..1024) pcr = bytes .size (32/48/64) ; PCR content index = 0..31 digest = "SHA384" The enclave and the service that wants to attest the enclave first need to agree on a common protocol to follow. The optional parameters in the attestation document ( public_key, user_data, and nonce) allow the enclave and the entity to set up a variety of protocols depending on the security properties that the service and the enclave want to guarantee. Services that rely on attestation need to define a protocol that can meet those guarantees, and the enclave software needs to agree to and follow these protocols. An enclave wishing to attest to a specific service first has to open a TLS connection to that service and verify that the service's certificates are valid. 
These certificates must then be included in the enclave during the enclave image file build. A TLS session is not absolutely required, but it does provide integrity of data between the enclave and the third-party service. For more information about the optional fields in the attestation document, see the Nitro Enclaves Attestation Process Attestation document validation When you request an attestation document from the Nitro Hypervisor, you receive a binary blob that contains the signed attestation document. The signed attestation document is a CBOR-encoded, COSE-signed (using the COSE_Sign1 signature structure) object. The overall validation process includes the following steps: Decode the CBOR object and map it to a COSE_Sign1 structure. Extract the attestation document from the COSE_Sign1 structure. Verify the certificate's chain. Ensure that the attestation document is properly signed. Attestation documents are signed by the AWS Nitro Attestation PKI, which includes a root certificate for the commercial AWS partitions. The root certificate can be downloaded from 8cf60e2b2efca96c6a9e71e851d00c1b6991cc09eadbe64a6a1d1b1eb9faff7c The root certificate is based on an AWS Certificate Manager Private Certificate Authority (ACM PCA) private key and it has a lifetime of 30 years. The subject of the PCA has the following format. CN=aws.nitro-enclaves, C=US, O=Amazon, OU=AWS COSE and CBOR Usually, the COSE_Sign1 signature structure is used when only one signature is going to be placed on a message. The parameters dealing with the content and the signature are placed in the protected header rather than having the separation of COSE_Sign. The structure can be encoded as either tagged or untagged, depending on the context it will be used in. A tagged COSE_Sign1 structure is identified by the CBOR tag 18. The CBOR object that carries the body, the signature, and the information about the body and signature is called the COSE_Sign1 structure. The COSE_Sign1 structure is a CBOR array. The array includes the following fields. [ protected: Header, unprotected: Header, payload: This field contains the serialized content to be signed, signature: This field contains the computed signature value. ] In the context of an attestation document, the array includes the following. 18(/* COSE_Sign1 CBOR tag is 18 */ {1: -35}, /* This is equivalent with {algorithm: ECDS 384} */ {}, /* We have nothing in unprotected */ $ATTESTATION_DOCUMENT_CONTENT /* Attestation Document */, signature /* This is the signature */ ) Semantical validity An attestation document will always have its CA bundle in the following order. [ ROOT_CERT - INTERM_1 - INTERM_2 .... - INTERM_N] 0 1 2 N - 1 Keep this ordering in mind, as some existing tools, such as Java’s CertPath from Java PKI API Programmer’s Guide To validate the certificates, start from the attestation document CA bundle and generate the required chain, Where TARGET_CERT is the certificate in the attestation document. [TARGET_CERT, INTERM_N, ..... , INTERM_2, INTERM_1, ROOT_CERT] For more information about the optional fields in the attestation document, see the Nitro Enclaves Attestation Process Certificate validity For all of the certificates in the chain, you must ensure that the current date falls within the validity period specified in the certificate. Certificate chain validity In general, a chain of multiple certificates might be needed, comprising a certificate of the public key owner signed by one CA, and zero or more additional certificates of CAs signed by other CAs. 
Such chains, called certification paths, are required because a public key user is only initialized with a limited number of assured CA public keys. Certification path validation procedures for the internet PKI are based on the algorithm supplied in X.509. Certification path processing verifies the binding between the subject distinguished name and/or subject alternative name and subject public key. The binding is limited by constraints that are specified in the certificates that comprise the path and inputs that are specified by the relying party. The basic constraints and policy constraint extensions allow the certification path processing logic to automate the decision making process. CRL must be disabled when doing the validation. Using Java, starting from the root path and the generated certificate chain, the chain validation is as follows. validateCertsPath(certChain, rootCertficate) { /* The trust anchor is the root CA to trust */ trustAnchors.add(rootCertificate); /* We need PKIX parameters to specify the trust anchors * and disable the CRL validation */ validationParameters = new PKIXParameters(trustAnchors); certPathValidator = CertPathValidator.getInstance(PKIX); validationParameters.setRevocationEnabled(false); /* We are ensuring that certificates are chained correctly */ certPathValidator.validate(certPath, validationParameters); }
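As a minimal sketch of the first two steps above (decoding the CBOR object and extracting the attestation document), the following Python fragment uses the cbor2 library; it deliberately omits the COSE signature check and the X.509 chain validation (steps 3 and 4), which require a COSE implementation and the published AWS Nitro root certificate, so it is not a complete verifier.

import cbor2

def parse_attestation_document(cose_sign1_bytes):
    decoded = cbor2.loads(cose_sign1_bytes)
    # The COSE_Sign1 structure may arrive tagged (CBOR tag 18) or untagged.
    if isinstance(decoded, cbor2.CBORTag):
        decoded = decoded.value
    protected, unprotected, payload, signature = decoded
    # The payload is the CBOR-encoded attestation document (module_id, pcrs,
    # certificate, cabundle, optional public_key/user_data/nonce, ...).
    attestation_doc = cbor2.loads(payload)
    return attestation_doc, signature

# attestation_doc["certificate"] and attestation_doc["cabundle"] would then be used
# to build the chain [TARGET_CERT, INTERM_N, ..., INTERM_1, ROOT_CERT] and validate
# it against the published root certificate, with CRL checking disabled.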
https://docs.aws.amazon.com/enclaves/latest/user/verify-root.html
2021-09-17T05:13:45
CC-MAIN-2021-39
1631780054023.35
[]
docs.aws.amazon.com
When you open this tab, a list of products appears; check the box next to a product to receive an alert notification when a new release is available. In the example below, several products show a checkmark in the box to the left of the product name. When the Update button is pressed, the owner of that product will receive an alert when there is an update released for it. Be sure we have your contact Email listed in My Bamboo so that the right person receives the Email!
https://docs.bamboosolutions.com/document/sign_up_for_alerts/
2021-09-17T04:34:39
CC-MAIN-2021-39
1631780054023.35
[array(['/wp-content/uploads/2017/06/MyAlerts.jpg', 'MyAlerts.jpg'], dtype=object) ]
docs.bamboosolutions.com
public enum ThreeWayMergeResultType extends Enum<ThreeWayMergeResultType> The various types of result that can be produced by the Merger. Some of these enumeration values follow those provided in the ConcurrentMergeResultType enumeration used in the n-way merge operations. The additional values in this enumeration are specific to three-way processing. The product documentation describes these formats in further detail, the configuration and processes involved in producing them, use-cases and guidance on the choice of format. ConcurrentMergeResultType compareTo, equals, getDeclaringClass, hashCode, name, ordinal, toString, valueOf getClass, notify, notifyAll, wait, wait, wait public static final ThreeWayMergeResultType DELTAV2 public static final ThreeWayMergeResultType ANALYZED_DELTAV2 public static final ThreeWayMergeResultType RULE_PROCESSED_DELTAV2 public static final ThreeWayMergeResultType SIMPLIFIED_DELTAV2 public static final ThreeWayMergeResultType SIMPLIFIED_RULE_PROCESSED_DELTAV2 public static final ThreeWayMergeResultType THREE_WAY_OXYGEN_TRACK_CHANGES A result format with oXygen Author-mode track changes processing instructions. The result will contain change author information derived from the input version identifiers used in the merge methods or setAncestor/addVersion methods. The result is a symmetrical representation of the changes from the perspective of the ancestor version. It is also possible to generate track-change information with the ThreeWayMergeResultType.TWO_WAY_RESULT and ThreeWayMergeResultType.RULE_PROCESSED_TWO_WAY_RESULT values, however these generate a different result that is asymmetrical in nature with respect to the various merge inputs. Please see the documentation for further details and examples. When this setting is used further configuration is possible through the ThreeWayMerge.setThreeWayTrackChangeAttributeMode method. public static final ThreeWayMergeResultType TWO_WAY_RESULT A three to two way merge result. This form of result presents a three way merge as a two way result that may be more familiar to users of systems such as accept/reject. In some cases information from the ancestor version is lost. Rather than being from the perspective of the ancestor version, the concept of add and delete is seen from the perspective of one of the versions, the 'local' version. This is supplied as the second argument of the merge methods or the first version supplied using addVersion methods. The third merge argument, or second supplied using addVersion, is seen as the other branch which is being merged from and in many version control systems is referred to as the 'remote branch', the 'other branch' or 'their' branch. This result type can be represented in various ways. As well as the delta format used for three-way and n-way change it is also possible to the accept/reject interfaces associated with XML editor's compare and/or track-change systems. The ThreeWayMerge.setResultFormat method can be used to further control the representation. public static final ThreeWayMergeResultType RULE_PROCESSED_TWO_WAY_RESULT A three to two way merge result with rule processing. The processing to convert a raw three way merge result is the same as that described for ThreeWayMergeResultType.TWO_WAY_RESULT. Similarly the rule processing that is applied is the same process used with the ThreeWayMergeResultType.RULE_PROCESSED_DELTAV2 result type. 
public static ThreeWayMergeResultType[] values() for (ThreeWayMergeResultType c : ThreeWayMergeResultType.values()) System.out.println(c); public static ThreeWayMergeResultType valueOf(String name) name- the name of the enum constant to be returned. IllegalArgumentException- if this enum type has no constant with the specified name NullPointerException- if the argument is null
https://docs.deltaxml.com/dita-merge/current/docs/api/com/deltaxml/mergecommon/config/ThreeWayMergeResultType.html
2021-09-17T04:02:38
CC-MAIN-2021-39
1631780054023.35
[]
docs.deltaxml.com
Date: Wed, 16 Sep 2020 17:39:31 +0200 From: xpetrl <[email protected]> To: [email protected] Subject: Re: move zfs geli encrypt mirror to unencrypted Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help >>. > We don't need the encrypted partition anymore. Your procedure is really convenient, thank you. Do I have to take care about geli in some way? Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=249799+0+archive/2020/freebsd-questions/20200920.freebsd-questions
2021-09-17T05:21:25
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
The frontend is just static HTML, JS, and CSS files, that can be optimized by being deployed on a CDN server at a low cost*. Make sure you have deployed the backend first. *If you plan to use the frontend with the multi-tenant with subdomain strategy, you will have to use a server that allows wildcard subdomains. For the other two cases - single-tenant and multi-tenant - you can basically use any static hosting provider.
https://docs.scaffoldhub.io/deployment/frontend
2021-09-17T04:32:25
CC-MAIN-2021-39
1631780054023.35
[]
docs.scaffoldhub.io
Upgrade Guide - Scylla 4.4 to 4.5 for Red Hat Enterprise Linux 7/8 or CentOS 7/8

Upgrading your Scylla version is a rolling procedure that does not require a full cluster shutdown. For each of the nodes in the cluster, serially (i.e. one at a time), you will:

- Check the cluster's schema
- Drain the node and backup the data
- Backup the configuration file
- Stop the Scylla service
- Download and install new Scylla packages
- Start the Scylla service

Check the cluster schema

Make sure that all nodes have the schema synched prior to upgrade, as any schema disagreement between the nodes causes the upgrade to fail.

nodetool describecluster

Drain node and backup the data

Before any major procedure, like an upgrade, it is highly recommended to backup the data to an external backup device. When the upgrade is complete (for all nodes), remove the snapshot by running nodetool clearsnapshot -t <snapshot>, or you risk running out of disk space.

Backup configuration file

sudo cp -a /etc/scylla/scylla.yaml /etc/scylla/scylla.yaml.backup-src

Stop Scylla

sudo systemctl stop scylla-server
sudo service scylla-server stop
docker exec -it some-scylla supervisorctl stop scylla (without stopping some-scylla container)

Download and install the new release

Before upgrading, check what Scylla version you are currently running with rpm -qa | grep scylla-server. You should use the same version in case you want to rollback the upgrade. If you are not running a 4.4.x version, stop right here! This guide only covers 4.4.x to 4.5.y upgrades.

To upgrade:

Update the Scylla rpm repo to 4.5

Install the new Scylla version

sudo yum clean all
sudo yum update scylla\* -y

Note: Alternator users upgrading from Scylla 4.0 to 4.1 need to set a default isolation level.

Start the node

sudo systemctl start scylla-server
sudo service scylla-server start
docker exec -it some-scylla supervisorctl start scylla (with some-scylla container already running)

Validate

Check cluster status with nodetool status and make sure all nodes, including the one you just upgraded, are in UN status.

Use curl -X GET "" to check the Scylla version. Validate that the version matches the one you upgraded to.

Use journalctl _COMM=scylla to check there are no new errors in the log. Check again after two minutes, to validate no new issues are introduced.

Once you are sure the node upgrade is successful, move to the next node in the cluster.

Rollback (apply only to the nodes that you upgraded to 4.5)

Scylla rollback is a rolling procedure that does not require a full cluster shutdown. For each of the nodes you rollback to 4.4, you will:

- Drain the node and stop Scylla
- Retrieve the old Scylla packages
- Restore the configuration file
- Reload the systemd configuration
- Restart the Scylla service
- Validate the rollback success

Apply the following procedure serially on each node. Do not move to the next node before validating the rollback was successful and that the node is up and running with the old version.

Rollback steps

Gracefully shutdown Scylla

nodetool drain
sudo systemctl stop scylla-server
sudo service scylla-server stop
docker exec -it some-scylla supervisorctl stop scylla

Download and install the new release

Remove the old repo file.
sudo rm -rf /etc/yum.repos.d/scylla.repo

Update the Scylla rpm repo to 4.4

Install

sudo yum clean all
sudo rm -rf /var/cache/yum
sudo yum remove scylla\*tools-core
sudo yum downgrade scylla\* -y
sudo yum install scylla

Restore the configuration file

sudo rm -rf /etc/scylla/scylla.yaml
sudo cp -a /etc/scylla/scylla.yaml.backup-src /etc/scylla/scylla.yaml

Start the node

sudo systemctl start scylla-server
sudo service scylla-server start
docker exec -it some-scylla supervisorctl start scylla (with some-scylla container already running)

Validate

Check the upgrade instruction above for validation. Once you are sure the node rollback is successful, move to the next node in the cluster. Keep in mind that the version you want to see on your node is the old version, which you noted at the beginning of the procedure.
https://docs.scylladb.com/upgrade/upgrade-opensource/upgrade-guide-from-4.4-to-4.5/upgrade-guide-from-4.4-to-4.5-rpm/
2021-09-17T02:56:31
CC-MAIN-2021-39
1631780054023.35
[]
docs.scylladb.com
Date: Sun, 13 Sep 2020 19:49:35 +0100 From: Mike Clarke <[email protected]> To: FreeBSD questions <[email protected]> Subject: Problem installing Windows 7 guest with bhyve Message-ID: <1824311.EnoYUHA41c@curlew> Next in thread | Raw E-Mail | Index | Archive | Help I've followed these steps to use vm-bhyve to install a Windows 7 guest on my FreeBSD 12.1 system based on the instructions at pkg install vm-bhyve bhyve-firmware zfs create home/NOBACKUP/vm Added vm_enable="YES" and vm_dir="zfs:home/NOBACKUP/vm" to /etc/rc.conf cp /usr/local/share/examples/vm-bhyve/* /nobackup/vm/.templates vm switch create public vm switch add public re0 Placed a copy of the Windows 7 installation ISO into /nobackup/vm/.iso vm create -t windows -s 40G win7 vm install win7 win7.iso vncviewer :5900 The Windows installer started , although it did not respond to my mouse I was able to navigate through the first stages of setup using the keyboard as far as the disk setup screen where I selected 'Disk 0 Unallocated Space' and clicked 'Next'. The installation was then cancelled with the message "Windows could not format a partition on disk 0. The error occurred while preparing the partition selected for installation. Error code: 0x80070057" -- Mike Clarke Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=40010+0+archive/2020/freebsd-questions/20200920.freebsd-questions
2021-09-17T03:51:00
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
In a call or live broadcast, you may need to play custom sound or music to all the users in the channel. For example, adding sound effects in a game, or playing background music. The Agora Web SDK supports publishing and automatically mixing multiple audio tracks to create and play custom sound or music.

Before proceeding, ensure that you have read the quickstart guides and implemented basic real-time audio and video functions in your project.

To play a sound effect or background music, create an audio track from a local or online audio file and publish it together with the audio track from the microphone. The SDK provides createBufferSourceAudioTrack to read a local or online audio file and create an audio track object ( BufferSourceAudioTrack).

// Create an audio track from an online music file
const audioFileTrack = await AgoraRTC.createBufferSourceAudioTrack({
  source: "",
});
console.log("create audio file track success");

If you call audioFileTrack.play() or client.publish([audioFileTrack]) immediately after creating the audio track, your users will not hear anything. This is because the SDK processes the audio track created from an audio file differently from the microphone audio track ( MicrophoneAudioTrack).

MicrophoneAudioTrack

For the microphone audio track, the SDK keeps sampling the latest audio data ( AudioBuffer) from the microphone.

- When you call play, the SDK sends the audio data to the local playback module ( LocalPlayback), then the local user can hear the audio.
- When you call publish, the SDK sends the audio data to Agora SD-RTN, then the remote users can hear the audio.

Once the microphone audio track is created, the sampling continues until close is called, and then the audio track becomes unavailable.

BufferSourceAudioTrack

For an audio file, the SDK cannot sample the audio data directly, and instead reads the file to achieve similar effects, such as the processing phase in the previous figure. Sampling is different from file reading: the file reading is controlled through the methods of BufferSourceAudioTrack. See Control the playback for details.

For the audio track created from an audio file, the SDK does not read the file by default. Call BufferSourceAudioTrack.startProcessAudioBuffer() to start reading and processing the audio data, and then call play and publish for the local and remote users to hear the audio. To start audio mixing, publish audioFileTrack and the microphone audio track together.

const microphoneTrack = await AgoraRTC.createMicrophoneAudioTrack();
// Start processing the audio data from the audio file
audioFileTrack.startProcessAudioBuffer();
// Publish audioFileTrack and microphoneTrack together
await client.publish([microphoneTrack, audioFileTrack]);
// To stop audio mixing, stop processing the audio data
audioFileTrack.stopProcessAudioBuffer();
// Or unpublish audioFileTrack
await client.unpublish([audioFileTrack]);

BufferSourceAudioTrack provides the following methods to control the playback of the audio file:

- startProcessAudioBuffer: Starts reading the audio file and processing data. This method also supports setting loop times and the playback starting position.
- pauseProcessAudioBuffer: Pauses processing the audio data to pause the playback.
- resumeProcessAudioBuffer: Resumes processing the audio data to resume the playback.
- stopProcessAudioBuffer: Stops processing the audio data to stop the playback.
- seekAudioBuffer: Jumps to a specified position.

After the processing starts, if you have called play and publish, calling the above methods affects both the local and remote users.
// Pause processing the audio data
audioFileTrack.pauseProcessAudioBuffer();
// Resume processing the audio data
audioFileTrack.resumeProcessAudioBuffer();
// Stop processing the audio data
audioFileTrack.stopProcessAudioBuffer();
// Start processing the audio data in loops
audioFileTrack.startProcessAudioBuffer({ loop: true });
// Get the current playback position (seconds)
audioFileTrack.getCurrentTime();
// The duration of the audio file (seconds)
audioFileTrack.duration;
// Jump to the playback position of 50 seconds
audioFileTrack.seekAudioBuffer(50);

If the local user does not need to hear the audio file, call stop() to stop the local playback, which does not affect the remote users.

See the API reference for createBufferSourceAudioTrack, BufferSourceAudioTrack, and publish for further details.
https://docs.agora.io/en/Video/audio_effect_mixing_web_ng?platform=Web
2021-09-17T04:33:56
CC-MAIN-2021-39
1631780054023.35
[]
docs.agora.io
The LevelWindow class Class to store level/window values. More... #include <mitkLevelWindow.h> The LevelWindow class Class to store level/window values. Current min and max value are stored in m_LowerWindowBound and m_UpperWindowBound. m_DefaultLevel amd m_DefaultWindow store the initial Level/Window values for the image. m_DefaultRangeMin and m_DefaultRangeMax store the initial minrange and maxrange for the image. The finite maximum and minimum of valid value range is stored in m_RangeMin and m_RangeMax. If deduced from an image by default the minimum or maximum of it statistics is used. If one of these values are infinite the 2nd extrimum (which is guaranteed to be finite), will be used. See documentation of SetAuto for information on how the level window is initialized from an image. Definition at line 44 of file mitkLevelWindow.h. confidence tests if m_LowerWindowBound > m_UpperWindowBound, then the values for m_LowerWindowBound and m_UpperWindowBound will be exchanged if m_LowerWindowBound < m_RangeMin, m_LowerWindowBound will be set to m_RangeMin. m_UpperWindowBound will be decreased the same as m_LowerWindowBound will be increased, but minimum value for m_UpperWindowBound is also m_RangeMin. if m_UpperWindowBound > m_RangeMax, m_UpperWindowBound will be set to m_RangeMax. m_LowerWindowBound will be increased the same as m_UpperWindowBound will be decreased, but maximum value for m_LowerWindowBound is also m_RangeMax. method returns the default level value for the image Get the default range minimum value Get the default range maximum value returns the default window size for the image method that returns the level value, i.e. the center of the current grey value interval Returns the minimum Value of the window returns the size of the grey value range ! Get the range maximum value Get the range minimum value Returns the upper window bound value of the window returns the current window size, i.e the range size of the current grey value interval Shows if floating values are accepted. non equality operator implementation that allows to compare two level windows implementation necessary because operator made private in itk::Object equality operator implementation that allows to compare two level windows Resets the level and the window value to the default values. the default min and max range for image will be reset set the default Bounderies set the default level and window value If a level window is set to fixed, the set and get methods won't accept modifications to the level window settings anymore. This behaviour can be turned of by setting fixed to false; Sets the floating image value. To set the level and the window value Set the range minimum and maximum value sets the window to its maximum Size in scaleRange Set the lower and upper bound of the window default minimum gray value of the window Definition at line 227 of file mitkLevelWindow.h. default maximum gray value of the window Definition at line 232 of file mitkLevelWindow.h. Defines whether the level window settings may be changed after initialization or not. Definition at line 243 of file mitkLevelWindow.h. Image with floating values Definition at line 237 of file mitkLevelWindow.h. lower bound of current window Definition at line 207 of file mitkLevelWindow.h. maximum gray value of the window Definition at line 222 of file mitkLevelWindow.h. minimum gray value of the window Definition at line 217 of file mitkLevelWindow.h. upper bound of current window Definition at line 212 of file mitkLevelWindow.h.
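To make the consistency checks described above concrete, here is a small Python sketch of that clamping logic (MITK itself is C++; the function below is only an illustration that follows the textual description, not the actual implementation).

def clamp_level_window(lower, upper, range_min, range_max):
    # If the bounds are inverted, exchange them.
    if lower > upper:
        lower, upper = upper, lower
    # If the lower bound falls below the range minimum, raise it to the minimum and
    # decrease the upper bound by the same amount, but never below the range minimum.
    if lower < range_min:
        shift = range_min - lower
        lower = range_min
        upper = max(upper - shift, range_min)
    # Symmetrically, clamp the upper bound to the range maximum and increase the
    # lower bound by the same amount, but never above the range maximum.
    if upper > range_max:
        shift = upper - range_max
        upper = range_max
        lower = min(lower + shift, range_max)
    return lower, upper

print(clamp_level_window(-50.0, 300.0, 0.0, 255.0))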
https://docs.mitk.org/nightly/classmitk_1_1LevelWindow.html
2021-09-17T04:58:23
CC-MAIN-2021-39
1631780054023.35
[]
docs.mitk.org
The Perlin Network Bounty Programs Terms and Conditions (“Terms“) cover your participation in the Perlin Network Bounty Program, Perlin Ambassadors, Perlin Community Building Activities, Perlin Community Building Competitions and Perlin Network Bug Bounty Program (herein collectively known as the “Program“). These Terms are between you and Perlin Network (“Perlin,” “us” or “we“). By participating in the Program in any manner, you accept these Terms. PROGRAM OVERVIEW The Program enables participants to engage in particular activities (as defined in the relevant Program documentation posted on the Perlin website and official community channels) for a chance to earn rewards in an amount determined by Perlin in its sole discretion (“Bounty“). The decisions made by Perlin regarding Bounties are final and binding on participants. Perlin may change or cancel the Program at any time, for any reason. CHANGES TO THESE TERMS We may change these Terms at any time, for any reason, at our sole discretion. [email protected]. Opting out will not affect any licenses granted to Perlin under the age of 14; Your organization does not allow you to participate in these types of programs; You are currently an employee of Perlin or a Perlin affiliate, or an immediate family (parent, sibling, spouse, or child) or household member of such an employee; Within the six months prior to providing us your Submission you were an employee of Perlin or a Perlin affiliate; You currently (or within six months prior providing to us your Submission) perform services for Perlin or a Perlin affiliate in an external staff capacity, such as agency temporary worker, vendor employee, business guest, or contractor; or You are or were involved in any part of the development, administration, and/or execution of this Program. It is your responsibility to comply with any policies. Perlin disclaims any and all liability or responsibility for disputes arising between an employee and their employer related to this matter. There may be additional restrictions on your ability to enter depending upon your local law. By agreeing to these Terms, you confirm that you are complying with these laws. SUBMISSION OF CONTENT & EVIDENCE OF PROMOTIONAL ACTIVITIES OR MODERATION ACTIVITIES The Program enables users to complete each the following tasks: Translation of of Perlin documentation, materials and content to a language other than English; Produce content in the form of written text, graphic design or video (“Content”), including (but not limited to) posts on social media and/or public forums; Activities specifically aimed at promoting Perlin on social media and/or public sites (“Promotional Activities”), including (but not limited to) participation in public discussion and dissemination of Perlin Content on public forums such as Reddit, Telegram and Discord; Moderation of community discussion (“Moderation Activities”) in relation to Perlin on public forums such as Reddit, Telegram and Discord; and Any other tasks eligible for Bounty rewards as defined by Perlin at its sole discretion in the future. 
If you wish to participate in the completion of any of the above tasks, may submit your Content or supporting evidence of engagement in Promotional Activities or Moderation Activities to Perlin through the following process: Each item of Content or supporting evidence of engagement in Promotional Activities or Moderation Activities submitted to Perlin shall be a “Submission.” Submissions must sent to [email protected] Please include sufficient information and evidence to allow Perlin to assess your Content or validate your Promotional Activities or Moderation Activities. Depending on the detail of your Submission, Perlin may award a Bounty of varying scale. High quality Content and clear evidence of engagement in Promotional Activities or Moderation Activities are more likely to result in Bounties. Those Submissions that do not meet the minimum standards applied by Perlin at its sole discretion Submissions you can provide and potentially be paid a Bounty for. If you submit Content or supporting evidence of engagement in Promotional Activities or Moderation Activities for a product or service that is not covered by the Program at the time you submitted it, you will not be eligible to receive Bounty payments if the product or service is later added to the Program. If we receive multiple items of Content or evidence of Promotional Activities or Moderation Activities relating to the same issue from different parties, the Bounty will be granted to the first eligible Submission. SUBMISSION OF VULNERABILITY REPORTS The Program enables users to submit vulnerabilities and exploitation techniques (“Vulnerabilities“) to Perlin about eligible Perlin products and services (“Products“) for a chance to earn a Bounty. If you believe you have identified a Vulnerability that meets the applicable requirements set forth in the Terms, you may submit it to Perlin through the following process: Each Vulnerability submitted to Perlin shall be a “Submission.” Submissions must be sent to <SUPPORT EMAIL>. In the initial email, specify the Vulnerability details, and specific product version numbers you used to validate your research. Please also include as much of the following information as possible: Type of issue (private/public key exploits, chain splitting events, equihash related exploits etc.) Product and version that contains the bug Any special configuration required to reproduce the issue Step-by-step instructions to reproduce the issue on a fresh install Proof-of-concept or exploit code Impact of the issue, including how an attacker could exploit the issue You must follow Coordinated Vulnerability Disclosure (CVD) when reporting all Vulnerabilities to Perlin. Submissions that do not follow CVD may not be eligible for Bounties and not following CVD could disqualify you from participating in the Program in the future. Depending on the detail of your Submission, Perlin may award a Bounty of varying scale. Well-written reports and functional exploits are more likely to result in Bounties. Those Submissions that do not meet the minimum bar described above. If we receive multiple vulnerability or bug reports for the same issue from different parties, the Bounty will be granted to the first eligible Submission. If a duplicate report provides new information that was previously unknown to Perlin,). CONFIDENTIALITY OF SUBMISSIONS OF VULNERABILITY REPORTS & RESTRICTIONS ON DISCLOSURE Protecting stakeholders is Perlin’s highest priority. We endeavor to address each Vulnerability report in a timely manner. 
We require that Bounty Submissions remain confidential and cannot be disclosed to third parties or as part of paper reviews or conference submissions until you are notified by Perlin in writing that the vulnerability has been fixed. You can make available high-level descriptions of your research and non-reversible demonstrations after the Vulnerability is fixed. We require that detailed proof-of-concept exploit code and details that would make attacks easier on customers be withheld for 30 days after the Vulnerability is fixed. Perlin LICENSE Perlin is not claiming any ownership rights to your Submission. However, by providing any Submission to Perlin, you: grant Perlin, trade shows, and screen shots of the Submission in press releases) in all media (now known or later developed); agree to sign any documentation that may be required for us or our designees to confirm the rights you granted above; understand and acknowledge that Perlin Perlin. SUBMISSION REVIEW PROCESS After a Submission is sent to Perlin Network in accordance with Sections 4 and 5 (above), Perlin staff will review the Submission and validate its eligibility. The review time will vary depending on the complexity and completeness of your Submission, as well as on the number of Submissions we receive. Perlin retains sole discretion in determining which Submissions are qualified. BOUNTY PAYMENTS The decisions made by Perlin Network regarding Bounties are final and binding on participants. If we have determined that your Submission is eligible for a Bounty, Perlin to use your name and likeness without pursuing future claims).; if you are eligible for this Program but are at least 14 years old). Perlin will have no responsibility for determining the necessity of or for issuing any formal invoices, or for determining, remitting, or withholding any taxes applicable to the Bounty awarded to you. NOTE: For public sector employees (government and education), all Bounties must be awarded directly to your public sector organization and subject to receipt of a gift letter signed by the organization’s ethics officer, attorney, or designated executive/officer responsible for the organization’s gifts/ethics policy. Perlin seeks to ensure that by offering Bounties under this Program, it does not create any violation of the letter or spirit of a participant’s applicable gifts and ethics rules. PUBLIC RECOGNITION Perlin may publicly recognize individuals who have been awarded Bounties. Perlin at it is discretion may recognize you on web properties or other printed materials unless you explicitly ask us not to include your name/username. PRIVACY See the Perlin Perlin MAKES Perlin. MISCELLANEOUS These Terms, the Perlin Privacy Statement, and any applicable Product Program Terms are the entire agreement between you and Perlin for your Participation in the Program. It supersedes any prior agreements between you and Perlin, Perlin does not consider or accept unsolicited proposals or ideas, including without limitation ideas for new products, technologies, promotions, product names, product feedback and product improvements (“Unsolicited Feedback“). If you send any Unsolicited Feedback to Perlin through the Program or otherwise, Perlin makes no assurances that your ideas will be treated as confidential or proprietary. BY SENDING SUBMISSIONS OR OTHERWISE PARTICIPATING IN THE PROGRAM, YOU ARE DEEMED TO AGREE TO AND ACCEPT THESE TERMS.
https://docs.perlinx.finance/perlin-community/legal/terms-and-conditions
2021-09-17T03:03:35
CC-MAIN-2021-39
1631780054023.35
[]
docs.perlinx.finance
Scylla Monitoring 2.3¶ Note You are not reading the most recent version of this documentation. Go to the latest version of Scylla Monitoring Stack Documentation. Scylla Monitoring Stack is a full stack for Scylla monitoring and alerting. The stack contains open source tools including Prometheus and Grafana, as well as custom Scylla dashboards and tooling.
https://docs.scylladb.com/operating-scylla/monitoring/2.3/
2021-09-17T02:57:43
CC-MAIN-2021-39
1631780054023.35
[]
docs.scylladb.com
Using a Shared RadCalendar

Having many date pickers on a page can render a lot of HTML and impact performance negatively. To avoid this problem, the RadDatePicker control can share a single RadCalendar control. To use a shared calendar, drag a RadCalendar control onto the Web Page that contains your RadDatePicker controls. Set the properties of the RadCalendar control to configure the popup. Unlike the embedded popup controls that are automatically created when you are not using a shared calendar, the external RadCalendar control does not inherit any properties (such as Skin or RangeMaxDate and RangeMinDate) from the parent control that uses it. Do not set the AutoPostBack property to True. A popup control cannot work properly if it causes postbacks.

Set the SharedCalendarID property of all RadDatePicker controls to the ID property of the RadCalendar control.

<telerik:RadDatePicker </telerik:RadDatePicker> <telerik:RadCalendar </telerik:RadCalendar>

When you assign the ID of a RadCalendar control as the value of a SharedCalendarID property, it is automatically hidden from view in the rendered Web Page. You do not need to do anything additional to hide it.

Defining a shared popup control at runtime

To define a shared popup control at runtime:

Do not set the Calendar property of RadDatePicker at design time. Add a PlaceHolder for holding the dynamically created popups. In the code behind, create an instance of the shared RadCalendar object and set its properties as per your requirements. Remember that this instance does not inherit any properties from the RadDatePicker control that uses it. Add the instance of the RadCalendar to the PlaceHolder you added at design time. Assign the RadCalendar instance as the value of the SharedCalendar property.

<telerik:RadDatePicker <br /> <telerik:RadDateTimePicker <br /> <telerik:RadTimePicker <asp:PlaceHolder

protected void Page_Load(object sender, EventArgs e)
{
    RadCalendar popupCal = new RadCalendar();
}

Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    Dim popupCal As New RadCalendar()
End Sub
https://docs.telerik.com/devtools/aspnet-ajax/controls/datepicker/functionality/using-shared-radcalendar
2021-09-17T02:52:37
CC-MAIN-2021-39
1631780054023.35
[]
docs.telerik.com
Currently there is only the diff drive kinematics class. This provides functions for forward and inverse kinematics on a differential drive type robot.
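For orientation only, the standard differential drive relationships such a class typically implements look like the Python sketch below; this is a generic textbook formulation, not the ecl_mobile_robot API, and all names are illustrative.

def diffdrive_forward(wheel_radius, wheel_separation, w_left, w_right):
    # Base linear and angular velocity from the two wheel angular rates.
    v = wheel_radius * (w_right + w_left) / 2.0
    omega = wheel_radius * (w_right - w_left) / wheel_separation
    return v, omega

def diffdrive_inverse(wheel_radius, wheel_separation, v, omega):
    # Wheel angular rates needed to achieve a desired base velocity.
    w_right = (v + omega * wheel_separation / 2.0) / wheel_radius
    w_left = (v - omega * wheel_separation / 2.0) / wheel_radius
    return w_left, w_right

print(diffdrive_forward(0.05, 0.30, 10.0, 12.0))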
http://docs.ros.org/en/hydro/api/ecl_mobile_robot/html/
2021-09-17T03:34:29
CC-MAIN-2021-39
1631780054023.35
[]
docs.ros.org
Introduction The 1010data Reference Manual provides information about the 1010data Macro Language as well as the 1010data library of functions. The Macro Language Elements section contains information on data transformation operations such as row selection ( <sel>) and linking ( <link>). This section also provides full details on the elements used to create and contain block code such as <block> and <library>. It also covers elements related to application development, such as <dynamic> and <loop>. There is a wealth of information about the elements used to create QuickApps, which include <widget>, <layout>, and <do>. Each of the widgets is documented in detail with respect to its attributes, which are used to determine their behavior and appearance. The manual also includes examples that demonstrate basic usage of each of the widgets. The Function Reference contains details about the 1010data functions, including their parameters and return values. Many of the functions have sample usage tables, which demonstrate the behavior given different sets of input values, and examples that show how to incorporate the functions into Macro Language code.
https://docs.1010data.com/1010dataReferenceManual/Introduction.html
2021-09-17T04:07:23
CC-MAIN-2021-39
1631780054023.35
[]
docs.1010data.com
UNKNOWN: Cannot get response (timeout received)

This error message means that the Plugin didn't get a response from the VMware Daemon. Check your connection parameters and the macros CENTREONVMWAREHOST and CENTREONVMWAREPORT.

UNKNOWN: Cannot find 'VirtualMachine' object

This error message means that the Plugin didn't find the Virtual Machine. Check the Virtual Machine name in the macro HOSTVMNAME; it must match the name defined in your VMware infrastructure.
https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/virtualization-vmware2-vm.html
2021-09-17T03:09:57
CC-MAIN-2021-39
1631780054023.35
[]
docs.centreon.com
Date: Mon, 1 Apr 2019 21:02:12 +0430 From: Mazandar Wiki <[email protected]> To: Michael Schuster <[email protected]> Cc: freeBSD Mailing List <[email protected]> Subject: Re: FreeBSD 12.0-RELEASE crashes during the boot process Message-ID: <CANx7MpeMK-T1mu+wnotXJsVbWR5+aD2szDOh2FcMfygbhKymqw@mail.gmail.com> In-Reply-To: <CADqw_gKNxtqsMasM6hDFPV=NMgX821xGAjqgeLHW4erR6xzWPQ@mail.gmail.com> References: <CANx7MpfDQJEUsjo_SSYqnExagj30NAAD8cPYiR3mDZq_ztor8Q@mail.gmail.com> <CADqw_gKNxtqsMasM6hDFPV=NMgX821xGAjqgeLHW4erR6xzWPQ@mail.gmail.com> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help It's a laptop from 2011, it has an Intel core-i5 CPU, with 6GB of RAM, it has two video cards, the first is Intel HD 3000, and the second is NVIDIA GT 540M (supports NVIDIA Optimus), here is a link: On Mon, Apr 1, 2019 at 9:07 AM Michael Schuster <[email protected]> wrote: > you might get more feedback if you wrote something about the hardware > you're using. > > regards > Michael > > On Sat, Mar 30, 2019 at 10:15 PM Mazandar Wiki <[email protected]> > wrote: > >> I'm trying to boot FreeBSD from the installation media >> (FreeBSD-12.0-RELEASE-amd64-memstick.img, written on an 8GB USB Flash >> Drive >> by dd(1)). It crashes at the beginning of the boot process, right after >> printing this message: >> >> ACPI APIC TABLE: <GBT GBTUACPI> >> >> In the verbose mode: >> ACPI APIC TABLE: <GBT GBTUACPI> >> L3 cache ID shift: 4 >> L2 cache ID shift: 1 >> L1 cache ID shift: 1 >> Core ID shift: 1 >> [crashes here and nothing works, I have to restart by hand] >> >> It sometimes works (and one time, I even managed to install it, though it >> didn't work after the first reboot), but most often it crashes. I tried to >> disable AHCI in the BIOS, but that didn't work either. Checksums are OK. >> Other operating systems (including FreeBSD 9.0-RELEASE) work fine. >> _______________________________________________ >> [email protected] mailing list >> >> To unsubscribe, send any mail to " >> [email protected]" >> > > > -- > Michael Schuster > > recursion, n: see 'recursion' > Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=104785+0+archive/2019/freebsd-questions/20190407.freebsd-questions
2021-09-17T05:19:16
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
The Asset Browser window displays the Animation Sequences, BlendSpaces, AnimMontages and other animation assets that are useable for the selected Skeleton asset. Double-clicking on an asset in the Asset Browser will open the asset inside the Animation Editor so that you can preview the animation. Click image for full view. This will also populate the Asset Details window with varying options based on the asset type clicked. Each asset in the Asset Browser is color-coded and uses the same color-coding found during animation asset creation from the Content Browser. By default, the Advanced Details column view in the Asset Browser is hidden. You can un-hide the Advanced Details with the Reset Columns option from the View Options. This will give you more information about each asset as well as the ability for advanced column sorting. You can also highlight an asset in the Asset Browser to display a tooltip containing information about that asset as well as a preview. Right-clicking on an asset in the Asset Browser will give you a context menu with varying options based on the type of asset selected.
https://docs.unrealengine.com/4.26/en-US/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/
2021-09-17T05:13:30
CC-MAIN-2021-39
1631780054023.35
[array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/AssetBrowser.jpg', 'AssetBrowser.png'], dtype=object) array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/AddNewButton.jpg', 'AddNewButton.png'], dtype=object) array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/UnHideColumns.jpg', 'UnHideColumns.png'], dtype=object) array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/HighlightToolTip.jpg', 'HighlightToolTip.png'], dtype=object) array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/Persona/AssetBrowser/RightClickMenu.jpg', 'RightClickMenu.png'], dtype=object) ]
docs.unrealengine.com
Adding an Application Key Generation Workflow

This section explains how you can attach a simple approval workflow to application registration key generation.

Engage the Approval Workflow Executor for application registration key generation. You can enable the approval workflow executor for Production keys or Sandbox keys or both, by disabling the simple workflow executor and enabling the approval workflow executor for the ones you need. Please note that this workflow is not applicable to API key generation. The application key generation Approval Workflow Executor is now engaged.

<WorkFlowExtensions> ... <!--ProductionApplicationRegistration ... <!--SandboxApplicationRegistration ... </WorkFlowExtensions>

Sign in to the API Developer Portal () as a Developer Portal user and open the application with which you subscribed to the API. Click Applications and click on an ACTIVE application.

Note: If you do not have an API already created and an Application subscribed to it, follow Create a REST API, Publish an API, and Subscribe to an API to create an API and subscribe to it.

Select Production Keys or Sandbox Keys from the side navbar and click on GENERATE KEYS. Note that the following message will appear if the application key generation workflow is correctly enabled.

Sign in to the Admin Portal (https://<Server Host>:9443/admin) with admin credentials and list all the tasks for application registrations from Tasks → Application Registration, and click on approve or reject to approve or reject the pending application key generation request.

Navigate back to the API Developer Portal and view your application. It shows the application access token, consumer key and consumer secret. If the workflow request is rejected it will show a message.
https://apim.docs.wso2.com/en/3.2.0/learn/consume-api/manage-application/advanced-topics/adding-an-application-key-generation-workflow/
2021-09-17T04:22:22
CC-MAIN-2021-39
1631780054023.35
[]
apim.docs.wso2.com
2.2.1 [NamespacesXML1.1] Section 3, Declaring Namespaces

C0001: The specification states: Definition: A namespace (or more precisely, a namespace binding) is declared using a family of reserved attributes. Such an attribute's name must either be xmlns or begin xmlns:. These attributes, like any other XML attributes, may be provided directly or by default.

IE9 Mode (All Versions) Attributes that are used to declare a namespace binding must be provided directly.

C0002: The specification states: The attribute's normalized value MUST be either a URI reference or an empty string.

IE9 Mode (All Versions) An empty string can be used as the value of the default namespace, but not any other specific namespace.
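To make the distinction concrete, here is a small Python sketch (the namespace URIs are invented for illustration) showing a default namespace declaration and a prefixed declaration, and how a parser resolves element names against them.

import xml.etree.ElementTree as ET

doc = """<root xmlns="urn:example:default" xmlns:bk="urn:example:books">
  <bk:title>Namespaces in XML</bk:title>
  <note>unprefixed, bound to the default namespace</note>
</root>"""

root = ET.fromstring(doc)
print(root.tag)      # {urn:example:default}root
print(root[0].tag)   # {urn:example:books}title
print(root[1].tag)   # {urn:example:default}note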
https://docs.microsoft.com/en-us/openspecs/ie_standards/ms-xmlnsh/9b667f86-d17d-4972-8123-c3b61704004d
2021-09-17T05:31:43
CC-MAIN-2021-39
1631780054023.35
[]
docs.microsoft.com