Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
EEmeter: tools for calculating metered energy savings¶ EEmeter — an open source toolkit for implementing and developing standard methods for calculating normalized metered energy consumption (NMEC) and avoided energy use. Background - why use the EEmeter library¶ At the time of writing (Sept 2018), the OpenEEmeter, as implemented in the eemeter package and sister eeweather package, contains the most complete open source implementation of the CalTRACK Methods, which specify a family of ways to calculate and aggregate estimates of avoided energy use at a single meter, particularly suitable for use in pay-for-performance (P4P) programs. The eemeter package contains a toolkit written in the Python language which may help in implementing a CalTRACK-compliant analysis (see CalTRACK Compliance). It contains a modular set of functions, parameters, and classes which can be configured to run the CalTRACK methods and close variants. Note Please keep in mind that use of the OpenEEmeter is neither necessary nor sufficient for compliance with the CalTRACK method specification. For example, while the CalTRACK methods set specific hard limits for the purpose of standardization and consistency, the EEmeter library can be configured to edit or entirely ignore those limits. This is because the eemeter package is used not only for compliance with, but also for development of the CalTRACK methods. Please also keep in mind that the EEmeter assumes that certain data cleaning tasks specified in the CalTRACK methods have occurred prior to usage with the eemeter. The package proactively exposes warnings to point out issues of this nature where possible. Installation¶ EEmeter is a Python package and can be installed with pip. $ pip install eemeter Note If you are having trouble installing, see Using with Anaconda. Features¶ - Candidate model selection - Data sufficiency checking - Reference implementation of standard methods - CalTRACK Daily Method - CalTRACK Monthly Billing Method - CalTRACK Hourly Method - Flexible sources of temperature data. See EEweather. - Model serialization - First-class warnings reporting - Pandas DataFrame support - Visualization tools Roadmap for 2020 development¶ The OpenEEmeter project growth goals for the year fall into two categories: - Community goals - we want to help our community thrive and continue to grow. - Technical goals - we want to keep building the library in new ways that make it as easy as possible to use. Community goals¶ - Develop project documentation and tutorials A number of users have expressed how hard it is to get started when tutorials are out of date. We will dedicate time and energy this year to help create high quality tutorials that build upon the API documentation and existing tutorials. - Make it easier to contribute As our user base grows, the need and desire for users to contribute back to the library also grows, and we want to make this as seamless as possible. This means writing and maintaining contribution guides, and creating checklists to guide users through the process. Technical goals¶ - Implement new CalTRACK recommendations The CalTRACK process continues to improve the underlying methods used in the OpenEEmeter. Our primary technical goal is to keep up with these changes and continue to be a resource for testing and experimentation during the CalTRACK methods setting process. - Hourly model visualizations The hourly methods implemented in the OpenEEmeter library are not yet packaged with high quality visualizations like the daily and billing methods are.
As we build and package new visualizations with the library, more users will be able to understand, deploy, and contribute to the hourly methods. - Weather normal and unusual scenarios The EEweather package, which supports the OpenEEmeter, comes packaged with publicly available weather normal scenarios, but packaging methods for creating custom weather year scenarios would make that kind of analysis easier. - Greater weather coverage The weather station coverage in the EEweather package includes full coverage of the US and Australia, but with some technical work, it could be expanded to include greater, or even worldwide, coverage. Usage Guides¶ - Basic Usage - Advanced Usage - Tutorial - CalTRACK Compliance - API Docs
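To make the avoided-energy-use idea above concrete, here is a minimal, hypothetical sketch of the underlying calculation: an ordinary degree-day regression fit on baseline-period data and evaluated against reporting-period weather. This is not the eemeter API; the column names, the 65°F balance point, and the plain least-squares fit are all assumptions, and a CalTRACK-compliant analysis involves additional data-sufficiency checks and candidate-model selection.

```python
# Illustrative only: a hand-rolled degree-day baseline model, NOT the eemeter API.
# Column names ("usage", "temp_F") and the 65 degree F balance point are assumptions.
import numpy as np
import pandas as pd

def fit_baseline(daily: pd.DataFrame) -> np.ndarray:
    """Fit usage ~ intercept + CDD + HDD on baseline-period daily data."""
    cdd = np.clip(daily["temp_F"] - 65.0, 0, None)   # cooling degree days
    hdd = np.clip(65.0 - daily["temp_F"], 0, None)   # heating degree days
    X = np.column_stack([np.ones(len(daily)), cdd, hdd])
    coef, *_ = np.linalg.lstsq(X, daily["usage"].to_numpy(), rcond=None)
    return coef

def avoided_energy_use(coef: np.ndarray, reporting: pd.DataFrame) -> float:
    """Counterfactual (model) usage under reporting-period weather minus observed usage."""
    cdd = np.clip(reporting["temp_F"] - 65.0, 0, None)
    hdd = np.clip(65.0 - reporting["temp_F"], 0, None)
    counterfactual = coef[0] + coef[1] * cdd + coef[2] * hdd
    return float((counterfactual - reporting["usage"]).sum())
```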
https://eemeter.readthedocs.io/index.html
2022-06-25T07:30:27
CC-MAIN-2022-27
1656103034877.9
[]
eemeter.readthedocs.io
Change the types of data that BlackBerry Protect backs up - In the BlackBerry Protect application on your BlackBerry device, press the Menu key > Options. - In the BlackBerry Data to Back Up section, select the checkbox for each type of data that you want to back up. - Press the Menu key > Save.
http://docs.blackberry.com/en/smartphone_users/deliverables/31983/1833994.jsp
2014-04-16T07:55:18
CC-MAIN-2014-15
1397609521558.37
[]
docs.blackberry.com
GeoTools 2.5.x is focused on switching to the new feature model; we have a list of QA tasks to confirm the work is complete - but our scope has been met for this release. Change proposals completed for the 2.5.x branch: The following blockers represent steps that must be taken before 2.4 can go out the door. For the complete list of tasks scheduled for the 2.4.x release: Reports: The goals of the 2.5 release are: The following represents sponsored work targeted for GeoTools 2.5: In many cases this work is available as an unsupported module for GeoTools 2.4. The following work is critical to the success of GeoTools but has not attracted sponsorship - if you can help out please jump on the mailing list and let us know. GeoServer has performed a round of profiling and has brought the new feature model within 10-20% of the prior performance. uDig trunk has returned to using GeoTools trunk; as such the following will need to be sorted out before 2.5.x can go out the door (the functionality was dropped or suffered code rot between GeoTools 2.2 and GeoTools 2.5). We cannot release a version of uDig using 2.5.x until: The following technical debt really affects the project (rendering some datastores useless): There is a Technical Debt page with details of some "undone" work (often work left over when commercial funding ended for a specific task).
http://docs.codehaus.org/exportword?pageId=66511
2014-04-16T08:18:09
CC-MAIN-2014-15
1397609521558.37
[]
docs.codehaus.org
The Welsh Government has laid regulations on the default payments which can be charged to tenants occupying premises under an assured shorthold tenancy in the private rented sector in Wales. The Renting Homes (Fees etc.) (Prescribed Limits of Default Payments) (Wales) Regulations 2020 (‘the Regulations’) come into force on 28 April 2020. Under the Renting Homes (Fees etc.) (Wales) Act 2019, Welsh Ministers had the power to make regulations specifying the limits for certain types of payment that can be charged in the event of a default by the tenant. Under the Regulations, landlords or letting agents in Wales can charge tenants: - Interest at a rate of 3% above the Bank of England base rate for the late payment of rent which is more than 7 days overdue; and - The actual cost of replacing a lost key and/or changing, adding or removing a lock to gain access to the property, as evidenced by an invoice or receipt. These default fees are similar to those permitted under the Tenant Fees Act 2019, which affects England only; however, in England there is a longer grace period of 14 days for late payment of rent before interest can be charged. In respect of the replacement of a lost key, the landlord or letting agent in England can charge the reasonable costs as opposed to the actual cost of replacing a key. Here at Simply-Docs we will update our templates to reflect these legislative changes.
https://blog.simply-docs.co.uk/2020/04/07/new-property-regulations-on-default-payments-in-wales/
2021-04-10T12:38:27
CC-MAIN-2021-17
1618038056869.3
[array(['https://blog.simply-docs.co.uk/wp-content/uploads/2018/09/Terraced-Houses.jpg', 'Terraced Houses'], dtype=object) ]
blog.simply-docs.co.uk
WARNING: This module is deprecated (what does this mean). Create account: Log in to your Optimise Network account and go to My Details → Account Details → API Key to find your API Key and Private Key. Go to Content → Get Banners, search for any banner, and check its tracking link. It will look like <a href=""><img src="" border="0" width="1080" height="1080"></a> From this link you need to copy the AID parameter, for example AID=878608. Your affiliate ID is 878608.
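If you prefer to pull the affiliate ID out programmatically, a small sketch like the following works for any tracking link whose query string carries an AID parameter. The URL shown is a placeholder for illustration, not a real Optimise endpoint.

```python
# Hypothetical example: extract the AID parameter from a tracking URL.
# The URL below is made up; only its "AID" query parameter matters here.
from urllib.parse import urlparse, parse_qs

tracking_url = "https://track.example.com/click?AID=878608&BID=1234"  # placeholder URL
aid = parse_qs(urlparse(tracking_url).query).get("AID", [None])[0]
print(aid)  # -> "878608", your affiliate ID
```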
https://ce-docs.keywordrush.com/modules/affiliate/optimisemedia
2021-04-10T11:47:13
CC-MAIN-2021-17
1618038056869.3
[]
ce-docs.keywordrush.com
PHYSICAL AND MEDICAL ISSUES THAT MIGHT ACCOMPANY AUTISM A range of physical and mental health conditions frequently accompany autism. It is helpful to learn more about these issues so that you can learn to read the signs. Oftentimes, caregivers and professionals might automatically attribute certain challenges to the autism diagnosis, when the child or adult may actually have another treatable condition that is causing the difficulties. It is also important to remember that though these issues are more common among autistic people, not all children and adults diagnosed with ASD will have another diagnosis. These issues, often called comorbid conditions, include, but are not limited to: Seizure disorder Seizure disorder, also called epilepsy, occurs in as many as a third of individuals with autism, and is more common among those who also have cognitive deficits. Some researchers have suggested that seizure disorder is more common when the child has shown a regression or loss of skills. There are different types and subtypes of seizures, and a child with autism may have more than one type. The easiest to recognize are large grand mal (or tonic-clonic) seizures. People with these seizures have stiffening and spasming of muscles and typically lose consciousness. Others include petit mal (or absence) seizures, which may look like a vacant stare, typically for up to 15 seconds. Subclinical seizures are so subtle that they may only show up in an electroencephalogram (EEG). It is not clear whether subclinical seizures have effects on language, understanding and behavior. Seizures associated with autism usually start early in childhood, or during adolescence, but they may occur at any time. If you are concerned that your child may be having seizures, tell your child’s health care provider. They may order tests that may include an EEG, a Magnetic Resonance Imaging (MRI) scan, a Computed Axial Tomography (CAT) scan and a Complete Blood Count (CBC). Children and adults with epilepsy are often treated with anticonvulsants or seizure medicines to reduce or eliminate seizures. If your child has epilepsy, work closely with a neurologist to find the medicine or combination of medicines that works best for your child with the fewest side effects. You can also learn the best ways to ensure your child’s safety during a seizure. Gastrointestinal disorders Many parents report gastrointestinal (GI) problems in their children with autism. Surveys have suggested that between 46 and 85 percent of children with autism have problems such as chronic constipation or diarrhea. One study found 70 percent of children with autism had a history of gastrointestinal symptoms, such as: - Abnormal pattern of bowel movements - Frequent constipation - Frequent vomiting - Frequent abdominal pain The exact prevalence of GI problems, such as gastritis, chronic constipation, colitis and esophagitis, in people with autism is unknown. If your child has GI symptoms, talk with their health care provider. They may want to consult a gastroenterologist, ideally one who works with people with autism. GI discomfort can contribute to behavioral challenges, and relieving that discomfort may reduce the frequency or intensity of those challenges. Some evidence suggests that children may be helped by dietary intervention for GI issues, including the elimination of dairy- and gluten-containing foods. Ask your child’s health care provider to develop a comprehensive treatment plan for your child.
In January 2010, Autism Speaks initiated a campaign to inform pediatricians about the diagnosis and treatment of GI problems associated with autism. Genetic disorders Some children with autism have an identifiable genetic condition that affects brain development. These genetic disorders include: - Fragile X syndrome - Angelman syndrome - Tuberous sclerosis - Chromosome-15 duplication syndrome - Other single-gene and chromosomal disorders While further study is needed, single-gene disorders appear to affect 15 to 20 percent of those with ASD. Some of these syndromes have characteristic features or family histories. Experts recommend that all people with an autism diagnosis get genetic testing to find these genetic changes. It may prompt your doctor to refer your child to a geneticist or neurologist for further testing. The results can help guide treatment, awareness of associated medical issues and life planning. Sleep problems Sleep problems are common in children and adolescents with autism. Sleep problems can affect the whole family’s health and well-being. They can also have an impact on the benefits of therapy for your child. Sleep problems may be caused by medical issues, such as obstructive sleep apnea or gastroesophageal reflux. Addressing the medical issues may solve the problem. When there’s no medical cause, sleep issues may be managed with behavioral interventions. These include sleep-hygiene measures, such as limiting sleep during the day and establishing regular bedtime routines. If sleep habits don’t improve, cognitive behavioral therapy is a type of therapy that can help problem-solve sleep issues. If additional help is needed, a pharmaceutical-grade melatonin supplement has also been shown to be effective and safe in children in the short-term, for up to three months. Don’t give your child melatonin or other sleep aids without talking to your child’s health care provider. For additional information on sleep issues, visit autismspeaks.org/sleep. Sensory processing disorder Many autistic children have unusual responses to sensory stimuli and process sensory input differently than nonautistic people. This means that while information is sensed normally, it may be perceived much differently. Sensory systems that can be affected include: - Vision - Hearing - Touch - Smell - Taste - Sense of movement (vestibular system) - Sense of position (proprioception and interoception) Sensory Processing Disorder (SPD), formerly referred to as Sensory Integration Dysfunction (SID), is when sensations that feel normal to others are experienced as painful, unpleasant or confusing. Although SPD is not currently recognized as a distinct medical diagnosis, it is a term commonly used to describe a set of symptoms that can involve hypersensitivity (a tendency, outside the norm, to react negatively or with alarm to sensory input which is generally considered harmless or nonirritating to others. Also called sensory defensiveness.) or hyposensitivity (lack of a behavioral response, or insufficient intensity of response, to sensory stimuli considered harmful and irritating to others). An example of hypersensitivity is an inability to tolerate wearing clothing, being touched or being in a room with normal lighting. Hyposensitivity may be apparent in a child’s increased tolerance for pain or a constant need for sensory stimulation. Treatment for SPD is usually addressed with occupational therapy and/or sensory integration therapy. 
Sensory integration therapy helps people with SPD by exposing them to sensory stimulation in structured, repetitive ways so they can learn to respond in new ways. SI therapy is most often play-based and is provided by an occupational therapist. Pica Pica is an eating disorder involving eating things that are not food. Children between 18 and 24 months of age often eat non-food items, and this is typically a normal part of development. Some children with autism and other developmental disabilities beyond this age continue to eat non-food items, such as dirt, clay, chalk and paint chips. Children with signs of persistent mouthing of fingers or objects, including toys, should be tested for elevated blood levels of lead, especially if there is a known potential for environmental exposure to lead. If you’re worried about pica, contact your child’s health care provider. They can help you assess if your child needs a behavioral intervention or if it is something you can manage at home. Download the Pica Guide for Parents at autismspeaks.org/tool-kit/atnair-p-pica-guide-parents. Mental and behavioral health disorders Some children diagnosed with ASD will receive an additional mental health-related diagnosis, such as attention deficit hyperactivity disorder (ADHD) or anxiety disorder. Studies suggest that 20 percent of autistic children also have ADHD, and 30 percent struggle with an anxiety disorder, including: - Social phobia (also called social anxiety disorder): characterized by an intense, persistent fear of being watched and judged by others - Separation anxiety: characterized by an extreme fear of being separated from a specific person, such as a parent or teacher - Panic disorder: characterized by spontaneous, seemingly out-of-the-blue panic attacks, which create a preoccupation with the fear of a recurring attack - Specific phobias: characterized by excessive and unreasonable fears in the presence of or in anticipation of a specific object, place or situation Symptoms of ADHD include ongoing problems with attention, hyperactivity and impulsivity. However, these symptoms also can result from autism. For this reason, evaluation for ADHD and anxiety should be done by someone with expertise in both disorders. One study found that just 1 in 10 children with autism and ADHD were receiving medicine to relieve the ADHD symptoms. Children with autism express anxiety or nervousness in many of the same ways as typically developing children. But they may have trouble communicating how they feel. Outward signs may be the best clues. In fact, some experts suspect that signs of anxiety, such as sweating and acting out, may be especially prominent among those with ASD. Symptoms can include a racing heart, muscular tension and stomach aches. It is important for your child to be evaluated by a professional who has expertise in both autism and anxiety to provide the best treatment options for your child.
https://docs.autismspeaks.org/100-day-kit-school-age-children/physical-and-medical-issues
2021-04-10T11:50:58
CC-MAIN-2021-17
1618038056869.3
[array(['https://assets.foleon.com/eu-west-2/uploads-7e3kk3/25315/pexels-ketut-subiyanto-4473982.5f66c5597042.jpg', None], dtype=object) array(['https://assets.foleon.com/eu-west-2/uploads-7e3kk3/25315/mason_2jpg.d810e8250dc7.jpeg', None], dtype=object) array(['https://assets.foleon.com/eu-west-2/uploads-7e3kk3/25315/pexels-skitterphoto-12165.c2c6d00e015f.jpg', None], dtype=object) ]
docs.autismspeaks.org
mastercomfig¶ Welcome to mastercomfig, a modern Team Fortress 2 performance and customization config. This config is by default for modern PCs and aims to disable heavily unoptimized features and adjust other settings where it does not affect behavior or visuals noticeably. However, the config is documented extensively and also has presets so that you may adjust settings to your needs/preferences. You may find that this config makes TF2 a lot smoother, eliminates stuttering, reduces load times and increases FPS. This is because this config is heavily tuned and the commands and values are based on TF2’s source code, rather than just experiments, guesswork and trying to understand the vague/non-existent documentation. The config is constantly updated with tweaks, new features and documentation improvements — iterated upon based on user feedback and benchmarks. So if you think there’s an unoptimal value, or if it’s just as simple as a comment being confusing to you, report the problem and you’ll most likely see a fix in a future update (see Support). Finally, you can buy early access to the config for a month through Ko-fi. This will give you access to more frequent updates released throughout the month, rather than the monthly stable releases of the config.
https://docs.mastercomfig.com/en/9.1.0/
2021-04-10T12:20:26
CC-MAIN-2021-17
1618038056869.3
[]
docs.mastercomfig.com
ATmega8515 ID for the board option in “platformio.ini” (Project Configuration File): [env:ATmega8515] platform = atmelavr board = ATmega8515 You can override default ATmega8515 settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest ATmega8515.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:ATmega8515] platform = atmelavr board = ATmega8515 ; change microcontroller board_build.mcu = atmega8515 ; change MCU frequency board_build.f_cpu = 16000000L
https://docs.platformio.org/en/latest/boards/atmelavr/ATmega8515.html
2021-04-10T11:20:26
CC-MAIN-2021-17
1618038056869.3
[]
docs.platformio.org
Vicon Nexus 2.5 is a point release that provides features and enhancements in addition to those that were included since Nexus 2.0. For links to descriptions of the features and enhancements that are specific to Nexus 2.5, see Vicon Nexus 2.5 new features and functions. For a description of the other features and enhancements that have been released since Nexus 2.0, see the PDF What's New in Vicon Nexus 2.4. For information about requirements and systems supported for this version of Nexus, see: Note The Vicon motion capture system and the Nexus software, manufactured by Vicon Motion Systems Limited, have been tested prior to shipment and meet the metrological requirements as detailed in the Medical devices directive. Nexus 2.5 is compatible with and fully supported under the Microsoft Windows 7 operating system. Installation, software operation and required third-party drivers have been tested under this operating system. Vicon recommends the Windows 7, 64-bit operating system for use with Nexus 2.5. Nexus 2.5 has also undergone limited testing under the Windows 10 operating system. Although Vicon Nexus may install and function under other Microsoft Windows operating systems, this is not officially supported or recommended by Vicon. If Basler digital cameras will be connected to Nexus 2.5, ensure that, in addition to installing MATLAB, you install the .NET Framework version 4.5. Before you install Vicon Nexus 2.5, note the following limitations on supported systems: This section describes functionality that is dependent upon the version of Vicon Nexus that is being upgraded: Note This section applies only to versions of Nexus that are earlier than 2.0. Nexus 2.5 installs into its own folder, called Nexus2.5. If you already have Nexus 1.x installed, it will remain installed alongside the new Nexus installation. On installation, Nexus 2.5 automatically scans for Nexus 1.x files, displays a list of any older files that it finds, and provides an automated system for importing these into Nexus 2.5. For more information on the installation and licensing process, see Installing and licensing Vicon Nexus. If you are upgrading from a previous version of Nexus 2, during installation you are given the option of adding the Auto Intelligent Gap Fill button to your Nexus toolbar. For more information on this feature, see Automated gap-filling. When you install Nexus 2.5, a dialog box similar to the following is displayed. To add the Auto Intelligent Gap Fill button to your toolbar, click Upgrade Files. On first launch, Nexus 2.5 scans the installation directories of earlier versions of Nexus 2 and offers to automatically transfer custom objects…
https://docs.vicon.com/exportword?pageId=50888707
2021-04-10T12:31:12
CC-MAIN-2021-17
1618038056869.3
[]
docs.vicon.com
How to edit general options for a NAPTR record previously created in Address Manager. To edit a NAPTR record, navigate to the NAPTR record that you wish to edit. - Click the NAPTR record name menu and select Edit. - Under General, set the required parameters.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Editing-a-NAPTR-Record/8.3.2
2021-04-10T12:18:02
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
Forwarding Policy determines the forwarding behavior if the forwarder is not available. As well as the Forwarding option, you can configure a related option called Forwarding Policy. The Forwarding Policy has two settings: - Forward First—The forwarding client tries to send the query to the forwarder. If the forwarder is not available, the client tries to resolve the query itself. This is the default value. - Forward Only—The forwarding client tries to send the query to the forwarder. If the forwarder is not available, no response is returned. Address Manager imports the Forwarding and Forwarding Policy deployment options at the DNS server level. You can configure this option at the server level, the configuration level, or the view level. In each case, it is deployed to the server level on Windows. To define the Forwarding Policy, navigate to the level at which you want to define the option, then click the Deployment Options tab. - Under Deployment Options, click New and select DNS Option. The Add DNS Deployment Option page opens. - From the Option list, select Forwarding Policy. - From the Specify list, select first or only.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Forwarding-Policy/8.3.0
2021-04-10T12:12:59
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
How to find the software version of the DDW server. You can find out which software version the DDW server is running from the Server Control menu. To query the server version: - Select the Servers tab. Tabs remember the page you last worked on, so select the Servers tab again to ensure you are working with the Configuration information page. - Under Server, click a DDW server. - Click the DDW name, then select Server Control. The Server Control page opens. - From the Action to perform drop-down menu, select Server version query. - Click Execute. The server version appears on the Server Control Result page.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Querying-the-Software-Version/8.3.1
2021-04-10T12:00:48
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
Remove an IPv4/IPv6 address and its equivalent netmask from one of the available interfaces on Address Manager or DNS/DHCP Server. To remove an IP address: - From Main Session mode, type configure interfaces and press ENTER. - Type modify <interface> and press ENTER. - Type remove address <ipv4 or ipv6 address/netmask> and press ENTER. - Type save and press ENTER. The Administration Console saves your settings. - Type exit and press ENTER until you return to Main Session mode. The IP address will no longer be available.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Removing-an-IPv4/IPv6-address/9.0.0
2021-04-10T12:10:38
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
Methods Any view model follows the lifecycle below. Lifecycle The order of events in the view model's lifecycle is as follows: Extract data from the model, optionally using createView or createViews. The following sections describe these methods. onCreate This method is called after shouldCreate and before generating any of the view's components. You can use this method for retrieving or building additional content not included in the received model. For example, when a user requests an article, the model received by the view model includes properties pertaining to the article itself, typically heading, author, and body. To display a list of articles recently viewed by the user, or any other information not related to the article itself, implement additional logic in the onCreate method. The following snippet is an example of finding articles recently viewed by a user at the time of rendering the requested article. public class ArticleViewModel extends ViewModel<Article> { private List<Article> histViewedArticles; @Override protected void onCreate(ViewResponse response) { super.onCreate(response); histViewedArticles = historyItem.findByUser(user, 20); } protected List<Article> getArticleHistory() { return histViewedArticles; } } createView This method creates a view using the specified view-model class and model. You can use this method to modularize your code. For example, you can have a single view model associated with all your content types. Regardless of the item a client requests, the single view model is run, and inside that view model you can identify the correct view model for creating the view. if (model instanceof Section) { return createView(SectionViewModel.class, model); } else { return createView(ArticleViewModel.class, model); } createViews This method creates an Iterable over views using the specified view-model class and model. You can use this method to create a series of related views, such as a series of comments to an article, which you incorporate into a parent view. public class CommentListViewModel extends ViewModel<CommentList> { public Iterable<CommentsView> getComments() { return createViews(CommentsViewModel.class, model.getComments()); } } The previous snippet creates an Iterable of comment views based on data provided by the model CommentList. createView(CommentListViewModel.class, CommentListViewModel.getComments()); The previous snippet creates a single view comprised of the comments provided by the view model CommentListViewModel.
https://docs.brightspot.com/4.2/en/developer-guide/view-system/view-models/methods.html
2021-04-10T11:36:56
CC-MAIN-2021-17
1618038056869.3
[]
docs.brightspot.com
You must create at least one Swift tenant account if your StorageGRID Webscale system will be accessed using the Swift REST API. You can create additional Swift tenants if you want to segregate the objects stored on your grid by different entities. Each tenant account has its own groups and users and its own containers and objects. Tenant accounts can optionally use identity federation to allow federated groups and users access to the Swift client. If you want to use identity federation, you must configure a federated identity source (such as Active Directory or OpenLDAP). You can either use the same identity source that was configured for the Grid Management Interface, or the tenant account can configure its own identity source using the Tenant Management Interface. For information about creating a Swift tenant account, see the StorageGRID Webscale Administrator Guide. For more information about configuring a tenant account, see the StorageGRID Webscale Tenant Administrator Guide.
https://docs.netapp.com/sgws-110/topic/com.netapp.doc.sg-swift/GUID-443F86E8-54DE-4980-8459-259F8A42FEBE.html
2021-04-10T11:44:37
CC-MAIN-2021-17
1618038056869.3
[]
docs.netapp.com
Troubleshooting CDH Symptom: hadoop fs -ls /user/unravel/HOOK_RESULT_DIR/ indicates that the directory does not exist. Problem: Unravel Server RPM is not yet installed, or Unravel Server RPM is installed on a different HDFS cluster, or the HDFS home directory for Unravel does not exist, or kerberos/sentry actions are needed. Remedy: Install the Unravel RPM on the Unravel host, or verify that the unravel user exists and has a /user/unravel/ directory in HDFS with write access to it. Symptom: ClassNotFound error for com.unraveldata.dataflow.hive.hook.UnravelHiveHook during Hive query execution. Problem: The Unravel hive hook JAR was not found in $HIVE_HOME/lib/. Remedy: Confirm that the UNRAVEL_SENSOR parcel was distributed and activated in Cloudera Manager, or put the Unravel hive-hook JAR corresponding to hive-version in jar-destination on each gateway as follows: cd /usr/local/unravel/hive-hook/; cp unravel-hive-hive-version*hook.jar jar-destination
https://docs.unraveldata.com/en/cdh-troubleshoot.html
2021-04-10T12:06:14
CC-MAIN-2021-17
1618038056869.3
[]
docs.unraveldata.com
What Is Amazon Kinesis Agent for Microsoft Windows? Amazon Kinesis Agent for Microsoft Windows (Kinesis Agent for Windows) is a configurable and extensible agent. It runs on fleets of Windows desktop computers and servers, either on-premises or in the AWS Cloud. Kinesis Agent for Windows efficiently and reliably gathers, parses, transforms, and streams logs, events, and metrics to various AWS services, including Kinesis Data Streams, Kinesis Data Firehose, Amazon CloudWatch, and CloudWatch Logs. From those services, you can then store, analyze, and visualize the data using a variety of other AWS services, including the following: The following diagram illustrates a simple configuration of Kinesis Agent for Windows that streams log files to Kinesis Data Streams. For more information about sources, pipes, and sinks, see Amazon Kinesis Agent for Microsoft Windows Concepts. The following diagram illustrates some of the ways you can build custom, real-time data pipelines using stream-processing frameworks. These frameworks include Kinesis Data Analytics, Apache Spark on Amazon EMR, and AWS Lambda. ![ Diagram showing the interaction of data with stream processing agents including Kinesis Data Analytics, Spark on EMR, EC2, and Lambda. ](images/KinesisAgentDataPipeline.png) Topics About AWS Amazon Web Services (AWS) is a collection of digital infrastructure services that you can use when developing applications. The services include computing, storage, database, analytics, and application synchronization (messaging and queuing). AWS uses a pay-as-you-go service model. You are charged only for the services that you—or your applications—use. Also, to make its services more approachable for prototyping and experimentation, AWS offers a free usage tier. On this tier, services are free below a certain level of usage. For more information about AWS costs and the Free Tier, see the Getting Started Resource Center. To create an AWS account, open the AWS home page What Can You Do with Kinesis Agent for Windows? Kinesis Agent for Windows provides the following features and capabilities: Collect Logs, Events, and Metrics Data Kinesis Agent for Windows collects, parses, transforms, and streams logs, events, and metrics from fleets of servers and desktops to one or more AWS services. The payload received by the services can be in a different format from the original source. For example, a log might be stored in a particular textual format (such as syslog format) on a server. Kinesis Agent for Windows can collect and parse that text and optionally transform it to JSON format, for example, before streaming to AWS. This facilitates simpler processing by some AWS services that consume JSON. Data streamed to Kinesis Data Streams can be continuously processed by Kinesis Data Analytics to generate additional metrics and aggregated metrics, which in turn can power live dashboards. You can store the data using a variety of AWS services (such as Amazon S3) depending on how the data is used downstream in a data pipeline. Integrate with AWS Services You can configure Kinesis Agent for Windows to send log files, events, and metrics to several different AWS services: Kinesis Data Firehose — Easily store streamed data in Amazon S3, Amazon Redshift, Amazon ES, or Splunk for further analysis. Kinesis Data Streams — Process streamed data using custom applications hosted in Kinesis Data Analytics or Apache Spark on Amazon EMR. 
Or use custom code running on Amazon EC2 instances, or custom serverless functions running in AWS Lambda. CloudWatch — View streamed metrics in graphs, which you can combine into dashboards. Then set CloudWatch alarms that are triggered by metric values that breach preset thresholds. CloudWatch Logs — Store streamed logs and events, and view and search them in the AWS Management Console, or process them further downstream in a data pipeline. Install and Configure Quickly You can install and configure Kinesis Agent for Windows in just a few steps. For more information, see Installing Kinesis Agent for Windows and Configuring Amazon Kinesis Agent for Microsoft Windows. A simple declarative configuration file specifies the following: The sources and formats of logs, events, and metrics to gather. The transformations to apply to the gathered data. Additional data can be included, and existing data can be transformed and filtered. The destinations where the final data is streamed, and the buffering, sharding, and format for the streaming payloads. Kinesis Agent for Windows comes with built-in parsers for log files generated by common Microsoft enterprise services such as: Microsoft Exchange SharePoint Active Directory domain controllers DHCP servers No Ongoing Administration Kinesis Agent for Windows automatically adapts to various situations without losing any data. These include log rotation, recovery after reboot, and temporary network or service interruptions. You can configure Kinesis Agent for Windows to automatically update to new versions. No operator intervention is required in any of these situations. Extend Using Open Architecture If the declarative capabilities and built-in plugins are insufficient for monitoring server or desktop systems, you can extend Kinesis Agent for Windows by creating plugins. New plugins enable new sources and destinations for logs, events, and metrics. The source code for Kinesis Agent for Windows is available at Benefits Kinesis Agent for Windows performs the initial data gathering, transformation, and streaming for logs, events, and metrics for data pipelines. Building these data pipelines has numerous benefits: Analysis and Visualization The integration of Kinesis Agent for Windows with Kinesis Data Firehose and its transformation capabilities make it easy to integrate with several different analytic and visualization services: Amazon QuickSight — A cloud-based BI service that can ingest from many different sources. Kinesis Agent for Windows can transform data and stream it to Amazon S3 and Amazon Redshift via Kinesis Data Firehose. This process enables discovery of deep insights from the data using Amazon QuickSight visualizations. Athena — An interactive query service that enables SQL-based querying of data. Kinesis Agent for Windows can transform and stream data to Amazon S3 via Kinesis Data Firehose. Athena can then interactively execute SQL queries against that data to rapidly inspect and analyze logs and events. Kibana — An open-source data visualization tool. Kinesis Agent for Windows can transform and stream data to Amazon ES via Kinesis Data Firehose. You can then use Kibana to explore that data. Create and open different visualizations, including histograms, line graphs, pie charts, heat maps, and geospatial graphics. Security A log and event data analysis pipeline that includes Kinesis Agent for Windows can detect and alert on security breaches in organizations, which can help you block or stop attacks. 
Application Performance Kinesis Agent for Windows can collect logs, events, and metric data about application or service performance. A complete data pipeline can then analyze this data. This analysis helps you improve your application and service performance and reliability by detecting and reporting on defects that otherwise might not be apparent. For example, you can detect significant changes in the execution times of service API calls. When correlated to a deployment, this capability helps you locate and resolve new performance problems with services that you own. Service Operations A data pipeline can analyze the data collected to predict potential operational issues and provide insight into how to avoid service outages. For example, you can analyze logs, events, and metrics to determine current and projected capacity usage so that you can bring additional capacity online before service users are affected. If a service outage occurs, you can analyze the data to determine the impact on customers during the outage period. Auditing A data pipeline can process the logs, events, and metrics that Kinesis Agent for Windows collects and transforms. You can then audit this processed data using various AWS services. For example, Kinesis Data Firehose could receive a data stream from Kinesis Agent for Windows, which stores the data in Amazon S3. You could then audit this data by executing interactive SQL queries using Athena. Archiving Often the most important operational data is data that is recently collected. However, analysis of data that is collected about applications and services over several years can also be useful, for example, for long range planning. Keeping large amounts of data can be expensive. Kinesis Agent for Windows can collect, transform, and store data in Amazon S3 via Kinesis Data Firehose. Therefore, Amazon S3 Glacier is available to reduce the costs of archiving older data. Alerting Kinesis Agent for Windows streams metrics to CloudWatch. In turn, you can create CloudWatch alarms to send a notification via Amazon Simple Notification Service (Amazon SNS) when a metric consistently violates a specific threshold. This gives engineers a better awareness of the operational issues with their applications and services. Getting Started with Kinesis Agent for Windows To learn more about Kinesis Agent for Windows, we recommend that you start with the following sections:
https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
2021-04-10T12:16:40
CC-MAIN-2021-17
1618038056869.3
[array(['images/KinesisAgentSimplePipe.png', 'Data flow diagram depicting Kinesis Agent for Windows streaming log files to Kinesis Data Streams.'], dtype=object) array(['images/LoggingIcon.png', 'Icon representing logging'], dtype=object) array(['images/EndpointsIcon.png', 'Icon representing service endpoints'], dtype=object) array(['images/EasyIcon.png', 'Icon representing ease of use'], dtype=object) array(['images/NoAdminIcon.png', 'Icon representing ease of operations'], dtype=object) array(['images/OpenIcon.png', 'Icon representing extensible architecture'], dtype=object) array(['images/Visualization.png', 'Icon representing visualization of data'], dtype=object) array(['images/SecurityIcon.png', 'Icon representing security scenarios'], dtype=object) array(['images/Performance.png', 'Icon representing performance'], dtype=object) array(['images/ServiceOps.png', 'Icon representing services'], dtype=object) array(['images/Audit.png', 'Icon representing service auditing'], dtype=object) array(['images/Archive.png', 'Icon representing archived data'], dtype=object) array(['images/Alerting.png', 'Icon representing service alerting'], dtype=object) ]
docs.aws.amazon.com
Submitting a video to AWS Elemental You can submit your videos to AWS Elemental for editing and enhancements. Before you submit a video to AWS Elemental, plan which features you want to include with the published version, such as transcription, advertisements, and graphical overlays. You need to configure your submission to include the features you need. Note AWS Elemental provides the features described in this section. If you do not have or do not want to use an AWS account, you can add these features using a video editor on your own laptop, and then publish the video. For details, see Videos. On the dashboard, under Quick Start, select Video. The content edit page appears. From Provider, select AWS Elemental (Upload). A form appears. Under Files, click the Add icon. A form appears. From the File list, select New Upload, and browse to the video that you want to upload. A preview appears. Using the following table as a reference, configure the submission to include the required features. Click the Save icon. Brightspot submits the video to AWS Elemental. Wait until the confirmation message appears that the job is complete. Complete your site's workflow and publish the video.
https://docs.brightspot.com/4.2/en/plugins-guide/integrating-amazon-elemental-services-into-brightspot/submitting-a-video-to-aws-elemental.html
2021-04-10T11:59:13
CC-MAIN-2021-17
1618038056869.3
[]
docs.brightspot.com
jit.repos Description jit.repos performs cell positioning on an input matrix received in its left inlet, using a second input matrix received in its right inlet as a spatial map. Examples Discussion The spatial map should be a 2-plane matrix, where plane 0 specifies the x offset and plane 1 specifies the y offset. You can do fractional repositioning by setting the interpbits attribute to a non-zero value -- the spatial map values are considered to be fixed point values with a fractional component of interpbits. Matrix Operator More about Matrix Operators The Jitter MOP MOP Arguments MOP Attributes MOP Messages Attributes boundmode [int] Boundary mode for values outside the range (0, width) (0, height) (default = 3 (clip)) 0 = ignore: Values that exceed the limits are ignored. 1 = clear: Values that exceed the limits are set to 0. 2 = wrap: Values that exceed the limits are wrapped around to the opposite limit with a modulo operation. 3 = clip: Values are limited not to exceed width or height. 4 = fold: Values that exceed the limits are folded back in the opposite direction. interpbits [int] The number of bits considered as fraction for spatial mapping values (default = 0) mode [int] Offset mode flag (default = 0 (absolute offsets)) 0 = spatial map values specified as absolute offsets 1 = spatial map values specified as relative offsets offset_x [int] The offset added to the x values in the spatial map matrix (default = 0) offset_y [int] The offset added to the y values in the spatial map matrix (default = 0)
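As a rough mental model of the absolute-offset mode with the clip boundary behavior, here is a short numpy sketch. It is an illustration only, not Jitter code; the single-plane source matrix and the shift-by-10 map are arbitrary assumptions.

```python
# A rough numpy analogue of what jit.repos does with absolute offsets and
# boundmode 3 (clip); this is an illustration, not Jitter code.
import numpy as np

def repos_clip(src: np.ndarray, xmap: np.ndarray, ymap: np.ndarray) -> np.ndarray:
    """For each output cell (y, x), fetch src[ymap[y, x], xmap[y, x]], clipping to bounds."""
    h, w = src.shape[:2]
    xs = np.clip(xmap, 0, w - 1)   # boundmode 3: limit x lookups to 0..width-1
    ys = np.clip(ymap, 0, h - 1)   # boundmode 3: limit y lookups to 0..height-1
    return src[ys, xs]

# Example: shift the image 10 cells to the right by sampling 10 cells to the left.
src = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
yy, xx = np.indices(src.shape)
out = repos_clip(src, xx - 10, yy)
```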
https://docs.cycling74.com/max8/refpages/jit.repos
2021-04-10T11:16:30
CC-MAIN-2021-17
1618038056869.3
[array(['/static/max8/images/14d8a9c72d49e9caf0e9dd866082307b.png', None], dtype=object) ]
docs.cycling74.com
The Plug-ins Toolbar Browser The Plug-ins toolbar browser provides another way to explore the plug-in files you use when patching in a quick and efficient manner. You can search for files and patch directly from the browser. Displaying/Hiding the Plug-ins toolbar browser - Click on the Plug-ins button in the left patcher window toolbar to display the Plug-ins toolbar browser. Clicking on the Plug-ins button when the toolbar browser is displayed will close the browser window. Exploring files using the Plug-ins toolbar browser - Type a word into the Plug-ins toolbar browser search bar, and the browser will display Audio Unit plug-ins, VST plug-ins, VST3 plug-ins or Max for Live devices whose titles are filtered according to your search terms. - Click on one of the selections on the left-hand side of the toolbar browser to display a narrower list of plug-ins (i.e. clicking on Max for Live will display only Max for Live devices). Clicking on All will display all types of plug-ins in alphabetical order. - Click on the name of a plug-in to select it and display the file type in the Plug-ins toolbar browser's display area. Patching from the Plug-ins toolbar browser - Click on the name of a device in the Plug-ins toolbar browser to select it and drag the selected object into the unlocked patcher window. If the selected plug-in is a VST, VST3 or Audio Unit plug-in, an MSP vst~ object with the plug-in already loaded will appear in the patcher window. If the selected plug-in is a Max for Live device, an MSP amxd~ object with the device already loaded will appear in the patcher window. Note: You can also hold down the option key and click and drag files into a locked patcher window.
https://docs.cycling74.com/max8/vignettes/plug-ins_toolbar_browser
2021-04-10T12:24:25
CC-MAIN-2021-17
1618038056869.3
[array(['/static/max8/images/5d43458b1d576844bacf962a0c4ff34b.png', None], dtype=object) array(['/static/max8/images/8296e4f4aaafd914c24d0513b6df63fe.png', None], dtype=object) array(['/static/max8/images/6e885367c5930d0504413083d736afa1.png', None], dtype=object) array(['/static/max8/images/0f02685c234849f16724cee37f7e8412.png', None], dtype=object) array(['/static/max8/images/b537455e2b31254214f833430211da67.png', None], dtype=object) array(['/static/max8/images/7114c0c05c9472bf2437e238bf63b69e.png', None], dtype=object) array(['/static/max8/images/bfed1acf3b81cd4e2ac063ef92d8e875.png', None], dtype=object) ]
docs.cycling74.com
Geographical Location Codes Geographical locations data can be used for Location Targeting e.g. show ads to people in a specific country/region, state/province, county, metro area (Nielsen DMA® in the United States), postal code, or city. You can call the GetGeoLocationsFileUrl operation to get a temporary file URL that can be used to download the latest geographical locations data. You can also get the locations data from the Microsoft Advertising Developer Portal. You must be signed in to the developer portal with a Microsoft account user who has Microsoft Advertising credentials. For code examples that demonstrate how to download the geographical locations codes, see Geographical Locations Code Example. Location Codes File Format The comma separated value (CSV) file contains data organized in the following non-localized column headings. Only the Bing Display Name is localized depending on the file URL used above. Important New columns may be added at any time, so your implementation must ignore unknown columns. The order of locations is not guaranteed, so you should not take dependencies on any perceived column sort order or hierarchy. File Format Version 2.0 If you specified Version as 2.0 when calling the GetGeoLocationsFileUrl operation, the following data is available in the downloaded file. Bing Ads API currently only supports file format version 2.0. Country Codes In some contexts the API requires a country code string e.g., for the business address of an AdvertiserAccount object. The following country codes are supported by Microsoft Advertising. See Also Show Ads to Your Target Audience
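A minimal sketch of reading the downloaded CSV defensively, in the spirit of the note above: address columns by header name rather than by position and ignore any columns you do not recognize. The file name "geolocations.csv" is a placeholder, and "Bing Display Name" is the only column name taken from the text above; any other headers your application needs are assumptions to verify against the actual file.

```python
# Read the locations CSV by header name, ignoring unknown columns and column order.
# "geolocations.csv" is a placeholder file name.
import csv

wanted = {"Bing Display Name"}  # add the other headers your application actually needs

with open("geolocations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        known = {k: v for k, v in row.items() if k in wanted}  # drop unrecognized columns
        print(known.get("Bing Display Name"))
```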
https://docs.microsoft.com/en-us/advertising/guides/geographical-location-codes?view=bingads-13&viewFallbackFrom=bingads-12
2021-04-10T11:01:35
CC-MAIN-2021-17
1618038056869.3
[]
docs.microsoft.com
Using a Symbol Server A symbol server enables the debugger to automatically retrieve the correct symbol files from a symbol store - an indexed collection of symbol files - without the user needing to know product names, releases, or build numbers. The Debugging Tools for Windows package includes the symbol server SymSrv (symsrv.exe). Using SymSrv with a Debugger SymSrv can be used with WinDbg, KD, NTSD, or CDB. To use this symbol server with the debugger, simply include the text srv* in the symbol path. For example: set _NT_SYMBOL_PATH = srv*DownstreamStore*SymbolStoreLocation where DownstreamStore specifies the local directory or network share that will be used to cache individual symbol files, and SymbolStoreLocation is the location of the symbol store either in the form \\server\share or as an internet address. For more syntax options, see Advanced SymSrv Use. Microsoft has a Web site that makes Windows symbols publicly available. You can refer directly to this site in your symbol path in the following manner: set _NT_SYMBOL_PATH=srv*DownstreamStore* where, again, DownstreamStore specifies the local directory or network share that will be used to cache individual symbol files. For more information, see Microsoft Public Symbols. If you plan to create a symbol store, configure a symbol store for web (HTTP) access, or write your own symbol server or symbol store, see Symbol Stores and Symbol Servers. Using AgeStore to Reduce the Cache Size Any symbol files downloaded by SymSrv will remain on your hard drive after the debugging session is over. To control the size of the symbol cache, the AgeStore tool can be used to delete cached files that are older than a specified date, or to reduce the contents of the cache below a specified size. For details, see AgeStore.
https://docs.microsoft.com/ko-kr/windows-hardware/drivers/debugger/using-a-symbol-server
2021-04-10T12:28:58
CC-MAIN-2021-17
1618038056869.3
[]
docs.microsoft.com
The MapBrowser boundaries feature allows MapBrowser users to visualise boundary lines as well as other data about a property or land parcel, as an overlay on Nearmap imagery. This guide includes the following sections: About the Boundary Layer Where does boundary data come from? Boundary data is provided by PSMA in Australia and Digital Map Products in the USA. Boundary layers are visible at zoom levels of 16 or higher. Australian users can select whether to view Cadastre or Property lines. - Choose the colour, brightness and opacity of areas and radii. - Choose the colour and brightness of lines. - Adjust the opacity and background of the project by dragging the slider to the left (lighter) or right (darker). Boundary Frequently Asked Questions Why can't I see boundary layers? - Boundary layers are only visible at zoom levels of 16 or greater. To check your zoom level, look at the URL. The number before the "z" is your zoom level. E.g. "16.00z". - Sometimes there's not enough contrast between the imagery and the boundary lines. Use the colour picker and the opacity and background sliders to adjust the display.
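If you check zoom levels often, a tiny helper like the following pulls the number out of the URL. This assumes only that the zoom appears as a number immediately followed by "z", as in the "16.00z" example above; the sample URL is illustrative, not a real MapBrowser link.

```python
# Hypothetical helper: extract the zoom level from a MapBrowser-style URL.
import re
from typing import Optional

def zoom_from_url(url: str) -> Optional[float]:
    """Return the zoom level encoded in the URL (e.g. '16.00z'), or None if absent."""
    match = re.search(r"(\d+(?:\.\d+)?)z", url)
    return float(match.group(1)) if match else None

# The URL below is illustrative only.
print(zoom_from_url("https://example.com/maps/#/@-33.86,151.21,16.00z"))  # -> 16.0
```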
https://docs.nearmap.com/display/ND/MapBrowser+Boundaries
2021-04-10T11:33:55
CC-MAIN-2021-17
1618038056869.3
[]
docs.nearmap.com
Please follow this procedure if Smart Launcher can’t verify your license and you are using more than one Google account on your device. Uninstall Smart Launcher; if you want, you can perform a backup first to avoid losing your configuration. Using a computer, open this page and log in using the same account you used to purchase the license. Click on “Install” in the web page, then wait for the install process to finish on your device. After the installation, check if Smart Launcher recognized your license. If it doesn’t, a reboot could be necessary. Since the issue causing this problem is related to how Google Play Store handles purchases in a multi-account environment, this procedure could help even if you’re experiencing the same problem with another app. Let us know if the procedure helped you in our community.
https://docs.smartlauncher.net/faq/multiple-google-accounts-licensing-issues
2021-04-10T12:09:21
CC-MAIN-2021-17
1618038056869.3
[]
docs.smartlauncher.net
Development¶ This section provides useful concepts for getting started digging into the code and contributing new functionality. We welcome contributors and hope these notes help make it easier to get started. Goals¶ bcbio-nextgen provides best-practice pipelines for automated analysis of high throughput sequencing data with the goal of being: - Quantifiable: Doing good science requires being able to accurately assess the quality of results and re-verify approaches as new algorithms and software become available. - Analyzable: Results feed into tools to make it easy to query and visualize the results. - Scalable: Handle large datasets and sample populations on distributed heterogeneous compute environments. - Reproducible: Track configuration, versions, provenance and command lines to enable debugging, extension and reproducibility of results. - Community developed: The development process is fully open and sustained by contributors from multiple institutions. By working together on a shared framework, we can overcome the challenges associated with maintaining complex pipelines in a rapidly changing area of research. - Accessible: Bioinformaticians, biologists and the general public should be able to run these tools on inputs ranging from research materials to clinical samples to personal genomes. During development we seek to maximize functionality and usefulness, while avoiding complexity. Since these goals are sometimes in conflict, it’s useful to understand the design approaches: - Support high level configurability but avoid exposing all program options. Since pipelines support a wide variety of tools, each with a large number of options, we try to define configuration variables at high level based on biological intent and then translate these into best-practice options for each tool. The goal is to avoid having an overwhelming number of input configuration options. - Provide best-practice pipelines that make recommended decisions for processing. Coupled with the goal of minimizing configuration parameters, this requires trust and discussion around algorithm choices. An example is bwa alignment, which uses bwa aln for reads shorter than 75bp and bwa mem for longer reads, based on recommendations from Heng Li. Our general goal is to encourage discussion and development of best-practices to make it easy to do the right thing. - Support extensive debugging output. In complex distributed systems, programs fail in unexpected ways even during production runs. We try to maximize logging to help identify and diagnose these types of unexpected problems. - Avoid making mistakes. This results in being conservative about decisions like deleting file intermediates. Coupled with extensive logging, we trade off disk usage for making it maximally easy to restart and debug problems. If you’d like to delete work or log directories automatically, we recommend doing this as part of your batch scripts wrapping bcbio-nextgen. - Strive for a clean, readable code base. We strive to make the code a secondary source of information after hand written docs. Practically, this means maximizing information content in source files while using in-line documentation to clarify as needed. - Focus on a functional coding style with minimal use of global mutable objects. This approach works well with distributed code and isolates debugging to individual functions rather than globally mutable state. - Make sure your changes integrate correctly by running the test suite before submitting a pull request.
The pipeline is automatically tested in Travis-CI, and a red label will appear in the pull request if the former causes any issue. Style guide¶ General: - Delete unnecessary code (do not just comment it out) - Refactor existing code to help deliver new functionality - Specify exact version numbers for dependencies Python: - Follow PEP 8 and PEP 20 - Limit all lines to a maximum of 99 characters - Add docstrings to each module - Follow PEP 257 for docstrings: - the """that ends a multiline docstring should be on a line by itself - for one-liner docstrings keep the closing """on the same line - Clarify function calls with keyword arguments for readability - Use type hints Modules¶ The most useful modules inside bcbio, ordered by likely interest: pipeline– Top level functionality that drives the analysis pipeline. main.pycontains top level definitions of pipelines like variant calling and RNAseq, and is the best place to start understanding the overall organization of the code. ngsalign– Integration with aligners for high-throughput sequencing data. We support individual aligners with their own separate modules. variation– Tools for variant calling. Individual variant calling and processing approaches each have their own submodules. rnaseq– Run RNA-seq pipelines, currently supporting TopHat/Cufflinks. provenance– Track third party software versions, command lines and program flow. Handle writing of debugging details. distributed– Handle distribution of programs across multiple cores, or across multiple machines using IPython. workflow– Provide high level tools to run customized analyses. They tie into specialized analyses or visual front ends to make running bcbio-nextgen easier for specific common tasks. broad– Code to handle calling Broad tools like GATK and Picard, as well as other Java-based programs. GitHub¶ bcbio-nextgen uses GitHub for code development, and we welcome pull requests. GitHub makes it easy to establish custom forks of the code and contribute those back. The Biopython documentation has great information on using git and GitHub for a community developed project. In short, make a fork of the bcbio code by clicking the Fork button in the upper right corner of the GitHub page, commit your changes to this custom fork and keep it up to date with the main bcbio repository as you develop. The GitHub help pages have detailed information on keeping your fork updated with the main GitHub repository (e.g.). After commiting changes, click New Pull Request from your fork when you’d like to submit your changes for integration in bcbio. Creating a separate bcbio installation¶ When developing, you’d like to avoid breaking your production bcbio instance. Use the installer to create a separate bcbio instance without downloading any data. Before installing the second bcbio instance, investigate your PATH and PYTHONPATH variables. It is better to avoid mixing bcbio instances in the PATH. Also watch ~/.conda/environments.txt. 
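One quick way to check whether a production instance is still visible in your environment is a small snippet like the following. It is an illustrative sketch; adjust the matching substring to your site's install path.

# Check for bcbio entries already present on PATH/PYTHONPATH before
# creating a second installation.
import os

for var in ("PATH", "PYTHONPATH"):
    entries = os.environ.get(var, "").split(os.pathsep)
    hits = [e for e in entries if "bcbio" in e]
    print(var, "->", hits or "no bcbio entries")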
To install in ${HOME}/local/share/bcbio (your location might be different, make sure you have ~30GB of disk quota there): wget python bcbio_nextgen_install.py ${HOME}/local/share/bcbio --tooldir=${HOME}/local --nodata --isolate Make soft links to the data from your production bcbio instance (your installation path could be different from /n/app/bcbio): ln -s /n/app/bcbio/biodata/genomes/ ${HOME}/local/share/genomes ln -s /n/app/bcbio/biodata/galaxy/tool-data ${HOME}/local/share/bcbio/galaxy/tool-data Add this directory to your PATH (note that it is better to clear you PATH from the path of the production bcbio instance and its tools): echo $PATH # use everything you need except of production bcbio export PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin: export PATH=${HOME}/local/share/bcbio/anaconda/bin:${HOME}/local/bin:$PATH Or directly call the testing bcbio: ${HOME}/local/share/bcbio/anaconda/bin/bcbio_nextgen.py. Injecting bcbio code into bcbio installation¶ To install from your bcbio-nextgen source tree for testing do: # make sure you are using the development bcbio instance which bcbio_python # local git folder cd ~/code/bcbio-nextgen bcbio_python setup.py install One tricky part that we don’t yet know how to work around is that pip and standard setup.py install have different ideas about how to write Python eggs. setup.py install will create an isolated python egg directory like bcbio_nextgen-1.1.5-py3.6.egg, while pip creates an egg pointing to a top level bcbio directory. Where this gets tricky is that the top level bcbio directory takes precedence. The best way to work around this problem is to manually remove the current pip installed bcbio-nextgen code ( rm -rf /path/to/anaconda/lib/python3.6/site-packages/bcbio*) before managing it manually with bcbio_python setup.py install. We’d welcome tips about ways to force consistent installation across methods. Documentation¶ To build this documentation locally and see how it looks like you can do so by installing the dependencies: cd docs conda install --file requirements-local.txt --file requirements.txt and running: make html The documentation will be built under docs/_build/html, open index.html with your browser to load your local build. Testing¶ The test suite exercises the scripts driving the analysis, so are a good starting point to ensure correct installation. Tests use the pytest framework. The tests are available in the bcbio source code: git clone There is a small wrapper script that finds the pytest and other dependencies pre-installed with bcbio you can use to run tests: cd tests ./run_tests.sh You can use this to run specific test targets: ./run_tests.sh cancer ./run_tests.sh rnaseq ./run_tests.sh devel ./run_tests.sh docker Optionally, you can run pytest directly from the bcbio install to tweak more options. It will be in /path/to/bcbio/anaconda/bin/pytest. Pass -s to pytest to see the stdout log, and -v to make pytest output more verbose. The -x flag will stop the test at the first failure and --lf will run only the tests that failed the last go-around. Sometimes it is useful to drop into the debugger on failure, wihch you can do by setting -s --pdb. The tests are marked with labels which you can use to run a specific subset of the tests using the -m argument: pytest -m rnaseq To run unit tests: pytest tests/unit To run integration pipeline tests: pytest tests/integration To run tests which use bcbio_vm: pytest tests/bcbio_vm To see the test coverage, add the --cov=bcbio argument to pytest.. 
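The same pytest options can also be driven from Python instead of the wrapper script. The snippet below is a minimal sketch and assumes it is run from a bcbio checkout with the development environment active.

# Run the rnaseq-marked tests verbosely, stopping at the first failure,
# mirroring `pytest -m rnaseq -v -x` on the command line.
import sys
import pytest

exit_code = pytest.main(["-m", "rnaseq", "-v", "-x"])
sys.exit(exit_code)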
The environment variable BCBIO_TEST_DIR can put the tests in a different directory if the full path to the new directory is specified. For example: export BCBIO_TEST_DIR=$(pwd)/output will put the output in your current working directory/output. The test directory can be kept around after running by passing the --keep-test-dir flag. Adding tools¶ Aligner¶ Write new aligners within their own submodule inside the ngsalign directory. bwa.py is a good example to follow along with. There are two functions to implement, based on which type of alignment you’d like to allow: align_bam– Performs alignment given an input BAM file. Expected to return a sorted BAM output file. align– Performs alignment given FASTQ inputs (gzipped or not). This is generally expected to implement an approach with unix-pipe that minimizes intermediates and disk IO, returning a sorted BAM output file. For back-compatibility this can also return a text based SAM file. See the names section for more details on arguments. Other required implementation details include: galaxy_loc_file– Provides the name of the Galaxy loc file used to identify locations of indexes for this aligner. The automated installer sets up these loc files automatically. remap_index_fn– A function that remaps an index from the Galaxy location file into the exact one for this aligner. This is useful for tools which aren’t supported by a Galaxy .loc file but you can locate them relative to another index. Once implemented, plug the aligner into the pipeline by defining it as a _tool in bcbio/pipeline/alignment.py. You can then use it as normal by specifying the name of the aligner in the aligner section of your configuration input. Variant caller¶ New variant calling approaches live within their own module inside bcbio/variation. The freebayes.py implementation is a good example to follow for providing your own variant caller. Implement a function to run variant calling on multiple BAMs in an input region that takes the following inputs: align_bams– A list of BAM files to call simultaneously. items– List of datadictionaries associated with each of the samples in align_bams. Enables customization of variant calling based on sample configuration inputs. See documentation on the data dictionary for all of the information contained inside each dataitem. ref_file– Fasta reference genome file. assoc_files– Useful associated files for variant calling. This includes the DbSNP VCF file. It’s a named tuple mapping to files specified in the configuration. bcbio/pipeline/shared.py has the available inputs. region– A tuple of (chromosome, start, end) specifying the region to call in. out_file– The output file to write to. This should contain calls for all input samples in the supplied region. Once implemented, add the variant caller into the pipeline by updating caller_fns in the variantcall_sample function in bcbio/variation/genotype.py. You can use it by specifying it in the variantcaller parameter of your sample configuration. Adding new organisms¶ While bcbio-nextgen and supporting tools receive the most testing and development on human or human-like diploid organisms, the algorithms are generic and we strive to support the wide diversity of organisms used in your research. We welcome contributors interested in setting up and maintaining support for their particular research organism, and this section defines the steps in integrating a new genome. We also welcome suggestions and implementations that improve this process. 
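Before moving on to genome integration, here is a rough skeleton of the variant caller entry point just described. The module and function names are hypothetical and the body is a placeholder rather than a working caller.

# Hypothetical bcbio/variation/mycaller.py skeleton following the interface
# described above. A real implementation would build and run the caller's
# command line restricted to the supplied region; this placeholder only
# writes a VCF header.
def run_mycaller(align_bams, items, ref_file, assoc_files, region, out_file):
    """Call variants across align_bams in region, writing results to out_file.

    align_bams  -- BAM files to call simultaneously
    items       -- data dictionaries for each sample (configuration access)
    ref_file    -- FASTA reference genome
    assoc_files -- named tuple of associated files, e.g. the dbSNP VCF
    region      -- (chromosome, start, end) tuple restricting the calls
    out_file    -- output file containing calls for all input samples
    """
    chrom, start, end = region
    with open(out_file, "w") as handle:
        handle.write("##fileformat=VCFv4.2\n")
    return out_file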
Setup CloudBioLinux to automatically download and prepare the genome: - Add the genome database key and organism name to list of supported organisms in the CloudBioLinux configuration (config/biodata.yaml). - Add download details to specify where to get the fasta genome files (cloudbio/biodata/genomes.py). CloudBioLinux supports common genome providers like UCSC and Ensembl directly. Add the organism to the supported installs within bcbio (in two places): - for the initial installer (scripts/bcbio_nextgen_install.py) - for the updater (bcbio/install.py). Test installation of genomes by pointing to your local cloudbiolinux edits during a data installation: mkdir -p tmpbcbio-install ln -s ~/bio/cloudbiolinux tmpbcbio-install bcbio_nextgen.py upgrade --data --genomes DBKEY Add configuration information to bcbio-nextgen by creating a config/genomes/DBKEY-resources.yaml file. Copy an existing minimal template like canFam3 and edit with pointers to snpEff and other genome resources. The VEP database directory has Ensembl names. SnpEff has a command to list available databases: snpEff databases Finally, send pull requests for CloudBioLinux and bcbio-nextgen and we’ll happily integrate the new genome. This will provide basic integration with bcbio and allow running a minimal pipeline with alignment and quality control. We also have utility scripts in CloudBioLinux to help with preparing dbSNP (utils/prepare_dbsnp.py) and RNA-seq (utils/prepare_tx_gff.py) resources for some genomes. For instance, to prepare RNA-seq transcripts for mm9: bcbio_python prepare_tx_gff.py --genome-dir /path/to/bcbio/genomes Mmusculus mm9 We are still working on ways to best include these as part of the standard build and install since they either require additional tools to run locally, or require preparing copies in S3 buckets. Enabling new MultiQC modules¶ MultiQC modules can be turned on in bcbio/qc/multiqc.py. bcbio collects the files to be used rather than searching through the work directory to support CWL workflows. Quality control files can be added by using the datadict.update_summary_qc function which adds the files in the appropriate place in the data dict. 
For example, here is how to add the quality control reports from bismark methylation calling: data = dd.update_summary_qc(data, "bismark", base=biasm_file) data = dd.update_summary_qc(data, "bismark", base=data["bam_report"]) data = dd.update_summary_qc(data, "bismark", base=splitting_report) Files that can be added for each tool in MultiQC can be found in the MultiQC module documentation New release checklist¶ - [ ] pull from master to make sure you are up to date - [ ] run integration tests: pytest -s -x tests/integration/test_automated_analysis.py - [ ] run unit tests: pytest -s -x tests/unit - [ ] update version in setup.py and docs/conf.py - [ ] add release date to HISTORY.md and start new (in progress) section - [ ] commit and push changes to bcbio - [ ] draft new release, copy and paste changes from HISTORY.md to the changelog - [ ] wait for bioconda-recipes to pick up the new release - [ ] review and approve bioconda recipe once it passes the tests - [ ] merge recipe by commenting @bioconda-bot please merge - [ ] wait until new version is available on bioconda - [ ] update requirements-conda.txt - [ ] update requirements.txt - [ ] push changes to bcbio - [ ] update BCBIO_VERSION in bcbio_docker - [ ] update BCBIO_REVISION in bcbio_docker - [ ] push changes to bcbio_docker - [ ] make sure the image builds successfully Standard function arguments¶ names¶ This dictionary provides lane and other BAM run group naming information used to correctly build BAM files. We use the rg attribute as the ID within a BAM file: {'lane': '7_100326_FC6107FAAXX', 'pl': 'illumina', 'pu': '7_100326_FC6107FAAXX', 'rg': '7', 'sample': 'Test1'} data¶ The data dictionary is a large dictionary representing processing, configuration and files associated with a sample. The standard work flow is to pass this dictionary between functions, updating with associated files from the additional processing. Populating this dictionary only with standard types allows serialization to JSON for distributed processing. The dictionary is dynamic throughout the workflow depending on the step, but some of the most useful key/values available throughout are: config– Input configuration variables about how to process in the algorithmsection and locations of programs in the resourcessection. dirs– Useful directories for building output files or retrieving inputs. metadata– Top level metadata associated with a sample, specified in the initial configuration. genome_resources– Naming aliases and associated files associated with the current genome build. Retrieved from organism specific configuration files ( buildname-resources.yaml) this specifies the location of supplemental organism specific files like support files for variation and RNA-seq analysis. It also contains information the genome build, sample name and reference genome file throughout. 
Here’s an example of these inputs: {'config': {'algorithm': {'aligner': 'bwa', 'callable_regions': 'analysis_blocks.bed', 'coverage_depth': 'low', 'coverage_interval': 'regional', 'mark_duplicates': 'samtools', 'nomap_split_size': 50, 'nomap_split_targets': 20, 'num_cores': 1, 'platform': 'illumina', 'quality_format': 'Standard', 'realign': 'gkno', 'recalibrate': 'gatk', 'save_diskspace': True, 'upload_fastq': False, 'validate': '../reference_material/7_100326_FC6107FAAXX-grade.vcf', 'variant_regions': '../data/automated/variant_regions-bam.bed', 'variantcaller': 'freebayes'}, 'resources': {'bcbio_variation': {'dir': '/usr/share/java/bcbio_variation'}, 'bowtie': {'cores': None}, 'bwa': {'cores': 4}, 'cortex': {'dir': '~/install/CORTEX_release_v1.0.5.14'}, 'cram': {'dir': '/usr/share/java/cram'}, 'gatk': {'cores': 2, 'dir': '/usr/share/java/gatk', 'jvm_opts': ['-Xms750m', '-Xmx2000m'], 'version': '2.4-9-g532efad'}, 'gemini': {'cores': 4}, 'novoalign': {'cores': 4, 'memory': '4G', 'options': ['-o', 'FullNW']}, 'picard': {'cores': 1, 'dir': '/usr/share/java/picard'}, 'snpEff': {'dir': '/usr/share/java/snpeff', 'jvm_opts': ['-Xms750m', '-Xmx3g']}, 'stampy': {'dir': '~/install/stampy-1.0.18'}, 'tophat': {'cores': None}, 'varscan': {'dir': '/usr/share/java/varscan'}, 'vcftools': {'dir': '~/install/vcftools_0.1.9'}}}, 'genome_resources': {'aliases': {'ensembl': 'human', 'human': True, 'snpeff': 'hg19'}, 'rnaseq': {'transcripts': '/path/to/rnaseq/ref-transcripts.gtf', 'transcripts_mask': '/path/to/rnaseq/ref-transcripts-mask.gtf'}, 'variation': {'dbsnp': '/path/to/variation/dbsnp_132.vcf', 'train_1000g_omni': '/path/to/variation/1000G_omni2.5.vcf', 'train_hapmap': '/path/to/hg19/variation/hapmap_3.3.vcf', 'train_indels': '/path/to/variation/Mills_Devine_2hit.indels.vcf'}, 'version': 1}, 'dirs': {'fastq': 'input fastq directory', 'galaxy': 'directory with galaxy loc and other files', 'work': 'base work directory'}, 'metadata': {'batch': 'TestBatch1'}, 'genome_build': 'hg19', 'name': ('', 'Test1'), 'sam_ref': '/path/to/hg19.fa'} Processing also injects other useful key/value pairs. Here’s an example of additional information supplied during a variant calling workflow: {'prep_recal': 'Test1/7_100326_FC6107FAAXX-sort.grp', 'summary': {'metrics': [('Reference organism', 'hg19', ''), ('Total', '39,172', '76bp paired'), ('Aligned', '39,161', '(100.0\\%)'), ('Pairs aligned', '39,150', '(99.9\\%)'), ('Pair duplicates', '0', '(0.0\\%)'), ('Insert size', '152.2', '+/- 31.4')], 'pdf': '7_100326_FC6107FAAXX-sort-prep-summary.pdf', 'project': 'project-summary.yaml'}, 'validate': {'concordant': 'Test1-ref-eval-concordance.vcf', 'discordant': 'Test1-eval-ref-discordance-annotate.vcf', 'grading': 'validate-grading.yaml', 'summary': 'validate-summary.csv'}, 'variants': [{'population': {'db': 'gemini/TestBatch1-freebayes.db', 'vcf': None}, 'validate': None, 'variantcaller': 'freebayes', 'vrn_file': '7_100326_FC6107FAAXX-sort-variants-gatkann-filter-effects.vcf'}], 'vrn_file': '7_100326_FC6107FAAXX-sort-variants-gatkann-filter-effects.vcf', 'work_bam': '7_100326_FC6107FAAXX-sort-prep.bam'} Parallelization framework¶ bcbio-nextgen supports parallel runs on local machines using multiple cores and distributed on a cluster using IPython using a general framework. The first parallelization step starts up a set of resources for processing. On a cluster this spawns a IPython parallel controller and set of engines for processing. 
The prun (parallel run) start function is the entry point to spawning the cluster and the main argument is a parallel dictionary which contains arguments to the engine processing command. Here is an example input from an IPython parallel run: {'cores': 12, 'type': 'ipython' 'progs': ['aligner', 'gatk'], 'ensure_mem': {'star': 30, 'tophat': 8, 'tophat2': 8}, 'module': 'bcbio.distributed', 'queue': 'batch', 'scheduler': 'torque', 'resources': [], 'retries': 0, 'tag': '', 'timeout': 15} The cores and type arguments must be present, identifying the total cores to use and type of processing, respectively. Following that are arguments to help identify the resources to use. progs specifies the programs used, here the aligner, which bcbio looks up from the input sample file, and gatk. ensure_mem is an optional argument that specifies minimum memory requirements to programs if used in the workflow. The remaining arguments are all specific to IPython to help it spin up engines on the appropriate computing cluster. A shared component of all processing runs is the identification of used programs from the progs argument. The run creation process looks up required memory and CPU resources for each program from the Resources section of your bcbio_system.yaml file. It combines these resources into required memory and cores using the logic described in the Memory management section of the parallel documentation. Passing these requirements to the cluster creation process ensures the available machines match program requirements. bcbio-nextgen’s pipeline.main code contains examples of starting and using set of available processing engines. This example starts up machines that use samtools, gatk and cufflinks then runs an RNA-seq expression analysis: with prun.start(_wprogs(parallel, ["samtools", "gatk", "cufflinks"]), samples, config, dirs, "rnaseqcount") as run_parallel: samples = rnaseq.estimate_expression(samples, run_parallel) The pipelines often reuse a single set of machines for multiple distributed functions to avoid the overhead of starting up and tearing down machines and clusters. The run_parallel function returned from the prun.start function enables running on jobs in the parallel on the created machines. The ipython wrapper code contains examples of implementing this. It is a simple function that takes two arguments, the name of the function to run and a set of multiple arguments to pass to that function: def run(fn_name, items): The items arguments need to be strings, lists and dictionaries to allow serialization to JSON format. The internals of the run function take care of running all of the code in parallel and returning the results back to the caller function. In this setup, the main processing code is fully independent from the parallel method used so running on a single multicore machine or in parallel on a cluster return identical results and require no changes to the logical code defining the pipeline. During re-runs, we avoid the expense of spinning up processing clusters for completed tasks using simple checkpoint files in the checkpoints_parallel directory. The prun.start wrapper writes these on completion of processing for a group of tasks with the same parallel architecture, and on subsequent runs will go through these on the local machine instead of parallelizing. The processing code supports these quick re-runs by checking for and avoiding re-running of tasks when it finds output files. 
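To illustrate the calling convention described above, the stand-in below mimics how a pipeline hands a function name and a list of JSON-serializable argument sets to run_parallel. It is a toy sketch, not the multicore or IPython implementation.

# Toy stand-in for run_parallel: dispatch fn_name over a list of argument
# sets and gather the results, as the real wrappers do across cores or engines.
def fake_run_parallel(fn_name, items):
    print("dispatching %d %s task(s)" % (len(items), fn_name))
    return [args[0] for args in items]  # pretend each task returns its input

samples = [{"description": "Test1"}, {"description": "Test2"}]
samples = fake_run_parallel("process_alignment", [[data] for data in samples])
print(samples)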
Plugging new parallelization approaches into this framework involves writing interface code that handles the two steps. First, create a cluster of ready to run machines given the parallel function with expected core and memory utilization: num_jobs– Total number of machines to start. cores_per_job– Number of cores available on each machine. mem– Expected memory needed for each machine. Divide by cores_per_jobto get the memory usage per core on a machine. Second, implement a run_parallel function that handles using these resources to distribute jobs and return results. The multicore wrapper and ipython wrapper are useful starting points for understanding the current implementations.
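A bare-bones sketch of those two pieces might look like the following. It is illustrative only; the real reference implementations are the multicore and ipython wrappers mentioned above.

# Illustrative skeleton of a new parallelization backend.
def start_cluster(num_jobs, cores_per_job, mem):
    """Start num_jobs machines with cores_per_job cores and mem GB each.

    Memory per core on a machine is mem / cores_per_job.
    """
    cluster = {"jobs": num_jobs, "cores": cores_per_job, "mem": mem}
    print("started cluster:", cluster)
    return cluster

def run_parallel(fn_name, items):
    """Distribute fn_name over items on the cluster and return the results."""
    # A real backend would serialize each argument set to JSON, ship it to a
    # worker, and collect the outputs; this placeholder just echoes the inputs.
    return list(items)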
https://bcbio-nextgen.readthedocs.io/en/latest/contents/development.html
2021-04-10T11:05:36
CC-MAIN-2021-17
1618038056869.3
[]
bcbio-nextgen.readthedocs.io
How to set DNS Recursion by using the Root Hints option. Setting DNS Recursion Recursion allows a DNS server to return queries about names for which it is not authoritative. Root Hints are the Address Manager equivalent of the Windows recursion setting. Unlike the other server-level settings that Address Manager imports at the Windows server level, the Recursion setting is imported at the view level. If you configured the names and IP addresses of the root name servers on the Root Hints tab of the Managed Windows server, these are added to the Root Hints option in Address Manager. Several other features in Windows DNS require recursion. These include forwarding, conditional forwarding, and stub zones. Configuring any of these settings enables recursion on the Managed Windows server. Note: If you do not configure forwarding, forwarding zones (conditional forwarding) or stub zones in Address Manager, or if you do not assign the Root Hints option, Recursion is disabled during deployment. Note: The DNS root name servers change from time to time. In the event of a change to one or more of the root servers, the Windows DNS server may receive an update file from Microsoft. Unless Address Manager is also configured with these changes, it overwrites the Managed Windows server with out-of-date information during the next deployment. You must ensure that the list of name servers in Address Manager (Root Hints option) is always up-to-date. To set the Root Hints option: - Select the DNS tab. Tabs remember the page you last worked on, so select the DNS tab again to ensure you are working with the Configuration information page. - Click a view. The DNS View page opens. - Click the Deployment Options tab. - Click New, then select DNS Option. - From the Option drop-down menu, select Root Hints. - In the Server section, determine the servers to which this option applies: - To apply the option to all servers in the configuration select All Servers.Attention: Server Groups only support BlueCat DNS/DHCP Servers. - To apply the option to a specific server select Specific Server, and then select a server from the drop-down list. - Click Add to add the deployment option and return to the Deployment Options tab, or click Add Next to add another deployment option.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/View-level-deployment-options/9.0.0
2021-04-10T11:30:44
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
AMP With the Accelerated Mobile Pages (AMP) plugin, Brightspot delivers published websites to mobile devices much faster than with traditional HTML. Overview of AMP Web browsers, either on the desktop or on mobile devices, can suffer from latent loading of content, particularly with large media files. You can mitigate this effect by incorporating AMP into your site, which is an open-source library that provides the tools to deliver web pages almost instantaneously. AMP accelerates the delivery of an entire web page by requiring code to include certain features and to exclude others. Using AMP, your web pages gain the following performance improvements: Faster layout—Because AMP includes instructions for instantly laying out your web page, the text does not jump around after larger files arrive. Image previews—AMP providers often make low-resolution previews of your images and deliver them first before delivering the final images. The preview gives visitors a cue that the final image is about to appear, and even provides some context for the text they started reading. Pages appear in the search results carousel—Web pages implementing Google's version of AMP can appear in a carousel layout of search results along with a "quick view." Visitors see the product without being directed to your website, allowing for a shorter time-to-engagement. Content cached on Google servers—Google caches some of your AMP content, which can result in delivery faster than from your own server. Not all of your web pages or websites need AMP, and some of Brightspot's standard features provide AMP-like effects. As a best practice, evaluate your content, target audience, and competition before integrating AMP into your Brightspot project. Regardless, once you configure the AMP plugin and develop AMP-compliant templates, Brightspot uses those templates to deliver content almost instantaneously to your visitors. Configuring AMP Administrators perform these tasks. If your version of Brightspot is missing this feature, contact your Brightspot representative. From the Navigation Menu, expand Admin, and select Sites & Settings. In the Sites widget, click the site for which you want to enable AMP, or select Global to enable AMP for all sites. Under Front-End, under AMP, do the following: Mark Enabled. From the Types drop-down list, select the content types to which AMP applies. Click Save. Building AMP templates Developers perform this task. The following steps are required to build and deploy AMP: create an AMP-compliant template, and provide a link from the native template to the AMP template. After implementing these steps, the Brightspot server intercepts requests for native templates that have a corresponding AMP-compliant template, and delivers the latter to the visitor. Step 1: Create an AMP-compliant template After completing the standard (non-AMP) template, create a parallel AMP-compliant template in the same directory. The filename must end with the compound extension .amp.hbs. For examples and tools for developing AMP-compliant templates, see the following resources: Step 2: Add a discovery link to an AMP template In the native template's <head> tag, add a <link> tag to the AMP template.
https://docs.brightspot.com/4.2/en/plugins-guide/amp.html
2021-04-10T12:05:12
CC-MAIN-2021-17
1618038056869.3
[]
docs.brightspot.com
Ford Guide Here is a guide to supported Ford models and regions, information on the necessary connectivity subscriptions, and the available data use-cases. Available data points You can find a breakdown of all available data points in the Auto API availability for Ford table. Eligible Models The following European-market Ford models are eligible. In order to send data, a vehicle must have an active FordPass Connect subscription. All models come with a complimentary 2-year subscription to FordPass Connect. Supported Markets Ford customers in the European Economic Area can grant 3rd-party access to vehicle data. API Refresh Rate All eligible Ford models refresh the data when the engine is turned on, 60 seconds after an engine-on event, and when the engine is turned off. In addition, the data is updated if a new dashboard warning light comes on. API Pricing You are presented with the pricing for your data points selection within the platform when creating a production app. The pricing is set per vehicle per month and varies according to the data use-case. Data Points You can find a breakdown of all available data points in the following table: Auto API availability for Ford. Each request returns data points and corresponding timestamps; the timestamps indicate the moment the data was transferred from the vehicle. Data points are selected via data buckets which have been tailored for specific use cases. The following data buckets are available: Pay-as-you-drive Enhanced insurance Logbook Maintenance Webhooks The webhooks below are currently in production. Example webhook The following example shows the JSON content of a vehicle_location_changed event being delivered. { "vehicle": { "vin": "1HMCF6112HA3FBBCC", "serial_number": "BB5EAC44D33F205A87" }, "event": { "type": "vehicle_location_changed", "received_at": "2019-02-20T09:13:33.563772Z", "action": null }, "application": { "id": "A77294AC8DA324FB46DA98921" } }
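A minimal sketch of receiving such a payload on your side might look like the following. The endpoint path, framework choice, and handling logic are assumptions for illustration and are not part of the High Mobility API.

# Minimal Flask sketch (illustrative) for receiving the webhook shown above.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/high-mobility", methods=["POST"])
def handle_event():
    payload = request.get_json(force=True)
    vin = payload["vehicle"]["vin"]
    event_type = payload["event"]["type"]
    if event_type == "vehicle_location_changed":
        # Hand off to your own processing, e.g. refresh this vehicle's location.
        print("Location update received for %s" % vin)
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    app.run(port=8000)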
https://docs.high-mobility.com/guides/platform/ford/
2021-04-10T11:46:03
CC-MAIN-2021-17
1618038056869.3
[]
docs.high-mobility.com
You might have noticed that when you search for certain keywords or content, Bing highlights them in the results, and it is smart enough to highlight exact code snippets from Stack Overflow. It would be nice to work with the Bing team to improve indexing for this website and, in addition, to highlight answers on the Bing results page the way it is done for other topics. This highlighting is known as search tips and is very valuable for users, since they can get the answer at a glance.
https://docs.microsoft.com/en-us/answers/content/idea/252614/integrate-microsoft-qampa-with-bing.html
2021-04-10T13:13:33
CC-MAIN-2021-17
1618038056869.3
[]
docs.microsoft.com
Arduino BT ATmega328 (board ID: btatmega328) The BT ATmega328 has an on-board debug probe and IS READY for debugging. You don’t need to use or buy an external debug probe.
https://docs.platformio.org/en/latest/boards/atmelavr/btatmega328.html
2021-04-10T11:30:39
CC-MAIN-2021-17
1618038056869.3
[]
docs.platformio.org
Proactive Customer Service Operations Proactively trigger case workflows and notify customers of any issues to resolve issues faster and lower inbound call volume. Customer Service Management provides an integration with ITOM Event Management. This integration enables you to improve the customer experience by breaking down the silos that exist between the processes and systems used by front line customer service and back-office operations teams. By monitoring issues and creating cases proactively, you can be one step ahead of your customers and solve issues faster. Proactive Customer Service Operations enables you to track the digital services sold to and in use by your customers. This is referred to as install base. If you identify any service disruptions to a customer's install base, you can proactively create a case to notify them. These cases are resolved and closed in the same way as customer-reported cases. When multiple customers are affected, you can use the major issue management process. Network Operations Center engineers monitor the service health of a customer's install base using the Event Management dashboard. For more details, see Monitor service health. Network Operations Center engineers manage alerts in Workspace, identify which customers affected, and either manually create a proactive case for customer service to review or set up an alert rule to automatically create a proactive case when an incident is created from an alert. For more details, see Manage alert lists. Your customers can use the Customer Service Portal to view proactive cases that you opened on their behalf and interact with you using the proactive case. Learn about Proactive Customer Service Operations with Event Management from the following video tutorial. Proactive Customer Service Operations with Event Management Activation information Activate the Proactive Customer Service Operations plugin (com.snc.proactive_cs_ops) to enable this feature. Activate Proactive Customer Service OperationsActivate the Proactive Customer Services plugin to use Proactive Customer Service Operations. Activate an alert rule to automatically create a proactive case from an alertConfigure an alert rule to automatically create a proactive case when an incident is created from an alert that has one or more affected install base items. Create a proactive case from an alertCreate a case for customer install base affected by an alert to anticipate customer issues and address them proactively.Related tasksManage a proactive case created from an alertCreate a case for install base from the Customer Service homepageCreate a product case from the Customer Service homepageRelated conceptsProactive Customer Service Operations dashboard
https://docs.servicenow.com/bundle/newyork-customer-service-management/page/product/customer-service-management/concept/proactive-service-operations.html
2021-04-10T12:22:20
CC-MAIN-2021-17
1618038056869.3
[]
docs.servicenow.com
Monitors enable you to specify conditions or events during motion capture sessions, and to interact with them. For example, you can create a monitor for a graphed model output (such as the subject raising an arm to a certain height, or the subject's left knee angle exceeding 180 degrees), and then configure it on the Monitors tab of the Communications window to trigger one or more actions (such as an event on the time bar or a tone sounding) when the model output matches a condition you specify. You create monitors in the Graph view. You can then configure the monitors in the Monitors tab of the Communications pane. For an example of creating a monitor, see Create a joint range overlay monitor (part of a biomechanics workflow). To create a monitor: - Decide on the elements you wish to monitor (trajectories, model outputs, devices, or joints). - In a Graph view, click the Differentiate the graph button and from the dropdown list, select either: - The current variable ( x ); or - Its first derivative, that is, its velocity or angular velocity (x'), or - Its second derivative, that is, its acceleration or angular acceleration (x") For example, a graph of a trajectory will have X, Y, and Z axes, but when differentiated to x' (velocity), the axes will change to X', Y', and Z' axes. - Click the Choose the components button and select the graph components that you want to plot in the Graph view (the options depend on your choice in the previous step). - On the Graph view toolbar, click the Create a Monitor button. The monitor is added to the Monitors list in the Monitors communications pane. The monitor takes the name of the component you selected. For example, if the Graph view you've selected shows X, Y, and Z for the LeftAnkleForce, three monitors are created: LAnkleForce:X, LAnkleForce:Y, and LAnkleForce:Z.Tip If you select multiple components for your Graph view, a monitor is created for each component (e.g., x, y, z). You can select and remove one or more monitors that you don't need from the Monitors list, or click Clear to remove all of them. You can now configure the monitor, for example to specify a monitor threshold and trigger conditions that will trigger an action.
https://docs.vicon.com/display/Nexus29/Create+a+monitor
2021-04-10T11:43:45
CC-MAIN-2021-17
1618038056869.3
[]
docs.vicon.com
Adding SUTs in Eggplant Manager Before selecting a system under test (SUT) for a test run, you must add it to Eggplant Manager so that the appropriate connection is made at runtime. You add SUTs on the SUTs page by going to Admin > SUTs. For general information about working with the SUTs administration page, see Working with Systems Under Test (SUTs). You have several methods for adding new SUTs, as well as many types of SUTs that Eggplant Manager supports. The method you choose for adding a SUT might be determined by the type of SUT you're adding as well as other factors. Be sure to read the following sections carefully so that you can make the appropriate choice. Using SUT Discovery to Add SUTs Use the SUT Discovery feature on the SUTs administration page to find test environments and automatically add basic information about the environment. This method is a good choice for adding multiple SUTs as well as for adding mobile device SUTs because much of the information on the New SUT dialog box is filled in for you. The SUT Discovery section of the SUT administration page in Eggplant Manager Choose Discover LAN SUTs, which locates any environment reachable via your LAN, or Discover USB SUTs, which locates mobile devices (Android or iOS) that are currently connected via USB, or Import SUTs from Eggplant Functional. Follow the appropriate instructions shown below to add SUTs with the automatic discovery feature. Step by Step: Adding LAN SUTs with SUT Discovery The following steps guide you through adding SUTs from your LAN by using the SUT Discovery feature: - In Eggplant Manager, go to the SUTs page by choosing Admin > SUTs, then locate the SUT Discovery section. - Click Discover LAN SUTs. Eggplant Manager searches for all environments reachable via the LAN. You'll see a progress bar during the search. When the search is complete, the Find LAN SUTs dialog box shows the list of SUTs available to add. - Select the checkbox for the SUTs you want to add, then click Save Selected. The new SUTs are added to the SUTs list at the top of the page. The IP address is added automatically, and the connection type defaults to VNC. - Add or update information about the device in the Edit SUT dialog box, if necessary. The SUT is now ready to be used for running tests through Eggplant Manager. Step by Step: Adding Mobile Device SUTs with SUT Discovery The following steps guide you through adding a USB SUT for a mobile device using SUT Discovery: - Connect the mobile device via USB to the computer where your Eggplant Manager server is running. - In Eggplant Manager, go to the SUTs page by choosing Admin > SUTs, then locate the SUT Discovery section. - Click Discover USB SUTs. Eggplant Manager searches for any devices connected to the Eggplant Manager machine via USB. Eggplant Manager creates a new SUT for each device it finds. The new SUT is added to the SUTs list at the top of the page.Note: If more than one device is detected, a new SUT is added for each one. - Add or update information about the device in the Edit SUT dialog box, if necessary. The mobile device SUT is now ready to be used for running tests through Eggplant Manager. Step by Step: Importing SUTs from Eggplant Functional Follow these steps to import SUTs that you've defined or added in the Eggplant Functional Connection List: - Click Import SUTs from Eggplant Functional. The Import SUTs window opens. - Click Choose File to select your Eggplant Functional preferences file. 
Follow the instructions to export the preferences file if you haven't done so previously. - Select the SUT you want to import, then click Submit. Your new SUT is added to the SUTs list at the top of the page. - Add or update information about the SUT in the Edit SUT dialog box, if necessary. The imported SUT is now ready to be used for running tests through Eggplant Manager. Manually Adding SUTs When you have individual SUTs to add, use the New SUT dialog box on the SUTs administration page. This method requires you to input all the details for the connection, and lets you customize all the information as you enter it. You must use the New SUT dialog box to create a rule-based SUT, as well as certain configurations for mobile device SUTs and VMs. Step by Step: Adding SUTs with the New SUT Dialog Box The following instructions guide you through adding a SUT. - In Eggplant Manager, go to Admin > SUTs. - In the SUTs section, click New SUT to open the New SUT dialog box. - Specify a unique name for the SUT in the Name field. - Select the type of SUT you are adding from the Type drop-down list. The next section changes depending on the SUT type you choose. Your choices are:VNC/RDP: These SUT types let you specify connection information to specific environments. Read more . . . These two SUT types are closely related. Typically, you use these SUT types for environments, which can be physical machines or virtual machines (VMs), that you connect to via a specific IP address or hostname and on a specific port. Complete the following fields to add either a VNC SUT or an RDP SUT: Rule: This SUT type matches available SUTs to your specified criteria at test runtime. - IP Address: Enter the IP address or hostname of the SUT. - Port: Specify the port for your connection. By default, 5900 is entered for VNC connections, and 3389 is entered for RDP connections. - Username: (RDP only) Enter the Windows username needed for connections to this SUT, if one is required. (Optional) - Password: Enter the password for this SUT, if one is required. (Optional) - Screen Size: (RDP only) Enter the Width and Height for the RDP connection window. (Optional) Read more . . . With rule-based SUTs, you set matching criteria, or rules, then Eggplant Manager selects a specific SUT from your list that meets the requirements and is available at test runtime. Complete the following fields to create a rule-based SUT: - Match: Select the attribute you want to match from the drop-down list. Most of the attributes that appear on the New SUT dialog box are available for matching, and if you have created custom SUT Tags, they appear in the list as well. - Value: Enter the specific value for the attribute that you want to match. Matches are case-sensitive and must be exact. The New SUT dialog box showing a rule-based SUT for Windows 10 environments To add additional rules, click the green plus (+) button to add a new Match and Value pair. Every rule you create must be true for the overall match to be valid. To remove a rule, click the red X button. Beneath the Match and Value fields, Eggplant Manager automatically shows the valid matches based on the current rule set. These are the SUTs that Eggplant Manager chooses from when you start a test with this rule-based SUT. 
You won't know the specific "fixed" SUT until the test begins.Note: If you need a test to run in a specific SUT, it's always best to specify that SUT rather than to use a rule-based SUT.Note: When you select the Rule option, the New SUT dialog box changes so you no longer have options on the right side of the window, such as OS and Version. Instead, you would make rules to match those fields in other SUTs.TCP: This SUT type lets you use different functional test automation tools, such as Selenium, Appium, and other tools integrating with Eggplant Functional that require a TCP connection, with your Eggplant Automation Cloud-based testing environment. Read more... Select this SUT type if you currently use a variety of functional test automation tools and know that a tool requires a TCP connection. TCP SUTs let Eggplant Automation Cloud manage access for 3rd party testing solutions that use TCP connections. After a reservation to a 3rd-party tool is active (connected using TCP), you can run scripts from that 3rd party tool using the reservation IP address and port. Make entries in the following fields to add a TCP SUT: - IP Address: Enter the IP address or hostname of the SUT. - Port: Specify the port for your connection. By default, 5900 is entered for TCP connections. The TCP setting dialog box for connecting Eggplant Functional to test automation tools.Mobile SUT: Use this SUT type to add Android and iOS device SUTs. Read more . . . Use the Mobile SUT option to add Android and iOS device SUTs. For Eggplant Manager to connect to these devices, you also need to install the appropriate support application (Android Gateway or iOS Gateway). With a properly configured mobile device SUT, Eggplant Manager launches the support application, makes the connection to the device, and then runs the test.Note: The Mobile SUT option appears only if either Android Gateway or iOS Gateway (or both) is installed on the machine where Eggplant Manager is running. Complete the following fields to add a mobile device SUT: - Gateway: Choose iOS Gateway to connect to iOS mobile devices or Android Gateway to connect to Android mobile devices.Note: The appropriate application must be installed on the machine where Eggplant Manager is running for it to appear in the list. - SUT Identifier: Enter the Unique Device ID (UDID) for iOS devices or the serial number for Android devices. You should be able to find this information through iOS Gateway, Android Gateway, or through the device's settings. If you use SUT Discovery to add a mobile device SUT, this information is filled in automatically. Adding a new Mobile SUT for an iOS device in Eggplant Manager. For more information about using mobile device SUTs for tests, see Mobile Device SUTs.VirtualBox Virtual Machines: This SUT type allows Eggplant Manager to start and stop VMs for use at test runtime. Read more . . . This SUT type requires that you use Oracle VM VirtualBox for the VMs you wish to manage with this method. VirtualBox VMs can run Windows, Mac OS, or Linux operating systems, and you can set the connection type as either VNC or RDP. If you install the VirtualBox Extension Pack, you also have the Auto RDP connection option, which lets VirtualBox make the connection to the virtual environment.Note: The VirtualBox Virtual Machine SUT type appears only if VirtualBox is installed on the machine where Eggplant Manager is running. 
Complete the following fields to add a VirtualBox Virtual Machine SUT: - VM Template: The drop-down list shows all available appliances (VMs) that are currently in VirtualBox. - Connection: Select the connection method, either Auto RDP, VNC, or RDP, from the drop-down list. Note, however, that you won't have the Auto RDP option unless you have installed the VirtualBox Extension Pack. With Auto RDP, VirtualBox makes the connection to the SUT, so no further information is required. The following fields are enabled only if you select VNC or RDP. - IP Address: Enter the IP address or hostname to connect to this VM. - Port: Enter the port over which you connect to this VM. By default, 5900 is entered for a VNC connection, and 3389 is entered for an RDP connection. - Password: Enter the password for this SUT, if one is required. The Virtual Machine section of the New SUT dialog box in Eggplant Manager. When you add a Virtual Machine SUT, a clone of the appliance is created in VirtualBox after you click Create (step 9 below). The new SUT is added to the SUTs list and is marked Inactive until the cloning is complete, when it switches to Active.Note: You can create multiple SUTs from the same VM Template (appliance). However, each SUT you add creates a clone of the original appliance, so you will want to watch the amount of disk space you are using. For more information about using the embedded Virtual Machines feature, see Setting up VirtualBox SUTs.VMware Virtual Machines: This SUT type allows Eggplant Manager to start and stop VMs for use at test runtime. Read more . . . This SUT type requires that you use VMware Fusion or Fusion Pro (Mac) or Workstation Pro (Windows or Linux) for the VMs you wish to manage with this method. These VMs can run Windows or Mac operating systems, and you can set the connection type as either VNC or RDP. Complete the following fields to add a VMware virtual machine SUT: Note: Your list might also include Mobile Emulator. This SUT type for Android and iOS mobile emulators is available only if you have enabled experimental features on the System Preferences page. Each emulator platform has additional requirements as well. - Connection: Select the connection method, either VNC, or RDP, from the drop-down list. Note that VMware virtual machines feature a built-in VNC server. - File Browser: Selecting this box adds the Virtual Machine Directory Path field to the New SUT window. Click the folder to navigate to the location of your VMware virtual machine. Note that you can also enter this path in Admin > System Preferences. - IP Address: Enter the IP address or hostname to connect to this VM. - Port: Enter the port over which you connect to this VM. - Password: Enter the password for this SUT, if one is required. - Select the Active checkbox if you want this SUT to be available to run tests. If the checkbox is not selected, the SUT definition remains but the SUT is unavailable for tests. Default: selected. - Set group membership in the Groups box. When you add new SUTs, the Default Group is automatically selected. Be sure to remove SUTs from this group and add them to your own groups if you intend to use groups to control user access to specific SUTs. For information about using groups to manage access, see Working with Eggplant Automation Cloud Groups. - Enter descriptive text in the Description field, if desired. This field is optional. However, you can use it to provide additional information about the SUT. 
- On the right side of the New SUT dialog box, fill in the additional information about the SUT. These fields include: Note: Although these fields are optional, you should fill them all in if you plan to use Test Advisor to determine coverage. - Manufacturer - Model - OS - Version Note: If your SUT is a mobile device connected using VNC, click Poll Data to have Eggplant Manager automatically populate these fields. If you have custom SUT Tags in your environment, they also appear here. Although these fields are optional, any information you enter can be used to create rule-based SUTs as well as for filtering the SUTs list, so it's a good idea to include as much information as you can. - When you finish filling in the information for your new SUT, click Create. Your new SUT appears in the SUTs list. The New SUT dialog box in Eggplant Manager. Editing and Removing SUTs To edit a SUT, select the SUT in the SUTs list, then click Edit SUT. You can also double-click the SUT in the list. The Edit SUT dialog box contains the same fields as the New SUT dialog box. To remove an existing SUT, select it in the list and click Remove SUT. You can also click the Remove button on the Edit SUT dialog box. Creating SUT Tags If you want to record additional information or attributes for your SUTs, create SUT tags. SUT tags are custom fields that, when created, display on the Create and Edit SUT panels. For example, you might add a field named Device to indicate the type of device as shown in the sample screen below. You can also create SUT tags to add criteria you can use for matching your rule-based SUTs. Click New Tag to open the New Tag dialog box: The name you enter for the tag appears in the drop-down list of filters. You can also include a description to explain what the tag is used for and any specific information about how information should be entered. For example, you might include a tag such as Accessibility where you want the information stored as a Boolean, true or false, to indicate whether the SUT has accessibility features turned on. When you have SUT Tags defined, they appear as fields when you add or edit SUTs: The value you enter in the tag field becomes something you can use as a filter. For example, if you defined a Location filter, the value you entered in the field might correspond to the physical location of the SUT. So if you selected Location from the Filter drop-down list, you could enter server room in the Value field to find only SUTs entered with that location.
http://docs.eggplantsoftware.com/ePM/epm-add-suts.htm
2020-02-17T02:02:03
CC-MAIN-2020-10
1581875141460.64
[array(['../Resources/Images/epm-admin-suts-discovery.png', 'SUT Discovery feature on the SUTs administration page in Eggplant Manager SUT Discovery feature on the SUTs administration page in Eggplant Manager'], dtype=object) array(['../Resources/Images/ecl-sut-tags.png', 'SUT Tags area of the SUTs page in Eggplant Automation Cloud SUT Tags area of the SUTs page in Eggplant Automation Cloud'], dtype=object) array(['../Resources/Images/epm-new-sut-tag.png', 'Eggplant Manager New SUT tag panel Eggplant Manager New SUT tag panel'], dtype=object) ]
docs.eggplantsoftware.com
Virtualisation for the DBA part 4 – Licensing and Support To wrap up this mini-series on virtualisation, I wanted to clarify the support and licensing stuff you need to know if you want to virtualise. The support is really simple: Microsoft supports virtual machines just as though they were real environments. The interesting bit is that this isn’t specific to Hyper-V; it also applies to various versions of VMware ESX and vSphere, Citrix Xen, plus various other products listed in the Server Virtualisation Validation Program (SVVP). One thing to bear in mind here is that SQL Server 2000 and other products are in extended support or not supported now, and virtualisation doesn’t change that, so don’t expect too much help if you’re planning to run SQL Server 1.1 on OS/2 (although it works!). The licensing is also remarkably straightforward. If you are using any edition other than enterprise edition, then you license the virtual machine as though it were a physical machine, where virtual processors count as processors. However, if you use enterprise edition, then you simply license the physical machine and you are good to run as many virtual machines or instances on it as you wish, each with a copy of SQL Server enterprise on it. Two things to note: - This applies to SQL Server 2005 SP2 and later - It doesn’t matter what virtualisation technology you’re using To get the definitive word on licensing in a virtual world, from which you can be quoted, go here. Hopefully you now understand the pressure being put on you to virtualise and have the resources to make this as painless as possible, or push back if it isn’t going to work for certain workloads under your control. If not, you have my contact details! Technorati Tags: SQL Server,virtualisation,Xen,Vmware,vSphere,licensing,support
https://docs.microsoft.com/en-us/archive/blogs/andrew/virtualisation-for-the-dba-part-4-licensing-and-support
2020-02-17T02:18:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Once you've added these fields, click Save Changes. Within this dropdown you should see Parameters as an option within ImageEngine. Click Parameters and click Create new. This will open up a form asking for a key and a value. The key should be 'cdn' and the value should be the ImageEngine domain that ScientiaMobile has configured for you. Click Save changes.
https://docs.scientiamobile.com/documentation/ImageEngine/sitefinity-integration
2020-02-17T01:45:47
CC-MAIN-2020-10
1581875141460.64
[]
docs.scientiamobile.com
Overview of network editing¶ (See Managing modifications.) Create a new modification¶ To add a modification, first navigate to the modification page (the icon), where you will need to select a project if you haven’t already. If your project is already selected, you should see the following button below the project name. Create a modification This will open a window prompting you to select the modification type and enter a name for the modification. Choose a descriptive name - you will generally want to include at least the name of the route created or affected by the modification. After confirming these details, you will be taken to a page displaying options that vary by modification type. Create a new scenario¶ By default, each modification is active in all scenarios that exist when the modification is created. You can change which scenarios a modification is active in by using the checkboxes corresponding to scenarios at the bottom of the modification detail panel.
https://analysis-ui.readthedocs.io/en/latest/edit-scenario/
2020-02-17T01:19:49
CC-MAIN-2020-10
1581875141460.64
[array(['../img/create-scenario.png', None], dtype=object) array(['../img/scenario-chooser.png', None], dtype=object)]
analysis-ui.readthedocs.io
Network Ports

This reference lists the potential ports to open when setting up your security group.

For regular operations and for debugging, there are some ports you will need to keep open to network traffic from end users. Another, larger list of ports must be kept open for network traffic between the nodes in the cluster.

Required ports for operations and debugging

The following ports need to be opened up to requests from your user population. There are two main categories: operations and debugging.

Required ports for inter-cluster operation

Required ports for inbound and outbound cluster access

ThoughtSpot uses static ports for inbound and outbound access to a cluster.

Required ports for IPMI (Intelligent Platform Management Interface)

ThoughtSpot uses static ports for out-of-band IPMI communications between the cluster and ThoughtSpot Support.
https://docs.thoughtspot.com/5.0/admin/setup/firewall-ports.html
2020-02-17T00:55:01
CC-MAIN-2020-10
1581875141460.64
[]
docs.thoughtspot.com
Spec Fixed CyclicShape

A CyclicShape defined at compile time.

Template Parameters

Member Function Overview

CyclicShape();, CyclicShape(shape); : The constructor.

Interface Function Overview

Interface Functions Inherited From CyclicShape

Interface Metafunction Overview

WEIGHT<CyclicShape<FixedShape<L, TInnerShape, R> > >::VALUE; : Weight (number of care-positions) of Fixed CyclicShapes.

Interface Metafunctions Inherited From CyclicShape

Member Variable Overview

TSize[] FixedCyclicShape::diffs : Distances between care positions.

Member Variables Inherited From CyclicShape

Detailed Description

Fixed CyclicShapes contain their information at compile time, so in most cases no object must be created. The notation is similar to that of HardwiredShape: template parameters code for the distance between ones. Additionally, you have to specify the number of leading and trailing zeros. The notation is chosen in such a way that predefined Shapes like PatternHunter can be plugged into a CyclicShape.

Like HardwiredShapes, Fixed CyclicShapes are limited to a weight of 21.

See CyclicShape for an example on how to use a CyclicShape.

See Also

Member Functions Detail

CyclicShape();
CyclicShape(shape);

Parameters

This constructor does not do anything; the FixedCyclicShape is defined by its type alone. See the example on how to create a CyclicShape:

typedef GappedShape<HardwiredShape<1, 1, 3, 2> > TInnerShape; // 11100101
// ^-- You can also use predefined Shapes here, e.g. PatternHunter

typedef CyclicShape<FixedShape<2, TInnerShape, 1> > TShape; // 00111001010
TShape shape;

Member Variables Detail

TSize[] FixedCyclicShape::diffs

static const TSize diffs[] has weight many non-zero entries in the beginning; the remaining ones are zero. An additional entry at position weight-1 holds the cyclic distance from the last "1" to the first one. During the iteration with a ModCyclicShape ModifiedIterator, an index position keeps track of the position inside diffs.
http://docs.seqan.de/seqan/2.0.0/specialization_FixedCyclicShape.html
2020-02-17T00:24:23
CC-MAIN-2020-10
1581875141460.64
[]
docs.seqan.de
Tower supports integration with Red Hat Insights. Once a host is registered with Insights, it will be continually scanned for vulnerabilities and known configuration conflicts. Each of the found problems may have an associated fix in the form of an Ansible playbook. Insights users create a maintenance plan to group the fixes and, ultimately, create a playbook to mitigate the problems. Tower tracks the maintenance plan playbooks via an Insights project in Tower. Authentication to Insights via Basic Auth, from Tower, is backed by a special Insights Credential, which must first be established in Tower. To ultimately run an Insights Maintenance Plan in Tower, you need an Insights project, an inventory, and a Scan Job template.

To create a new credential for use with Insights: Enter a valid Insights credential in the Username and Password fields. The Insights credential is the user's Red Hat Customer Portal account username and password.

To create a new Insights project: All SCM/Project syncs occur automatically the first time you save a new project. However, if you want them to be updated to what is current in Insights, manually update the SCM-based project by clicking the button under the project's available Actions. This process syncs your Tower Insights project with your Insights account solution. Notice that the status dot beside the name of the project updates once the sync has run.

The Insights playbook contains a hosts: line where the value is the hostname that Insights itself knows about, which may be different from the hostname that Tower knows about. Therefore, make sure that the hostnames in the Tower inventory match up with the system in the Red Hat Insights Portal (a short illustrative example follows at the end of this section).

To create a new inventory for use with Insights: Note Typically, your inventory already contains Insights hosts; Tower just doesn't know about them yet. The Insights credential allows Tower to get information from Insights about an Insights host. Tower can also identify a host as an Insights host without an Insights credential with the help of the scan_facts.yml file. For instructions, refer to the Create a Scan Job Template section.

In order for Tower to utilize Insights Maintenance Plans, it must have visibility to them. Create and run a scan job against the inventory using a stock manual scan playbook. This is the location where the scan job template is stored. All SCM/Project syncs occur automatically the first time you save a new project. However, if you want them to be updated to what is current in Insights, manually update the SCM-based project by clicking the button under the project's available Actions. Syncing imports into Tower any Maintenance Plans in your Insights account that have a playbook solution. It will use the default Plan resolution. Notice that the status dot beside the name of the project updates once the sync has run.

Create a scan job template that uses the fact scan playbook: select scan_facts.yml from the drop-down menu list. This is the playbook associated with the Scan project you previously set up. Once complete, the job results display in the Job Details page.

Remediation of an Insights inventory allows Tower to run Insights playbooks with a single click. Notice that the Insights tab is now shown on the Hosts page. This indicates that Insights and Tower have reconciled the inventories and everything is now set up for one-click Insights playbook runs. The screen below populates with a list of issues and shows whether or not each issue can be resolved with a playbook.
Upon remediation, the New Job Template window opens. Notice the Inventory and Project fields are pre-populated. Use this new job template to create a job template that pulls Maintenance Plans from Insights. Once complete, the job results display in the Job Details page.
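To illustrate the hostname-matching requirement mentioned above, here is a minimal sketch. The hostname shown is hypothetical; the only point is that the value on the hosts: line of the Insights-generated playbook must also exist, spelled identically, as a host in the Tower inventory used for remediation.

# Excerpt of an Insights maintenance plan playbook (hostname is illustrative)
- name: Apply Insights remediation
  hosts: rhel7-web01.example.com   # name as known to Red Hat Insights
  become: yes
  tasks: []

The Tower inventory must therefore contain a host named exactly rhel7-web01.example.com, not an alias or short name.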
https://docs.ansible.com/ansible-tower/latest/html/userguide/insights.html
2020-01-17T21:52:54
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
openbsd_pkg – Manage packages on OpenBSD¶

Requirements¶

The below requirements are needed on the host that executes this module.

- python >= 2.5

Notes¶

Note

- When used with a loop: each package will be processed individually. It is much more efficient to pass the list directly to the name option (see the additional example after the Status section below).

Examples¶

- name: Make sure nmap is installed
  openbsd_pkg:
    name: nmap
    state: present

- name: Make sure nmap is the latest version
  openbsd_pkg:
    name: nmap
    state: latest

- name: Make sure nmap is not installed
  openbsd_pkg:
    name: nmap
    state: absent

- name: Make sure nmap is installed, build it from source if it is not
  openbsd_pkg:
    name: nmap
    state: present
    build: yes

- name: Specify a pkg flavour with '--'
  openbsd_pkg:
    name: vim--no_x11
    state: present

- name: Specify the default flavour to avoid ambiguity errors
  openbsd_pkg:
    name: vim--
    state: present

- name: Specify a package branch (requires at least OpenBSD 6.0)
  openbsd_pkg:
    name: python%3.5
    state: present

- name: Update all packages on the system
  openbsd_pkg:
    name: '*'
    state: latest

- name: Purge a package and its configuration files
  openbsd_pkg:
    name: mpd
    clean: yes
    state: absent

- name: Quickly remove a package without checking checksums
  openbsd_pkg:
    name: qt5
    quick: yes
    state: absent

Status¶

- This module is not guaranteed to have a backwards compatible interface. [preview]
- This module is maintained by the Ansible Community. [community]
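As noted above, passing a list directly to the name option is more efficient than looping over packages one at a time. A minimal sketch (the package names other than nmap are illustrative):

- name: Install several packages in a single task instead of a loop
  openbsd_pkg:
    name:
      - nmap
      - curl
      - rsync
    state: present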
https://docs.ansible.com/ansible/latest/modules/openbsd_pkg_module.html
2020-01-17T21:15:22
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
PDFs

This version of the documentation is no longer supported. However, the documentation is available for your convenience. You will not be able to leave comments.

Last modified by Shweta Hardikar on Jan 08, 2018

Comments

Mohamed Bayoumi: where's the PDF file? (Oct 06, 2016 08:52)
Shweta Hardikar: Hi Mohamed, are you able to see it now? (Oct 06, 2016 11:57)
Mohamed Bayoumi: Yes I can see, appreciated. (Oct 09, 2016 01:25)
https://docs.bmc.com/docs/AtriumOrchestratorContent/201402/pdfs-502996962.html
2020-01-17T23:27:52
CC-MAIN-2020-05
1579250591234.15
[]
docs.bmc.com
Tenant

Certain business use cases require a Salesforce organization (the "virtual space" that includes all data and applications of an individual business) to hold specific billing-relevant information or to be further divided into subdivisions. To this end, JustOn has introduced tenants. The tenant can hold billing-relevant information of your company or company subdivision, like address data or bank data. When using tenants, you can automatically assign a tenant to an invoice during the invoice creation. Consequently, the tenant information can be printed to the invoice via custom placeholders, like [TenantName] or [TenantIBAN].

Tenant Information

JustOn implements tenants using the custom setting Tenants. The following fields are available:

Defining Tenant

Depending on your organization's requirements, you must create individual tenants for specific regions, customers or other criteria. To create a new tenant:

- In Setup, open Custom Settings. In Salesforce Lightning, navigate to Custom Code > Custom Settings. In Salesforce Classic, navigate to Develop > Custom Settings.
- Click Manage in the row of Tenant.
- Click New.
- Specify the details as necessary. Specify your company's or company subdivision's information in the corresponding fields as necessary. Selecting the Default checkbox makes JustOn use the current tenant for all invoices.
- Click Save.

This creates the new tenant and makes it available for subscriptions and invoices.

Info: You can define multiple tenants. To determine which tenant to assign to a particular invoice, you can use tenant mappings.

Assigning Tenants to Invoices and Subscriptions

Invoices and subscriptions can specify a tenant. If the information is available, JustOn automatically populates the Tenant field on the invoice or subscription when creating these objects. You can, however, later override the tenant manually.

Info: Make sure to check that the spelling exactly matches the tenant name as defined in the custom setting. JustOn performs an exact match on the tenant name when retrieving the corresponding information.

When determining the tenant for an invoice, JustOn tries to retrieve the information in the following order:

- Copy tenant from subscription
- Determine tenant from account via tenant mapping
- Use default tenant
- Leave Tenant field empty

Similarly, when determining the tenant for a subscription, JustOn proceeds as follows:

- Determine tenant from account via tenant mapping
- Use default tenant
- Leave Tenant field empty
https://docs.juston.com/en/jo_admin_config_appset_tenant/
2020-01-17T22:50:21
CC-MAIN-2020-05
1579250591234.15
[]
docs.juston.com
Global Settings API Name: GlobalSettings__c Global settings for bill.ON. Label API Name Type Description Name Name Text(255) Active Subscription Status ActiveSubscriptionStatus__c Text(255) A comma separated list of subscription status which are considered during the invoice run. 'Active' is always considered. Adjust for Tax Rounding Differences AdjustTaxRounding__c Checkbox Adjust 'Total (Tax)' and 'Grand Total' for tax rounding differences on invoices with only one tax rate. Note: this will not work, if there are multiple tax rates. Use Advanced Currency Management AdvancedCurrencyManagement__c Checkbox When advanced currency management is enabled, the Conversion Rate stored at the Invoice will be taken from the advanced conversion rate settings instead of the standard conversion rates. Accounting Gross Taxes on First Month AggregateGrossBookkeepingTaxes__c Checkbox If selected, the taxes for gross values will be aggregated to the first booking detail month. Allow Invoice Changes in Status Open AllowInvoiceChanges__c Checkbox This allows to change certain fields of invoices, in status 'Open', which are otherwise locked. You may use this to edit fields of imported invoices (e.g. Name). Use with care and keep it unchecked most of the time. Allow Overpayments AllowOverpayments__c Checkbox Allow overpayments on open invoices and do not split balances. Allow Pro Forma / Deposit Dunnings AllowProformaDunnings__c Checkbox When checked, dunnings (usually reminders) can be created for Pro Forma and Deposit invoices. Always create Content Distribution AlwaysCreateContentDistribution__c Checkbox This option forces to always create content distributions for invoice and dunning pdf files. Attach Invoice PDFs to Account Statement AttachInvoicePdfToAccountStatement__c Checkbox When enabled, the Invoice PDF Attachments are cloned and attached to Account Statements. Note: Large or many PDFs will quickly exceed the heap size limit! Base URL BaseURL__c Text(255) Billed Opportunity Stage Name BilledOpportunityStageName__c Text(40) Completely billed Opportunities will be set to this stage. Create Bookkeeping Data CreateBookkeepingData__c Checkbox Once this is checked, JustOn will create Booking Periods and Booking Details for finalized invoices. Decimal Places for Quantity DecimalPlacesForQuantity__c Number(2, 0) The default number of decimal places for the quantity as displayed on the invoice. Decimal Places for Unit Price DecimalPlacesForUnitPrice__c Number(2, 0) The default number of decimal places for the unit price as displayed on the invoice. Disable Statistic Line Items DisableStatisticLineItems__c Checkbox If checked, the statistics export won't export invoice line item data. Display Quantity DisplayQuantity__c Text(255) API Name field of the transaction object, which is to be accumulated for the display quantity. Enable Accounting in Gross Values EnableGrossBookkeeping__c Checkbox If checked, JustON will use gross values when creating booking data. Use End of Month as Booking Date EndOfMonthBookingDate__c Checkbox When checked the Booking Date of Booking Details will be the last day of the month instead. Grace Period GracePeriod__c Number(3, 0) Grace period in days for subscription renewal, i.e. the subscription renewal date is determined by: subscription end date - cancelation terms + grace period Invoice PDF URL InvoicePdfUrl__c Url Define a custom URL for Invoice PDFs. This might point to a force.com site. 
The URL may contain placeholders (e.g.[InvoiceId Late Fee Rate LateFeeRate__c Percent(15, 3) Defines the monthly interest rate for calculating the late fees for invoices. This value will be used if the rate is empty at the dunning level setting. Log Level LogLevel__c Text(7) The cutoff for logging. Log-Statements below this level are not logged. Possible Values and their order: NONE, FATAL, ERROR, WARNING, INFO. Merge Items on Renewal MergeItemsOnRenewal__c Checkbox Merge multiple items with the same criteria into a single item during subscription renewal. Move Tax Booking Details to Payment Date MoveTaxBookingDetailsToPaymentDate__c Checkbox Multiple Draft Invoices MultipleDraftInvoices__c Checkbox Allow multiple draft invoices per subscription for different invoice runs. Multiple Draft Statements MultipleDraftStatements__c Checkbox Allow to create more than one statement (Account, Dunning) for the same period and filter. Not Billed Opportunity Stage Name NotBilledOpportunityStageName__c Text(40) Partly or not billed Opportunities will be set to this stage. Notification Level NotificationLevel__c Text(7) The cutoff for notifications. Notifications at the end of batch chains will only be sent, if the level has been reached. Possible Values and their order: NONE, FATAL, ERROR, WARNING, INFO. Notification Target NotificationTarget__c Text(255) Send notifications for finished batches via EMAIL. No notifications will be sent if empty. Pricebook Fields PricebookFields__c Text(255) Allows to define a comma separated list of pricebook entry fields in order to customize the picklist at the New Item From Product page. Defaults to 'Name'. Rewrite InvoicePDF URL RewriteInvoicePDFURL__c Text(255) Rewrite Invoice URL RewriteInvoiceURL__c Text(255) Rewrite PaypalBuyerReturn URL RewritePaypalBuyerReturnURL__c Text(255) Rewrite PaypalIPN URL RewritePaypalIPNURL__c Text(255) Rewrite QpayBuyerReturn URL RewriteQpayBuyerReturnURL__c Text(255) Settle Credits SettleCredits__c Checkbox Settle open credit memos with the next finalize invoice batch. This has no effect to manual settlement. Settle Invoices SettleInvoices__c Checkbox Settle open invoices with the next finalize invoice batch. This has no effect to manual settlement. Track E-Mail TrackEMail__c Checkbox Create a task if an invoice email is send successfully. Use Billing Address for Tax UseBillingForTax__c Checkbox Define the invoice address to use for tax calculation: unchecked -> shipping, checked -> billing. Use Debtor No for Deferred Revenue UseDebtorNoForDeferredRevenue__c Checkbox Prefer the debtor no from the account for the BP AccountNo of deferred booking details instead of a collective account. Write-Off Cap Amount WriteOffCapAmount__c Number(13, 5) The cap for the amount of write-off balances. Write-off balances are created when a payment is registered at the invoice, but only if the cap amount is not exceeded. Write-Off Cap Currency WriteOffCapCurrency__c Text(3) The currency ISO code for the Write-Off Cap Amount field. Write-Off Threshold WriteOffThreshold__c Percent(13, 5) The threshold (in percent) for the creation of write-off balances during payment creation.
https://docs.juston.com/jo_objects/GlobalSettings/
2020-01-17T21:15:22
CC-MAIN-2020-05
1579250591234.15
[]
docs.juston.com
Tutorial: Developing a Power BI visual We’re enabling developers to easily add Power BI visuals into Power BI for use in dashboard and reports. To help you get started, we’ve published the code for all of our visualizations to GitHub. Along with the visualization framework, we’ve provided our test suite and tools to help the community build high-quality Power BI visuals for Power BI. This tutorial shows you how to develop a Power BI custom visual named Circle Card to display a formatted measure value inside a circle. The Circle Card visual supports customization of fill color and thickness of its outline. In the Power BI Desktop report, the cards are modified to become Circle Cards. In this tutorial, you learn how to: - Create a Power BI custom visual. - Develop the custom visual with D3 visual elements. - Configure data binding with the visual elements. - Format data values. Prerequisites - If you're not signed up for Power BI Pro, sign up for a free trial before you begin. - You need Visual Studio Code installed. - You need Windows PowerShell version 4 or later for windows users OR the Terminal for OSX users. Setting up the developer environment In addition to the prerequisites, there are a few more tools you need to install. Installing node.js To install Node.js, in a web browser, navigate to Node.js. Download the latest feature MSI installer. Run the installer, and then follow the installation steps. Accept the terms of the license agreement and all defaults. Restart the computer. Installing packages Now you need to install the pbiviz package. Open Windows PowerShell after the computer has been restarted. To install pbiviz, enter the following command. npm i -g powerbi-visuals-tools Creating and installing a certificate Windows To create and install a certificate, enter the following command. pbiviz --install-cert It returns a result that produces a passphrase. In this case, the passphrase is 15105661266553327. It also starts the Certificate Import Wizard. In the Certificate Import Wizard, verify that the store location is set to Current User. Then select Next. At the File to Import step, select Next. At the Private Key Protection step, in the Password box, paste the passphrase you received from creating the cert. Again, in this case it is 15105661266553327. At the Certificate Store step, select the Place all certificates in the Following store option. Then select Browse. In the Select Certificate Store window, select Trusted Root Certification Authorities and then select OK. Then select Next on the Certificate Store screen. To complete the import, select Finish. If you receive a security warning, select Yes. When notified that the import was successful, select OK. Important Do not close the Windows PowerShell session.. Creating a custom visual Now that you have set up your environment, it is time to create your custom visual. You can download the full source code for this tutorial. Verify that the Power BI Visual Tools package has been installed. Review the output, including the list of supported commands. To create a custom visual project, enter the following command. CircleCard is the name of the project. pbiviz new CircleCard Note You create the new project at the current location of the prompt. Navigate to the project folder. cd CircleCard Start the custom visual. Your CircleCard visual is now running while being hosted on your computer. pbiviz start Important Do not close the Windows PowerShell session. 
Testing the custom visual In this section, we are going to test the CircleCard custom visual by uploading a Power BI Desktop report and then editing the report to display the custom visual. Sign in to PowerBI.com > go to the Gear icon > then select Settings. Select Developer then check the Enable Developer Visual for testing checkbox. Upload a Power BI Desktop report. Get Data > Files > Local File. You can download a sample Power BI Desktop report if you do not have one created already. Now to view the report, select US_Sales_Analysis from the Report section in the nav pane on the left. Now you need to edit the report while in the Power BI service. Go to Edit report. Select the Developer Visual from the Visualizations pane. Note This visualization represents the custom visual that you started on your computer. It is only available when the developer settings have been enabled. Notice that a visualization was added to the report canvas. Note This is a very simple visual that displays the number of times its Update method has been called. At this stage, the visual does not yet retrieve any data. While selecting the new visual in the report, Go to the Fields Pane > expand Sales > select Quantity. Then to test the new visual, resize the visual and notice the update value increments. To stop the custom visual running in PowerShell, enter Ctrl+C. When prompted to terminate the batch job, enter Y, then press Enter. Adding visual elements Now you need to install the D3 JavaScript library. D3 is a JavaScript library for producing dynamic, interactive data visualizations in web browsers. It makes use of widely implemented SVG HTML5, and CSS standards. Now you can develop the custom visual to display a circle with text. To install the D3 library in PowerShell, enter the command below. npm i d3@^5.0.0 --save PS C:\circlecard>npm i d3@^5.0.0 --save + [email protected] added 179 packages from 169 contributors and audited 306 packages in 33.25s found 0 vulnerabilities PS C:\circlecard> To install type definitions for the D3 library, enter the command below. npm i @types/d3@^5.0.0 --save PS C:\circlecard>npm i @types/d3@^5.0.0 --save + @types/[email protected] updated 1 package and audited 306 packages in 2.217s found 0 vulnerabilities PS C:\circlecard> This command installs TypeScript definitions based on JavaScript files, enabling you to develop the custom visual in TypeScript (which is a superset of JavaScript). Visual Studio Code is an ideal IDE for developing TypeScript applications. To install the core-js in PowerShell, enter the command below. npm i [email protected] --save PS C:\circlecard> npm i [email protected] --save > [email protected] postinstall F:\circlecard\node_modules\core-js > node scripts/postinstall || echo "ignore" Thank you for using core-js ( ) for polyfilling JavaScript standard library! The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: > > + [email protected] updated 1 package and audited 306 packages in 6.051s found 0 vulnerabilities PS C:\circlecard> This command installs modular standard library for JavaScript. It includes polyfills for ECMAScript up to 2019. Read more about core-js To install the powerbi-visual-api in PowerShell, enter the command below. npm i powerbi-visuals-api --save-dev PS C:\circlecard>npm i powerbi-visuals-api --save-dev + [email protected] updated 1 package and audited 306 packages in 2.139s found 0 vulnerabilities PS C:\circlecard> This command installs Power BI Visuals API definitions. 
Launch Visual Studio Code. You can launch Visual Studio Code from PowerShell by using the following command. code . In the Explorer pane, expand the node_modules folder to verify that the d3 library was installed. Make sure that file index.d.ts was added, by expanding node_modules > @types > d3 in the Explorer pane. Developing the visual elements Now we can explore how to develop the custom visual to show a circle and sample text. In the Explorer pane, expand the src folder and then select visual.ts. Note Notice the comments at the top of the visual.ts file. Permission to use the Power BI custom visual packages is granted free of charge under the terms of the MIT License. As part of the agreement, you must leave the comments at the top of the file. Remove the following default custom visual logic from the Visual class. - The four class-level private variable declarations. - All lines of code from the constructor. - All lines of code from the update method. - All remaining lines within the module, including the parseSettings and enumerateObjectInstances methods. Verify that the module code looks like the following. "use strict"; import "core-js/stable"; import "../style/visual.less"; import powerbi from "powerbi-visuals-api"; import IVisual = powerbi.extensibility.IVisual; import VisualConstructorOptions = powerbi.extensibility.visual.VisualConstructorOptions; import VisualUpdateOptions = powerbi.extensibility.visual.VisualUpdateOptions; import EnumerateVisualObjectInstancesOptions = powerbi.EnumerateVisualObjectInstancesOptions; import VisualObjectInstanceEnumeration = powerbi.VisualObjectInstanceEnumeration; import IVisualHost = powerbi.extensibility.visual.IVisualHost; import * as d3 from "d3"; type Selection<T extends d3.BaseType> = d3.Selection<T, any,any, any>; export class Visual implements IVisual { constructor(options: VisualConstructorOptions) { } public update(options: VisualUpdateOptions) { } } Beneath the Visual class declaration, insert the following class-level properties. export class Visual implements IVisual { // ... private host: IVisualHost; private svg: Selection<SVGElement>; private container: Selection<SVGElement>; private circle: Selection<SVGElement>; private textValue: Selection<SVGElement>; private textLabel: Selection<SVGElement>; // ... } Add the following code to the constructor. this.svg = d3.select(options.element) .append('svg') .classed('circleCard', true); this.container = this.svg.append("g") .classed('container', true); this.circle = this.container.append("circle") .classed('circle', true); this.textValue = this.container.append("text") .classed("textValue", true); this.textLabel = this.container.append("text") .classed("textLabel", true); This code adds an SVG group inside the visual and then adds three shapes: a circle and two text elements. To format the code in the document, right-select anywhere in the Visual Studio Code document, and then select Format Document. To improve readability, it is recommended that you format the document every time that paste in code snippets. Add the following code to the update method. 
let width: number = options.viewport.width;
let height: number = options.viewport.height;
this.svg.attr("width", width);
this.svg.attr("height", height);
let radius: number = Math.min(width, height) / 2.2;
this.circle
    .style("fill", "white")
    .style("fill-opacity", 0.5)
    .style("stroke", "black")
    .style("stroke-width", 2)
    .attr("r", radius)
    .attr("cx", width / 2)
    .attr("cy", height / 2);
let fontSizeValue: number = Math.min(width, height) / 5;
this.textValue
    .text("Value")
    .attr("x", "50%")
    .attr("y", "50%")
    .attr("dy", "0.35em")
    .attr("text-anchor", "middle")
    .style("font-size", fontSizeValue + "px");
let fontSizeLabel: number = fontSizeValue / 4;
this.textLabel
    .text("Label")
    .attr("x", "50%")
    .attr("y", height / 2)
    .attr("dy", fontSizeValue / 1.2)
    .attr("text-anchor", "middle")
    .style("font-size", fontSizeLabel + "px");

This code sets the width and height of the visual, and then initializes the attributes and styles of the visual elements.

Save the visual.ts file.

Select the capabilities.json file. At line 14, remove the entire objects element (lines 14-60). Save the capabilities.json file.

In PowerShell, start the custom visual.

pbiviz start

Toggle auto reload

Navigate back to the Power BI report. In the toolbar floating above the developer visual, select Toggle Auto Reload. This option ensures that the visual is automatically reloaded each time you save project changes.

From the Fields pane, drag the Quantity field into the developer visual. Verify that the visual looks like the following.

Resize the visual. Notice that the circle and text value scale to fit the available dimensions of the visual. The update method is called continuously as the visual is resized, which results in fluid rescaling of the visual elements. You have now developed the visual elements. Continue running the visual.

Process data in the visual code

Define the data roles and data view mappings, and then modify the custom visual logic to display the value and display name of a measure.

Configuring the capabilities

Modify the capabilities.json file to define the data role and data view mappings.

In Visual Studio Code, in the capabilities.json file, from inside the dataRoles array, remove all content (lines 3-12). Inside the dataRoles array, insert the following code.

{
    "displayName": "Measure",
    "name": "measure",
    "kind": "Measure"
}

The dataRoles array now defines a single data role of type measure, named measure, which displays as Measure. This data role allows passing either a measure field, or a field that is summarized.

From inside the dataViewMappings array, remove all content (lines 10-31). Inside the dataViewMappings array, insert the following content.

{
    "conditions": [
        { "measure": { "max": 1 } }
    ],
    "single": { "role": "measure" }
}

The dataViewMappings array now defines that one field can be passed to the data role named measure.

Save the capabilities.json file.

In Power BI, notice that the visual can now be configured with Measure.

Note

The visual project does not yet include data binding logic.

Exploring the dataview

In the toolbar floating above the visual, select Show Dataview.

Expand down into single, and then notice the value. Expand down into metadata, and then into the columns array, and in particular notice the format and displayName values.

To toggle back to the visual, in the toolbar floating above the visual, select Show Dataview.
Consume data in the visual code

In Visual Studio Code, in the visual.ts file, import the DataView interface from the powerbi module:

import DataView = powerbi.DataView;

and add the following statement as the first statement of the update method.

let dataView: DataView = options.dataViews[0];

This statement assigns the dataView to a variable for easy access, and declares the variable to reference the dataView object.

In the update method, replace .text("Value") with the following.

.text(<string>dataView.single.value)

In the update method, replace .text("Label") with the following.

.text(dataView.metadata.columns[0].displayName)

Save the visual.ts file.

In Power BI, review the visual, which now displays the value and the display name.

You have now configured the data roles and bound the visual to the dataview. In the next tutorial you learn how to add formatting options to the custom visual.

Debugging

For tips about debugging your custom visual, see the debugging guide.

Next steps
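For reference, assembling the snippets from the steps above, the data-bound update method ends up looking roughly like the following sketch (the circle sizing and styling lines from the earlier step are unchanged and abbreviated here):

public update(options: VisualUpdateOptions) {
    let dataView: DataView = options.dataViews[0];
    let width: number = options.viewport.width;
    let height: number = options.viewport.height;
    this.svg.attr("width", width);
    this.svg.attr("height", height);
    // ... circle radius, position, and styling as in the earlier step ...
    let fontSizeValue: number = Math.min(width, height) / 5;
    this.textValue
        .text(<string>dataView.single.value)            // measure value from the dataview
        .attr("x", "50%")
        .attr("y", "50%")
        .attr("dy", "0.35em")
        .attr("text-anchor", "middle")
        .style("font-size", fontSizeValue + "px");
    let fontSizeLabel: number = fontSizeValue / 4;
    this.textLabel
        .text(dataView.metadata.columns[0].displayName) // measure display name from the metadata
        .attr("x", "50%")
        .attr("y", height / 2)
        .attr("dy", fontSizeValue / 1.2)
        .attr("text-anchor", "middle")
        .style("font-size", fontSizeLabel + "px");
}

This is only a recap of the tutorial's own code, not additional functionality.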
https://docs.microsoft.com/en-us/power-bi/developer/visuals/custom-visual-develop-tutorial
2020-01-17T22:13:37
CC-MAIN-2020-05
1579250591234.15
[array(['media/custom-visual-develop-tutorial/circle-cards.png', 'Power BI Custom Visual sample output'], dtype=object) ]
docs.microsoft.com
fortios_system_nat64 – Configure NAT64 in Fortinet’s FortiOS and FortiGate¶ New in version 2.9. Synopsis¶ - This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the user to set and modify system feature and nat NAT64. fortios_system_nat64: host: "{{ host }}" username: "{{ username }}" password: "{{ password }}" vdom: "{{ vdom }}" https: "False" system_nat64: always_synthesize_aaaa_record: "enable" generate_ipv6_fragment_header: "enable" nat46_force_ipv4_packet_forwarding: "enable" nat64_prefix: "<your_own_value>" secondary_prefix: - name: "default_name_8" nat64_prefix: "<your_own_value>" secondary_prefix_status: "enable" status: "enable" Return Values¶ Common return values are documented here, the following are the fields unique to this module: Status¶ - This module is not guaranteed to have a backwards compatible interface. [preview] - This module is maintained by the Ansible Community. [community]
https://docs.ansible.com/ansible/devel/modules/fortios_system_nat64_module.html
2020-01-17T21:57:47
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
Web Item Unexpected Error Reading Template File Class

Definition

Warning

This API is now obsolete.

public ref class WebItemUnexpectedErrorReadingTemplateFile : Exception

[System.Obsolete("Microsoft.VisualBasic.Compatibility.* classes are obsolete and supported within 32 bit processes only.")]
public class WebItemUnexpectedErrorReadingTemplateFile : Exception

type WebItemUnexpectedErrorReadingTemplateFile = class
    inherit Exception

Public Class WebItemUnexpectedErrorReadingTemplateFile
Inherits Exception

Inheritance

Attributes

Remarks

The WebClass class is used by the upgrade tools to upgrade a Visual Basic 6.0 WebClass project to an ASP.NET Web-site project.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualbasic.compatibility.vb6.webitemunexpectederrorreadingtemplatefile?view=netframework-4.8&viewFallbackFrom=netstandard-2.1
2020-01-17T21:48:44
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Understanding pipelines

Pipelines act like a series of connected segments of pipe. Items moving along the pipeline pass through each segment. To create a pipeline in PowerShell, you connect commands together with the pipe operator "|". The output of each command is used as input to the next command.

The notation used for pipelines is similar to the notation used in other shells. At first glance, it may not be apparent how pipelines are different in PowerShell. Although you see text on the screen, PowerShell pipes objects, not text, between commands.

The PowerShell pipeline

Pipelines are arguably the most valuable concept used in command-line interfaces. When used properly, pipelines reduce the effort of using complex commands and make it easier to see the flow of work for the commands. Each command in a pipeline (called a pipeline element) passes its output to the next command in the pipeline, item-by-item. Commands don't have to handle more than one item at a time. The result is reduced resource consumption and the ability to begin getting the output immediately.

For example, if you use the Out-Host cmdlet to force a page-by-page display of output from another command, the output looks just like the normal text displayed on the screen, broken up into pages:

Get-ChildItem -Path C:\WINDOWS\System32 | Out-Host -Paging

Directory: C:\WINDOWS\system32

Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 4/12/2018 2:15 AM 0409
d----- 5/13/2018 11:31 PM 1033
d----- 4/11/2018 4:38 PM AdvancedInstallers
d----- 5/13/2018 11:13 PM af-ZA
d----- 5/13/2018 11:13 PM am-et
d----- 4/11/2018 4:38 PM AppLocker
d----- 5/13/2018 11:31 PM appmgmt
d----- 7/11/2018 2:05 AM appraiser
d---s- 4/12/2018 2:20 AM AppV
d----- 5/13/2018 11:10 PM ar-SA
d----- 5/13/2018 11:13 PM as-IN
d----- 8/14/2018 9:03 PM az-Latn-AZ
d----- 5/13/2018 11:13 PM be-BY
d----- 5/13/2018 11:10 PM BestPractices
d----- 5/13/2018 11:10 PM bg-BG
d----- 5/13/2018 11:13 PM bn-BD
d----- 5/13/2018 11:13 PM bn-IN
d----- 8/14/2018 9:03 PM Boot
d----- 8/14/2018 9:03 PM bs-Latn-BA
d----- 4/11/2018 4:38 PM Bthprops
d----- 4/12/2018 2:19 AM ca-ES
d----- 8/14/2018 9:03 PM ca-ES-valencia
d----- 5/13/2018 10:46 PM CatRoot
d----- 8/23/2018 5:07 PM catroot2

You can see how piping impacts CPU and memory usage in the Windows Task Manager by comparing the following commands:

Get-ChildItem C:\Windows -Recurse
Get-ChildItem C:\Windows -Recurse | Out-Host -Paging

Note

The Paging parameter is not supported by all PowerShell hosts. For example, when you try to use the Paging parameter in the PowerShell ISE, you see the following error:

out-lineoutput : The method or operation is not implemented.
At line:1 char:1
+ Get-ChildItem C:\Windows -Recurse | Out-Host -Paging
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [out-lineoutput], NotImplementedException
    + FullyQualifiedErrorId : System.NotImplementedException,Microsoft.PowerShell.Commands.OutLineOutputCommand

If you run Get-Location while your current location is the root of the C drive, you see the following output:

PS> Get-Location

Path
----
C:\

The text output is a summary of information, not a complete representation of the object returned by Get-Location. The heading in the output is added by the process that formats the data for onscreen display.

When you pipe the output to the Get-Member cmdlet you get information about the object returned by Get-Location.

Get-Location | Get-Member

Get-Location returns a PathInfo object that contains the current path and other information.
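Because the pipeline passes objects rather than text, the next command can work directly with the properties of those objects. A small illustrative example (the output naturally varies by machine):

# Each FileInfo object emitted by Get-ChildItem flows to Sort-Object,
# which sorts on the Length property of the object rather than on text.
Get-ChildItem -Path C:\WINDOWS\System32 -File |
    Sort-Object -Property Length -Descending |
    Select-Object -First 5 -Property Name, Length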
https://docs.microsoft.com/en-us/powershell/scripting/learn/understanding-the-powershell-pipeline?view=powershell-6
2020-01-17T22:09:06
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Angular CLI & WebPack

ASP.NET Zero uses angular-cli for development and deployment. It's already configured for angular-cli and works out of the box. To run the application, open a command line and type the "npm start" command (or "npm run hmr" to enable the hot module replacement feature). Once it's compiled and ready, you can go to the application URL to open it in your browser. See the angular-cli official web site for more.
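For reference, a typical session looks like the following sketch; the angular folder name is the default in ASP.NET Zero Angular solutions and may differ in yours:

cd angular
npm start        # compile and serve the application for development
npm run hmr      # alternatively, serve with hot module replacement enabled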
https://docs.aspnetzero.com/en/aspnet-core-angular/latest/Infrastructure-Angular-Angular-CLI-WebPack
2020-01-18T01:34:44
CC-MAIN-2020-05
1579250591431.4
[]
docs.aspnetzero.com
- API > - API Resources > - Private Endpoints > - Delete One Private Endpoint Connection Delete One Private Endpoint Connection¶ On this page Note Groups and projects are synonymous terms. Your {GROUP-ID} is the same as your project ID. For existing groups, your group/project ID remains the same. The resource and corresponding endpoints use the term groups. Remove one private endpoint connection in an Atlas project. Important You must remove all interface endpoints associated with a private endpoint before you can remove it..
https://docs.atlas.mongodb.com/reference/api/private-endpoint-delete-one-private-endpoint-connection/
2020-01-18T01:58:15
CC-MAIN-2020-05
1579250591431.4
[]
docs.atlas.mongodb.com
CHOC Children’s and the William and Nancy Thompson Family Foundation (Thompson Family Foundation) recently unveiled a new collaboration that expands our. Set to open in early 2019,.
https://docs.chocchildrens.org/3467-2/
2020-01-18T01:04:52
CC-MAIN-2020-05
1579250591431.4
[array(['https://docs.chocchildrens.org/wp-content/uploads/2018/04/Bill-and-Nancy-Thompson-600x400.jpg', None], dtype=object) ]
docs.chocchildrens.org
This lesson shows how to display summary information (Count, Max, Min, Sum, etc.) against individual columns - total summary.

Create Total Summaries

A total summary is an aggregate function value calculated over all data rows within a grid and displayed within the Fixed Summary Panel. Summaries within the Fixed Summary Panel are always displayed onscreen and are not horizontally scrolled. Total summaries are represented by SummaryItemBase objects and stored within the GridControl.TotalSummary collection.

Create two summary items: one to display the total number of rows within the grid, and another to display the maximum product price.

<Grid:GridControl.TotalSummary>
    <!-- The SummaryType and FieldName attributes below are reconstructed; the field name is illustrative. -->
    <Grid:GridSummaryItem SummaryType="Count"/>
    <Grid:GridSummaryItem FieldName="UnitPrice" SummaryType="Max"/>
</Grid:GridControl.TotalSummary>

Total summaries can be displayed either to the summary panel's left or right. The horizontal alignment is specified by the SummaryItemBase.Alignment property.

Display Summary Information

Set the grid's DataControlBase.ShowFixedTotalSummary property to true to display the fixed summary panel.

Result

Run the application. The image below shows the result.
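A sketch of how the ShowFixedTotalSummary property could be set directly in the grid's XAML declaration; the attribute form shown here is an assumption based on it being a regular dependency property and is not taken from the lesson:

<Grid:GridControl ShowFixedTotalSummary="True">
    <!-- TotalSummary items as defined above -->
</Grid:GridControl>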
https://docs.devexpress.com/Win10Apps/212030/controls/data-grid/getting-started/lesson-2-displaying-summary-information
2020-01-18T00:27:41
CC-MAIN-2020-05
1579250591431.4
[]
docs.devexpress.com
CHOC Children’s has been recognized by Press Ganey in multiple categories for the 2019 fiscal year. The three awards include:

- The Pinnacle Award of Excellence recognizes organizations that have maintained consistently high levels of excellence over multiple years in patient experience, workforce engagement or clinical quality performance and is awarded to the three top-performing organizations in each category.
- Survey Solutions Workplace of the Year is a title given to the top 20 organizations recognized for their associate engagement efforts out of over 1,000 considered.
- The NDNQI Outstanding Nursing Quality Award is presented annually to the best-performing hospital in each of seven categories, including academic medical centers, teaching hospital, community hospital, pediatric hospital, rehabilitation hospital, psychiatric hospital and international. Seventeen quality measures are used to evaluate scores by unit type, and scores are combined across units to produce an overall score. The highest-ranking hospital in each category receives the award.

Read more about awards CHOC Children’s has received.
https://docs.chocchildrens.org/press-ganey-recognizes-choc-childrens-with-three-awards/
2020-01-17T23:57:31
CC-MAIN-2020-05
1579250591431.4
[array(['https://docs.chocchildrens.org/wp-content/uploads/2019/11/IMG_9951-432x576.jpg', None], dtype=object) ]
docs.chocchildrens.org
- Sort Components Are Stacked in an MVC Search Page

Coveo for Sitecore (April 2017)
Coveo for Sitecore (June 2017)

Symptoms

With the April 2017 release, when you have several sort components on an MVC-based search page, the components are stacked vertically instead of being distributed horizontally.

Cause

The April 2017 release introduces a new version of the JavaScript Search Framework (v1.2359.7) along with some changes to the search pages' CSS files related to this update. The sort component placeholder being wrapped in an extra div in the SearchView.cshtml file causes the components to stack vertically. This <div> was previously introduced to address a rendering issue and is no longer necessary.

<div class="coveo-sort-section">
    <div>
        @Html.Sitecore().Placeholder("coveo-sorts-mvc")
    </div>
</div>

Resolution

You need to manually update your search pages to remove the <div> around the Coveo sort component placeholder. Follow these steps to resolve the problem:

Step 1 - Make sure the SearchView.cshtml file has been updated correctly:

- Open a file explorer and navigate to C:\inetpub\wwwroot\[yourSitecoreInstance]\Website\Views\Coveo.
- Open SearchView.cshtml in a text editor.
- Locate the line containing @Html.Sitecore().Placeholder("coveo-sorts-mvc") and make sure it looks like the following code block:

<div class="coveo-sort-section">
    @Html.Sitecore().Placeholder("coveo-sorts-mvc")
</div>

If not, remove the additional <div>...</div>

Step 2 - Manually remove the pair of div tags from your custom search views. For each custom search view that you have:

- Open the file in a text editor.
- Locate the following code block:

<div class="coveo-sort-section">
    <div>
        @Html.Sitecore().Placeholder("coveo-sorts-mvc")
    </div>
</div>

- Replace it with the following code block:

<div class="coveo-sort-section">
    @Html.Sitecore().Placeholder("coveo-sorts-mvc")
</div>

- Save and then close the file.
https://docs.coveo.com/en/992/
2020-01-18T01:40:57
CC-MAIN-2020-05
1579250591431.4
[array(['/en/assets/images/c4sc-v4/attachments/36640870/37098612.png', None], dtype=object) ]
docs.coveo.com
. * @throws StreamBadStateException if the SnapshotInputStream failed to be initialized. * The SnapshotInputStream is not in a usable state and it is closed. */ public SnapshotInputStream(String path) throws StreamBadStateException; /** *. * * @return A Map.Entry<EntityId, Entity> containing the (EntityId, Entity) pair read from the * snapshot file. * @throws StreamBadStateException if a snapshot internal error occurred. * The SnapshotInputStream is not in a usable state. * @throws StreamInvalidDataException if the last entity read operation failed. * The SnapshotInputStream is in a usable state. * @throws EOFException is the end of the snapshot was reached while trying to execute the last entity read operation. */ public Map.Entry<EntityId, Entity> readEntity() throws StreamBadStateException, StreamInvalidDataException, EOFException; /** *. * @throws StreamBadStateException if the SnapshotOutputStream failed to be initialized. * The SnapshotOutputStream is not in a usable state and it is closed. */ public SnapshotOutputStream(String path) throws StreamBadStateException; /** * Write the (EntityId, Entity) pair to the snapshot. * * @param entityId The EntityId of the Entity to be written to the Snapshot. * @param entity The Entity to be written to the Snapshot. * * @throws StreamBadStateException if a snapshot internal error occurred. * The SnapshotInputStream is not in a usable state. * @throws StreamInvalidDataException if the last entity write operation failed. * The SnapshotInputStream is in a usable state. */ public void writeEntity(EntityId entityId, Entity entity) throws StreamBadStateException, StreamInvalidDataException; /** *) { try { //()) { try { //); } catch (StreamInvalidDataException e) { // The last read or write operation failed, but the snapshot is still usable. // Log or handle the operation failure. Possible errors include unregistered component, // failed serialization, missing Persistence component and writing entities with the same id. logException(e.getMessage()); } catch (StreamBadStateException e) { // An internal error occurred when reading or writing and the snapshot is not usable. logException(e.getMessage()); break; } catch (EOFException e) { // The eof was reached when reading. Not an error if used as alternative to HasNext. logException(e.getMessage()); break; } } // Write the end of the snapshot and release the SnapshotOutputStream's resources. outputStream.close(); inputStream.close(); } catch (StreamBadStateException e) { // The snapshot failed to be initialized, and the stream is not usable. logException(e.getMessage()); } }
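The usage example above has been partly garbled by extraction, so here is a minimal sketch of the same copy loop, based only on the constructor and method signatures documented above. The file paths, class name, and logging are illustrative, and the SpatialOS SDK imports (for SnapshotInputStream, SnapshotOutputStream, EntityId, Entity, and the stream exceptions) are omitted because their packages depend on the SDK version.

import java.io.EOFException;
import java.util.Map;

public class SnapshotCopySketch {
    // Copies every (EntityId, Entity) pair from one snapshot file to another.
    public static void copy(String inPath, String outPath) {
        try {
            SnapshotInputStream input = new SnapshotInputStream(inPath);
            SnapshotOutputStream output = new SnapshotOutputStream(outPath);
            while (true) {
                try {
                    Map.Entry<EntityId, Entity> pair = input.readEntity();
                    output.writeEntity(pair.getKey(), pair.getValue());
                } catch (StreamInvalidDataException e) {
                    // The last read or write failed, but the streams are still usable; skip this entity.
                    System.err.println(e.getMessage());
                } catch (StreamBadStateException e) {
                    // An internal error occurred; the stream is no longer usable.
                    System.err.println(e.getMessage());
                    break;
                } catch (EOFException e) {
                    // End of the snapshot reached.
                    break;
                }
            }
            // Write the end of the snapshot and release resources.
            output.close();
            input.close();
        } catch (StreamBadStateException e) {
            // A stream failed to initialize and is not usable.
            System.err.println(e.getMessage());
        }
    }
}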
https://docs.improbable.io/reference/14.0/javasdk/using/snapshots
2020-01-18T01:41:21
CC-MAIN-2020-05
1579250591431.4
[]
docs.improbable.io
Upgrade notes

Upgrade from a previous Rudder 5.0

Migration from any 5.0 minor version is supported (see below for migration from older versions).

Upgrade from Rudder 4.1, 4.2 or 4.3

Migration from 4.1, 4.2 or 4.3 is supported, so you can upgrade directly to 5.0.

Upgrade from Rudder 4.0 or older

Direct upgrades from 4.0.x and older are no longer supported on 5.0. If you are still running one of those, either on servers or nodes, please first upgrade to one of the supported versions, and then upgrade to 5.0.

Compatibility between Rudder agent 5.0 and older server versions

Compatibility between Rudder server 5.0 and older agent versions

4.1, 4.2 and 4.3 agents

Rudder agent 4.1, 4.2 and 4.3 are fully compatible with Rudder server 5.0. It is therefore not strictly necessary to update all your agents to 5.0.
https://docs.rudder.io/reference/5.0/installation/upgrade.html
2020-01-18T01:14:00
CC-MAIN-2020-05
1579250591431.4
[]
docs.rudder.io
See Also: Encoding Members. For more information about System.Text.Encoding, see Understanding Encodings.

Note that System.Text(Byte[], int, int, Char[], int).

The .NET Framework provides the following implementations of the System.Text.Encoding class to support current Unicode encodings and other encodings:

- System.Text.ASCIIEncoding encodes Unicode characters as single 7-bit ASCII characters. This encoding only supports character values between U+0000 and U+007F. Code page 20127. Also available through the Encoding.ASCII property.
- System.Text.UTF7Encoding encodes Unicode characters using the UTF-7 encoding. This encoding supports all Unicode character values. Code page 65000. Also available through the Encoding.UTF7 property.
- System.Text.UTF8Encoding encodes Unicode characters using the UTF-8 encoding. This encoding supports all Unicode character values. Code page 65001. Also available through the Encoding.UTF8 property.
- System.Text.UnicodeEncoding encodes Unicode characters using the UTF-16 encoding. Both little endian and big endian byte orders are supported. Also available through the Encoding.Unicode property and the Encoding.BigEndianUnicode property.
- System.Text.UTF32Encoding encodes Unicode characters using the UTF-32 encoding. Both little endian (code page 12000) and big endian (code page 12001) byte orders are supported. Also available through the Encoding.UTF32 property.

The System.Text.Encoding class is primarily intended to convert between different encodings and Unicode. Often one of the derived Unicode classes is the correct choice for your application.

You use the Encoding.GetEncoding(int) method to obtain other encodings. You can call the Encoding.GetEncodings method to get a list of all encodings.

The following table lists the encodings supported by the .NET Framework. It lists each encoding's code page number.

If the data to be converted is available only in sequential blocks (such as data read from a stream) or if the amount of data is so large that it needs to be divided into smaller blocks, your application should use the System.Text.Decoder or the System.Text.Encoder provided by the Encoding.GetDecoder method or the Encoding.GetEncoder method, respectively. For more information, see http://go.microsoft.com/fwlink/?LinkId=37123.

Note that the encoding classes allow errors to:

- Silently change to a "?" character.
- Use a "best fit" character.
- Change to an application-specific behavior through use of the System.Text.EncoderFallback and System.Text.DecoderFallback classes with the U+FFFD Unicode replacement character.

It is recommended that your applications throw exceptions on all data stream errors. An application either uses a "throwonerror" flag when applicable or uses the System.Text.EncoderExceptionFallback and System.Text.DecoderExceptionFallback classes. Best fit fallback is often not recommended because it can cause data loss or confusion and is slower than simple character replacements. For ANSI encodings, the best fit behavior is the default.
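As a brief, generic illustration of round-tripping between a Unicode string and encoded bytes with these classes (the sample text is arbitrary):

using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        string text = "Héllo, wörld";

        // Encode the string as UTF-8 bytes, then decode the bytes back to a string.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(text);
        string roundTripped = Encoding.UTF8.GetString(utf8Bytes);

        // Convert the UTF-8 bytes to UTF-16 (little endian) bytes.
        byte[] utf16Bytes = Encoding.Convert(Encoding.UTF8, Encoding.Unicode, utf8Bytes);

        Console.WriteLine(roundTripped);        // Héllo, wörld
        Console.WriteLine(utf16Bytes.Length);   // twice the character count for this sample
    }
}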
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Text.Encoding
2020-01-18T01:19:10
CC-MAIN-2020-05
1579250591431.4
[]
docs.go-mono.com
Swrve dashboard audit logging Dashboard auditing The new audit trail feature captures a range of data for the key entities: campaigns and users—including administrator activities such as user and role-level actions. This feature provides the intelligence for admin and audit investigators to find and view users’ actions within campaigns. Benefits Swrve audit trails give you the following benefits: - Saves user activity records in a central secure repository making it easy to find all records - Time-synched records for accurate, chronological activity review - Easily accessible records (company administrators only) - Good filtering options for better searchability - Exportable CSV files can be downloaded for data analysis - RestAPI to access the audit trail for a company and its applications Audit trail details To access the Audit trails menu, go to Your account and select Company Settings. You will see the link to Dashboard Audit Logging as below: Select the link to display the Audit trail screen: it lists each audit trail by company and app with details of the following information associated with each one: - Username: the username of the individual who performed the activity - Date: the date when the action was performed - Action: the action performed, for example, Created or Launched - Type: the type of notification for example a Push Notification - Name: the name given to the entity, for example the user role or permissions. - Time Stamp: the time when an event or activity was actioned Audit trail at company level Audit trail at app level Retention settings To set up the retention period for your records select Settings. Audit trail records are retained for 30 days by default—records that are more than 30 days old are automatically removed from the audit trail on a daily basis. If you wish to retain records longer than 30 days, reset the retention period using the Settings option on the Audit Trail screen and select a new duration from the Retention drop-down list. Select Save to confirm the new setting. View audit results To view your audit trails, select View All from the Audit trail screen. The audit trails are grouped by company and app and they detail all activity over the duration of your retention period. You can: - Search for a particular audit trail: type the name of an audit trail you want to view. - Filter the results by Action: Created, Updated, Deleted, Duplicated, Archived, Stopped, and Launched. - Filter the results by Type: Enter the type of notification selected for your campaign. - Filter the results by Username: Select a particular user from the Select User drop-down list. - Select a specific date range: Select the relevant dates in the From and To calendars. Use the arrows on each of the calendars to change the month and year. Download CSV Administrators can download audit records at a company or application level. When you select Download CSV different options display depending on where the download is selected from: - Select Download CSV from the main audit logging screen to download all company or app audit records. - When reviewing user activities from the View All screen, Download CSV provides the option to download all records or a filtered set of records. Select Download CSV to confirm the selected option and perform the download.
http://docs.swrve.com/welcome/settings/swrve-dashboard-audit-logging/
2020-01-18T01:03:27
CC-MAIN-2020-05
1579250591431.4
[array(['https://docs.swrve.com/wp-content/uploads/2018/10/Company-settings.jpg', 'Dashboard audit logging'], dtype=object) ]
docs.swrve.com
- API > - API Resources > - Performance Advisor > - Get Slow Queries Get Slow Queries¶ On this page Return log lines for slow queries as determined by the Performance Advisor. Note Users without one of the following roles cannot successfully call the endpoint: Project Owneraccess Project Data Access Adminaccess Project Data Access Read/Writeaccess Project Data Access Read Onlyaccess The other Performance Advisor endpoints allow users without these roles to call the endpoints and receive redacted data..
https://docs.atlas.mongodb.com/reference/api/pa-get-slow-query-logs/
2020-01-18T02:00:39
CC-MAIN-2020-05
1579250591431.4
[]
docs.atlas.mongodb.com
Event Processor Host Best Practices Part 1 Azure Event Hubs is a very powerful telemetry ingestion service that was created by the Service Bus team. It went into General Availability in late October and today can be used to stream millions of events for very low cost. The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the competing-consumers pattern, partitioned consumers enable very high scale by removing the contention bottleneck and facilitating end-to-end parallelism. This pattern does require some tradeoffs that can be difficult to deal with - specifically reader coordination and offset tracking - both explained in the Event Hubs Overview on MSDN. The Event Processor Host addresses these concerns for you; you supply the event-processing logic by implementing the IEventProcessor interface. After implementing this class, instantiate EventProcessorHost, providing the necessary parameters to the constructor. - Hostname - be sure not to hard-code this; each instance of EventProcessorHost must have a unique value for this within a consumer group. - EventHubPath - this is an easy one. - ConsumerGroupName - also an easy one; "$Default" is the name of the default consumer group, but it generally is a good idea to create a consumer group for your specific aspect of processing. - EventHubConnectionString - this is the connection string to the particular event hub, which can be retrieved from the Azure portal. This connection string should have Listen permissions on the Event Hub. - StorageConnectionString - this is the storage account that will be used for partition distribution and leases. When checkpointing, the latest offset values will also be stored here. The next blog will discuss lease management, auto scale, and runtime options.
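The EventProcessorHost constructor itself belongs to the .NET SDK, so no real client code is shown here; instead, the sketch below is a purely hypothetical Python container that restates the five constructor parameters listed above and the constraints the post places on them.

from dataclasses import dataclass

@dataclass
class EventProcessorHostConfig:
    """Hypothetical holder for the constructor parameters described above;
    not a real SDK type."""
    host_name: str                    # must be unique per instance within a consumer group
    event_hub_path: str               # the Event Hub name
    consumer_group_name: str          # "$Default", or a dedicated consumer group
    event_hub_connection_string: str  # needs Listen permissions on the Event Hub
    storage_connection_string: str    # storage account used for leases and checkpoints

cfg = EventProcessorHostConfig(
    host_name="worker-01",            # do not hard-code this in a real deployment
    event_hub_path="telemetry",
    consumer_group_name="$Default",
    event_hub_connection_string="Endpoint=sb://<namespace>...;<listen-key>",
    storage_connection_string="DefaultEndpointsProtocol=https;AccountName=<acct>;...",
)
print(cfg.consumer_group_name)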
https://docs.microsoft.com/en-us/archive/blogs/servicebus/event-processor-host-best-practices-part-1
2020-01-18T02:01:00
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
Security Inside Windows Vista User Account Control Mark Russinovich At a Glance: - Running as a standard user - File and registry virtualization - Elevating account status User Account Control (UAC) is an often misunderstood feature in Windows Vista. In my three-part TechNet Magazine series on Windows Vista kernel changes, available online at technetmagazine.com, I didn’t cover UAC because I felt that it merited its own article. In this article I discuss the problems UAC solves and describe the architecture and implementation of its component technologies. These technologies include the refactoring of operations that previously required administrative rights, lightweight virtualization to help programs run correctly without administrative rights, the ability for programs to explicitly request administrative rights, and isolation of administrative processes from non-administrative processes running on the same user desktop. UAC’s Goal UAC is meant to enable users to run with standard user rights, as opposed to administrative rights. Administrative rights give users the ability to read and modify any part of the operating system, including the code and data of other users—and even Windows®. UAC had to address several problems to make it practical to run with a standard user account. First, prior to Windows Vista™, the Windows usage model has been one of assumed administrative rights. Software developers assumed their programs could access and modify any file, registry key, or operating system setting. Even when Windows NT® introduced security and differentiated between accesses granted to administrative and standard user accounts, users were guided through a setup process that encouraged them to use the built-in Administrator account or one that was a member of the Administrators group. The second problem UAC had to address was that users sometimes need administrative rights to perform such operations as installing software, changing the system time, and opening ports in the firewall. The UAC solution to these problems is to run most applications with standard user rights, obviate the need for administrator rights all the time, and encourage software developers to create applications that run with standard user rights. UAC accomplishes these by requiring administrative rights less frequently, enabling legacy applications to run with standard user rights, making it convenient for standard users to access administrative rights when they need them, and enabling even administrative users to run as if they were standard users. Running as Standard User A full audit of all administrative operations during the development of Windows Vista identified many that could be enabled for standard users without compromising the security of the system. For example, even corporations that adopted standard user accounts for their Windows XP desktop systems were unable to remove their travelling users from the Administrators group for the sole reason that Windows XP does not differentiate changing the time zone from changing the system time. A laptop user who wants to configure the local time zone so that their appointments show correctly in their calendar when they travel must have the have the "Change the system time" privilege (internally called SeSystemTimePrivilege), which by default is only granted to administrators. 
Time is commonly used in security protocols like Kerberos, but the time zone only affects the way that time is displayed, so Windows Vista adds a new privilege, "Change the time zone" (SeTimeZonePrivilege), and assigns it to the Users group, as seen in Figure 1. This makes it possible for many corporations to have their laptop users run under standard user accounts. Figure 1 The “Change the time zone” privilege (Click the image for a larger view) Windows Vista also lets standard users configure WEP settings when they connect to wireless networks, create VPN connections, change power management settings, and install critical Windows updates. In addition, it introduces Group Policy settings that enable standard users to install printer and other device drivers approved by IT administrators and to install ActiveX® controls from administrator-approved sites. What about consumer and line of business (LOB) applications that do not run correctly in standard user accounts? While some software legitimately requires administrative rights, many programs needlessly store user data in system-global locations. Microsoft recommends that global application installers that expect to run with administrative rights create a directory under the %ProgramFiles% directory to store their application’s executable files and auxiliary data and create a key under HKEY_LOCAL_MACHINE\Software for their application settings. up until Windows Vista, apps that incorrectly save user data and settings to these locations work anyway. Windows Vista present, permits the read attempt from the global location. For the purposes of this virtualization, Windows Vista treats a process as legacy if it’s 32-bit (versus 64-bit), is not running with administrative rights, and does not have a manifest file indicating that it was written for Windows Vista. Any operations not originating from a process classified as legacy according to this definition, including network file sharing accesses, are not virtualized. A process’s virtualization status is stored as a flag in its token, which is the kernel data structure that tracks the security context of a process, including its user account, group memberships, and privileges. You can see the virtualization status of a process by adding the Virtualization column to Task Manager’s Processes page. Figure 2 shows that most Windows Vista components, including Desktop Window Manager (Dwm .exe), Client Server Runtime Subsystem (Csrss.exe), and Explorer, either have virtualization disabled because they have a Windows Vista manifest or are running with administrative rights and hence do not allow virtualization. Internet Explorer® (iexplore.exe) has virtualization enabled because it can host multiple ActiveX controls and scripts and must assume that they were not written to operate correctly with standard user rights. Figure 2 Task Manager shows virtualization status (Click the image for a larger. To add additional extensions to the exception list, enter them in the following registry key and reboot: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Luafv\Parameters\ExcludedExtensionsAdd Use a multi-string type to delimit multiple extensions and do not include a leading dot in the extension name. Modifications to virtualized directories by legacy processes redirect to the user’s virtual root directory, %LocalAppData%\VirtualStore. 
For example, if a virtualized process that is running on my system creates C:\Windows\Application.ini, the file that it actually creates is C:\Users\Markruss\AppData\Local\VirtualStore\Windows\Application.ini. 3. Clicking the button navigates you to the corresponding VirtualStore subdirectory to show you the virtualized files. Figure 3 Compatibility Files button indicates virtualized files nearby (Click the image for a larger view) Figure 4 shows how the UAC File Virtualization Filter Driver (%SystemRoot%\System32\Drivers\Luafv.sys) implements file system virtualization. Because it’s a file system filter driver, it sees all file system operations, but it only implements functionality for operations from legacy processes. You can see that it changes the target file path for a legacy process that creates a file in a system-global location, but does not for a process running a Windows Vista application with standard user rights. The legacy process believes that the operation succeeds when it really created the file in a location fully accessible by the user, but default permissions on the \Windows directory deny access to the application written for Windows Vista. Figure 4 File system virtualization Registry virtualization is implemented slightly differently than flag, REG_ KEY_DONT_VIRTUALIZE, in the key itself. The Reg.exe utility can show the flag as well as the two other virtualization-related flags, REG_KEY_ DONT_SILENT_FAIL and REG_KEY_ RECURSE_FLAG, as seen in Figure 5. When REG_KEY_DONT_SILENT_FAIL is clear and the key is not virtualized (REG_KEY_DONT_VIRTUALIZE is set), a legacy application that would be denied access performing an operation on the key is instead granted any access the user has to the key rather than the ones the application requested. REG_KEY_RECURSE_FLAG indicates if new subkeys inherit the virtualization flags of the parent instead of just the default flags. Figure 5 Reg utility shows virtualization flags (Click the image for a larger view) Figure 6 shows how registry virtualization is implemented by the Configuration Manager, which manages the registry in the operating system kernel, Ntoskrnl.exe. As with file system virtualization, a legacy process creating a subkey of a virtualized key is redirected to the user’s registry virtual root, but a Windows Vista process is denied access by default permissions. Figure 6 Registry virtualization In addition to file system and registry virtualization, some applications require additional help to run correctly with standard user rights. For example, an application that tests the account in which it’s running for membership in the Administrators group might otherwise work, but won’t run if it’s not in that group. Windows Vista therefore defines a number of application-compatibility shims so that such applications work anyway. The shims most commonly applied to legacy applications for operation with standard rights are shown in Figure 7. Corporate IT professionals can use tools like the Application Compatibility Toolkit (ACT, available from technet.microsoft .com/windowsvista/aa905066.aspx) and its Standard User Analyzer (SUA) utility, or Aaron Margosis’s LUA Buglight to identify the shim requirements for their LOB applications. They assign shims to an application using the Compatibility Administrator, also part of ACT, and then deploy the resulting compatibility database (.sdb file) to their desktops via Group Policy. 
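To make the redirection described above concrete, here is a small illustrative Python sketch that reproduces the path mapping from the Application.ini example. It only manipulates path strings; it does not interact with the real Luafv.sys driver or the registry exclusion list.

import ntpath

def virtual_store_path(global_path: str, local_appdata: str) -> str:
    """Where a legacy process's write to a system-global path would be
    redirected under UAC file virtualization (illustration only)."""
    _drive, rest = ntpath.splitdrive(global_path)  # e.g. "\Windows\Application.ini"
    return ntpath.join(local_appdata, "VirtualStore") + rest

print(virtual_store_path(r"C:\Windows\Application.ini",
                         r"C:\Users\Markruss\AppData\Local"))
# C:\Users\Markruss\AppData\Local\VirtualStore\Windows\Application.ini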
Note that, if required, virtualization can be completely disabled for a system using a local security policy setting. Figure 7 Common user shims The Effects of Virtualization You can change the virtualization status of a process by selecting Virtualization from the context menu that appears when you right-click it in Task Manager. Figure A shows the behavior of a command prompt when its virtualization status changes. It starts out with virtualization disabled because it has a Windows Vista manifest. Because it’s running with standard user rights, it is unable to create a file in the \Windows directory, but after it becomes virtualized with Task Manager it appears to create the file successfully. When its virtualization returns to its disabled state it can’t find the file, which is actually in the user’s virtual store. Figure A A virtualization status change (Click the image for a larger view) Administrator Approval Mode Even if users run only programs that are compatible with standard user rights, some operations still require administrative rights. The vast majority of software installations require admin rights to create directories and registry keys in system-global locations or to install services or device drivers. Modifying system-global Windows and application settings also requires administrative rights, as does the Windows Vista parental controls feature. It would be possible to perform most of these operations by switching to a dedicated admin account, but the inconvenience of doing so would likely result in most users remaining in the administrative account to perform their daily tasks. Windows Vista therefore includes enhanced "run as" functionality so that standard users can conveniently launch processes with administrative rights. This functionality required giving applications a way to identify operations for which the system can get administrative rights on behalf of the application as necessary, which I’ll describe shortly. Further, so that users acting as system administrators can run with standard user rights, but not have to enter user names and passwords every time they want to access administrative rights, Windows Vista introduces Admin Approval Mode (AAM). This feature creates two identities for the user at logon: one with standard user rights and another with administrative rights. Since every user on a Windows Vista system is either a standard user or running for the most part as a standard user in AAM, developers must assume that all Windows users are standard users, which will result in more programs working with standard user rights without virtualization or shims. Granting a process administrative rights is called elevation. When it’s performed by a standard user account, it’s referred to as an Over the Shoulder (OTS) elevation because it requires the entry of credentials for an account that’s a member of the administrator’s group, something that’s usually completed by another user typing over the shoulder of the standard user. An elevation performed by an AAM user is called a Consent elevation because the user simply has to approve the assignment of his administrative rights. Windows Vista considers a user an administrator if the user is a member of any of the administrator-type groups listed in Figure 8. Many of the groups listed are used only on domain-joined systems and don’t directly give users local administrative rights, but allow them to modify domain-wide settings. 
If a user is a member of any of those groups, but not the actual administrators group, then the user accesses his administrative rights via OTS elevations instead of Consent elevations. Figure 8 Administrative groups When a user belonging to one of the listed groups logs on, Windows Vista creates a token representing the standard-user version of the user’s administrative identity. The new token is stripped of all the privileges assigned to the user except those listed in Figure 9, which are the default standard user privileges. In addition, any of the administrator-type groups are marked with the USE_FOR_DENY_ONLY flag in the new token. Figure 10 shows the Sysinternals Process Explorer (a process management tool you can download from microsoft .com/technet/sysinternals) displaying the group memberships and privileges of a process running with administrative rights on the left and without administrator rights on the right. (To prevent inadvertent use, the Windows security model requires that a privilege with the disabled flag be enabled before it can be used.) Figure 9 Standard user privileges Figure 10 AAM administrator and standard user tokens (Click the image for a larger view) A group with the deny-only flag can only be used to deny the user access to a resource, never to allow it, closing a security hole that could be created if the group was instead removed altogether. For example, if a file had an access control list (ACL) that denied the Administrators group all access, but granted some access to another group the user belongs to, the user would be granted access if the administrators group was absent from the token, giving the standard user version of the user’s identity more access than their administrator identity. Standalone systems, which are typically home computers, and domain-joined systems treat AAM access by remote users differently because domain-connected computers can use domain administrative groups in their resource permissions. When a user accesses a standalone computer’s file share, Windows requests the remote user’s standard user identity, but on domain-joined systems Windows honors all the user’s domain group memberships by requesting the user’s administrative identity. Conveniently Accessing Administrative Rights There are a number of ways the system and applications identify a need for administrative rights. One that shows up in the Explorer UI is the "Run as administrator" context menu entry and shortcut option. These items include a colored shield icon that should be placed on any button or menu item that will result in an elevation of rights when it is selected. Choosing the "Run as administrator" entry if the image has the words setup, install, or update in its file name or internal version information; more sophisticated ones involve scanning for byte sequences in the executable that are common to third-party installation wrapper utilities. The image loader also calls the application compatibility (appcompat) library to see if the target executable requires administrator rights. The library looks in the application compatibility database to see if the executable has the RequireAdministrator or RunAsInvoker compatibility flags associated with it. The most common way for an executable to request administrative rights is for it to include a requestedElevationLevel tag in its application manifest file. Manifests are XML files that contain supplementary information about an image. 
They were introduced in Windows XP as a way to identify dependencies on side-by-side DLL and Microsoft .NET Framework assemblies. The presence of the trustInfo element in a manifest (which you can see in the excerpted string dump of Firewallsettings.exe below), denotes an executable that was written for Windows Vista and the requestedElevationLevel element nests within it. The element’s level attribute can have one of three values: asInvoker, highestAvailable, and requireAdministrator. <trustInfo xmlns=”urn:schema-microsoft-com:asm.v3”> <security> <requestedPrivileges> <requestedExecutionLevel Level=”requireAdministrator” uiAccess=”false”/> </requestedPrivileges> </security> </trustInfo> Executables with no need for administrative rights, like Notepad.exe, specify the asInvoker value. Some executables expect administrators to always want maximum access, so they use the highestAvailable value. A user who runs an executable with that value will be asked to elevate only if he is running in AAM or considered an administrator according to the rules defined earlier and must elevate in order to access his administrative rights. Regedit.exe, Mmc.exe, and Eventvwr.exe are examples of applications that use highestAvailable. Finally, requireAdministrator always causes an elevation request and is used by any executable that will fail to operate without administrative rights. Accessibility applications specify "true" for the uiAccess attribute in order to drive the window input of elevated processes, and they must also be signed and in one of several secure locations, including %SystemRoot% and %ProgramFiles%, to get that power. An easy way to determine the values specified by an executable is to view its manifest with the Sysinternals Sigcheck utility like this: sigcheck –m <executable> Executing an image that requests administrative rights causes the Application Information Service (also known as only accessible to the Local System account, paints the bitmap as the background, and displays an elevation dialog box that contains information about the executable. Displaying on a separate desktop prevents any malware present in the user’s account from modifying the appearance of the dialog box. Figure 11 OTS elevation dialogs (Click the image for a larger view) If an image is a Windows component digitally signed by Microsoft and the image is in the Windows system directory, then the dialog displays a blue stripe across the top as shown at the top of Figure 11. The gray stripe (middle dialog) is for images that are digitally signed by someone other than Microsoft, and the orange stripe (bottom dialog) is for unsigned images. The elevation dialog shows the image’s icon, description, and publisher for digitally signed images, but only a generic icon, the file name, and "Unidentified Publisher" for unsigned images. This makes it harder for malware to mimic the appearance of legitimate software. The Details button at the bottom of the dialog expands to show the command line that will be passed to the executable if it launches. The AAM Consent dialog, shown in Figure 12, is similar, but instead of prompting for administrator credentials the dialog has Continue and Cancel buttons. Figure 12 AAM elevation dialog (Click the image for a larger view)’s parent process ID to that of the process that originally launched it (see Figure 13). That’s why elevated processes don’t appear as children of the AIS Service Hosting process in tools like Process Explorer that show process trees. 
Figure 13 Elevation Flow Even though elevation dialogs appear on a separate secure desktop, users have no way by default of verifying that they are viewing a legitimate dialog and not one presented by malware. That isn’t an issue for AAM because malware can’t gain administrative rights with a faked Consent dialog, but malware could wait for a standard user’s OTS elevation, intercept it, and use a Trojan horse dialog to capture administrator credentials. With those credentials they can gain access to the administrator’s account and infect it. For this reason, OTS elevations are strongly discouraged in corporate environments. To disable OTS elevations (and reduce help desk calls), run the Local Security Policy Editor (Secpol.msc) and configure "User Account Control: Behavior of the elevation prompt for standard users" to "Automatically deny elevation requests." Home users who are security-conscious should configure the OTS elevations to require a Secure Attention Sequence (SAS) that malware cannot intercept or simulate. Configure SAS by running the Group Policy Editor (Gpedit.msc), navigating to Computer Configuration | Administrative Templates | Windows Components | Credential User Interface, and enabling "Require trusted path for credential entry." After doing so you will be required to enter Ctrl+Alt+Delete to access the elevation dialog. Isolating Elevated Processes Windows Vista places a barrier around elevated processes to protect them from malware running on the same desktop with standard user rights. Without a barrier, malware could drive an administrative application by sending it synthesized mouse and window input via window messages. And although the standard Windows security model prevents malware running in a process with standard user rights from tampering with an elevated process running as a different user, it does not stop malware running as the standard-rights version of an administrative user from opening the user’s elevated processes, injecting code into them, and starting threads in them to execute the injected code. The Windows Vista shield for window messages is called User Interface Privilege Isolation (UIPI). It’s based on the new Windows Integrity Mechanism that Windows Vista also uses as the barrier around elevated processes. In this new security model, all processes and objects have integrity levels, and an object’s integrity policy can restrict the accesses that would otherwise be granted to a process by the Windows Discretionary Access Control (DAC) security model. Integrity levels (IL) are represented by Security Identifiers (SIDs), which also represent users and groups, where the level is encoded in the SID’s Relative Identifier (RID). Figure 14 shows the display name, SID, and hexadecimal version of the SID’s RID for the four primary ILs. The hex numbers reveal gaps of 0x1000 between each level that allows for intermediate levels for use by UI accessibility applications as well as future growth. Figure 14 Primary integrity levels Figure 15 lists the object IL policies and the access types they restrict, which correspond to the generic accesses defined for an object. For example, No-Write-Up prevents a lower IL process from gaining any of the accesses represented by the GENERIC_WRITE accesses. 
The default policy for most objects, including files and registry keys, is No-Write-Up, which prevents a process from obtaining write access to objects that have a higher IL than itself, even if the object’s discretionary access control list (DACL) grants the user such access. The only objects with a different policy are the process and thread objects. Their policy, No-Write-Up plus No-Read-Up, stops a process running at a lower IL from injecting code into and reading data—like passwords—out of a process that has a higher IL. Figure 15 Integrity level policies. Figure 16 Sample process integrity level assignment\iCacls.exe) to view the ILs of files and the Sysinternals AccessChk utility to view the ILs of files, registry keys, services and processes. Figure 17 reveals that the IL of a directory that needs to be accessible by Protected Mode Internet Explorer is Low. Figure 17 AccessChk showing the IL of a user’s favorites directory (Click the image for a larger view) If an object has an explicit IL, it is stored in an access control entry (ACE) of a type new to Windows Vista, the Label ACE, in the System Access Control List (SACL) of the object’s security descriptor (see Figure 18). The SID in the ACE corresponds to the object’s IL, and the ACE’s flags encode the object’s integrity policy. Prior to Windows Vista, SACLs stored only auditing ACEs, which require the "Manage auditing and security log" privilege (SeSecurityPrivilege), but reading a Label ACE requires only Read Permissions (READ_CONTROL) access. For a process to modify an object’s IL it must have Change Owner (WRITE_OWNER) access to the object and an IL that’s equal to or higher than the object, and the process can only set the IL to its own IL or lower. The new "Modify an object label" (SeRelabelPrivilege) privilege gives a process the ability to change the IL of any object the process has access to and even raise the IL above the process’s own IL, but by default that privilege is not assigned to any account. Figure 18 An object's Label ACE When a process tries to open an object, the integrity check happens before the standard Windows DACL check in the kernel’s SeAccessCheck function. Given the default integrity policies, a process can only open an object for write access if its IL is equal to or higher than the object’s IL and the DACL also grants the process the accesses it desires. For example, a Low IL process cannot open a Medium IL process for write access, even if the DACL grants the process write access. With the default integrity policy, processes can open any object—with the exception of process and thread objects—for read access so long as the object’s DACL grants them read access. That means a process running at Low IL can open any files accessible to the user account in which it’s running. Protected Mode Internet Explorer uses ILs to help prevent malware that infects it from modifying user account settings, but it does not stop malware from reading the user’s documents. Process and thread objects are exceptions because their integrity policy also includes No-Read-Up. That means a process’s IL must be equal to or higher than the IL of the process or thread it wants to open and the DACL must grant it the accesses it wants for an open to succeed. Assuming the DACLs allow the desired access, Figure 19 shows the accesses that the processes running at Medium and Low have to other processes and objects. 
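The access check just described can be summarized in a few lines of illustrative Python. The numeric constants follow the 0x1000 spacing of the primary integrity levels mentioned earlier; this is a conceptual model of the default No-Write-Up policy, not how SeAccessCheck is actually implemented.

# Primary integrity levels, spaced 0x1000 apart (illustrative constants).
LOW, MEDIUM, HIGH, SYSTEM = 0x1000, 0x2000, 0x3000, 0x4000

def write_access_granted(process_il: int, object_il: int, dacl_grants_write: bool) -> bool:
    """Default No-Write-Up policy: the integrity check runs before the DACL
    check, so a lower-IL process never gets write access to a higher-IL
    object, even if the DACL would allow it."""
    if process_il < object_il:
        return False
    return dacl_grants_write

# A Low-IL process (e.g. Protected Mode Internet Explorer) cannot write a Medium-IL object:
print(write_access_granted(LOW, MEDIUM, dacl_grants_write=True))     # False
print(write_access_granted(MEDIUM, MEDIUM, dacl_grants_write=True))  # True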
Figure 19 Object and Process Accesses The Windows messaging subsystem also honors integrity levels to implement UIPI by preventing a process from sending all but a few informational windows messages to the windows owned by a process having a higher IL. That disallows standard user processes from driving input into the windows of elevated processes or from shattering an elevated process by sending it malformed messages that trigger internal buffer overflows. Processes can choose to allow additional messages past the guard by calling the ChangeWindowMessageFilter API. UIPI also blocks window hooks from affecting the windows of higher IL processes so that a standard user process can’t log the key strokes the user types into an administrative app, for example. Elevations and Security Boundariess only identify the executable that will be elevated; they say nothing about what it will do when it executes. The executable will process command-line arguments, load DLLs, open data files, and communicate with other processes. Any of those operations could conceivably allow malware to compromise the elevated process and thus gain administrative rights. Playing in a Low-IL Sandbox Protected Mode Internet Explorer runs at Low IL to create a fence around malware that might infect its process. This prevents the malware from changing the user’s account settings and installing itself in an autostart location. You can use the Sysinternals PsExec utility, along with the -l switch, to launch arbitrary processes at Low IL in order to explore the sandbox. Figure B shows how a command prompt running at Low IL can’t create a file in the user’s temporary directory, which has a Medium IL, but can do so in the Internet Explorer temporary directory, which has a Low IL. Figure B Command prompt can only create files in similar IL (Click the image for a larger view). For example, the common control dialogs load Shell extensions configured in a user’s registry key (under HKEY_CURRENT_USER), so malware can add itself as an extension to load into any elevated process that uses those dialogs. Even processes elevated from standard user accounts can conceivably be compromised because of shared state. All the processes running in a logon session share the internal namespace where Windows stores objects such as events, mutexes, semaphores, and shared memory. If malware knows that an elevated process will try to open and read a specific shared memory object when the process starts, it could create the object with contents that trigger a buffer overflow to inject code into the elevated process. That type of attack is relatively sophisticated, but its possibility prevents OTS elevations from being a security boundary. The bottom line is that elevations were introduced as a convenience that encourages users who want to access administrative rights to run with standard user rights by default. Users wanting the guarantees of a security boundary can trade off convenience by using a standard user account for daily tasks and Fast User Switching (FUS) to a dedicated administrator account to perform administrative operations. On the other hand, users who want to forgo security in favor of convenience can disable UAC on a system in the User Accounts dialog in the Control Panel, but should be aware that this also disables Protected Mode for Internet Explorer. Conclusion.
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/cc138019(v=msdn.10)?redirectedfrom=MSDN
2020-01-17T23:52:03
CC-MAIN-2020-05
1579250591431.4
[array(['images/cc138019.fig01.gif', 'Figure 1 The “Change the time zone” privilege'], dtype=object) array(['images/cc138019.fig02.gif', 'Figure 2 Task Manager shows virtualization status'], dtype=object) array(['images/cc138019.fig03.gif', 'Figure 3 Compatibility Files button indicates virtualized files nearby'], dtype=object) array(['images/cc138019.fig04.gif', 'Figure 4 File system virtualization'], dtype=object) array(['images/cc138019.fig05.gif', 'Figure 5 Reg utility shows virtualization flags'], dtype=object) array(['images/cc138019.fig06.gif.gif', 'Picture of Registry virtualization Picture of Registry virtualization'], dtype=object) array(['images/cc138019.figa.gif', 'Figure A A virtualization status change'], dtype=object) array(['images/cc138019.fig10.gif', 'Figure 10 AAM administrator and standard user tokens'], dtype=object) array(['images/cc138019.fig11.gif', 'Figure 11 OTS elevation dialogs'], dtype=object) array(['images/cc138019.fig12.gif', 'Figure 12 AAM elevation dialog'], dtype=object) array(['images/cc138019.fig13.gif', 'Figure 13 Elevation Flow'], dtype=object) array(['images/cc138019.fig17.gif', 'Figure 17 AccessChk showing the IL of a user’s favorites directory'], dtype=object) array(['images/cc138019.fig18.gif', "Figure 18 An object's Label ACE"], dtype=object) array(['images/cc138019.fig19.gif', 'Figure 19 Object and Process Accesses'], dtype=object) array(['images/cc138019.figb.gif', 'Figure B Command prompt can only create files in similar IL'], dtype=object) ]
docs.microsoft.com
Scandit Barcode Scanner for iOS Documentation API Reference Getting Started Guides Present the scanner Basic scanning settings - Configure the scanner's key settings - Define the scanning area - Change the length of barcodes to decode Advanced use cases - Select one barcode among many - Reject a barcode - Scan multiple barcodes at once with MatrixScan - Configure OCR - Capture camera frames while scanning
https://docs.scandit.com/stable/ios/
2020-01-18T00:37:39
CC-MAIN-2020-05
1579250591431.4
[]
docs.scandit.com
Customization¶ There are a lot of ways to customize datalad-hirni. Some things are just a matter of configuration settings, while others involve a few lines of (Python) code. Configuration¶ As a DataLad extension, datalad-hirni uses DataLad’s config mechanism. It just adds some additional variables. If you look for a possible configuration to change some specific behaviour of the commands, refer also to the help pages for those commands. Please don’t hesitate to file an issue on GitHub if there’s something you would like become configurable as well. - datalad.hirni.toolbox.url This can be used to overwrite the default url to get the toolbox from. The url is then respected by the cfg_hirniprocedure. Please note, that therefore it will have no effect, if the toolbox was already installed into your dataset. This configuration may be used to refer to an offline version of hirni’s toolbox or to switch to another toolbox dataset altogether. - datalad.hirni.studyspec.filename - Use this configuration to change the default name for specification files ( studyspec.json). - datalad.hirni.dicom2spec.rules - Set this to point to a Python file defining rules for how to derive a specification from DICOM metadata. (See below for more on implementing such rules). This configuration can be set multiple times, which will result in those rules overwriting each other. Therefore the order in which they are specified matters, with the later rules overwriting earlier ones. As with any DataLad configuration in general, the order of sources would be system, global, local, dataset. This could be used for having institution-wide rules via the system level, a scanner-based rule at the global level (of a specific computer at the scanner site), user-based and study-specific rules, each of which could either go with what the previous level decided or overwrite it. - datalad.hirni.import.acquisition-format - This setting allows to specify a Python format string, that will be used by datalad hirni-import-dcmif no acquisition name was given. It defines the name to be used for an acquisition (the directory name) based on DICOM metadata. The default value is {PatientID}. Something that is enclosed with curly brackets will be replaced by the value of a variable with that name everything else is taken literally. Every field of the DICOM headers is available as such a variable. You could also combine several like {PatientID}_{PatientName}. Procedures¶ DataLad procedures are used in different places with datalad-hirni. Wherever this is the case you can use your own procedure instead (or in addition). Most notably procedures are the drivers of the conversion and therefore the pieces used to plugin arbitrary conversion routines (in fact, the purpose of a procedure is up to you - for example, one can use the conversion specification and those procedures for preprocessing as well). The following is an outlining of how this works. A specification snippet defines a list of procedures and how exactly they are called. Any DataLad procedure can be referenced therein, it is, however, strongly recommended to include them in the dataset they are supposed to run on or possibly in a subdataset thereof (as is the case with the toolbox). For full reproducibility you want to avoid referencing a procedure, that is not tracked by the dataset or any of its subdataset. Sooner or later this would be doomed to become a reference to nowhere. Those procedures are then executed by datalad hirni-spec2bids in the order they are appearing in that list. 
A single entry in that list is a dictionary, specifying the name of the procedure and, optionally, a format string to use for calling it and, also optionally, a flag indicating whether it should be executed only if datalad hirni-spec2bids was called with --anonymize. For example, here is (a part of) the specification snippet for the DICOM image series and another one specifying the use of the “copy converter” for an events.tsv file (both taken from the demo dataset, acquisition2; see the study dataset demo for context):
{"location": "dicoms",
 "dataset-id": "7cef7b58-400d-11e9-a522-e8b1fc668b5e",
 "dataset-refcommit": "2f98e53c171d410c4b54851f86966934b78fc870",
 "type": "dicomseries:all",
 "procedures": [
   {"procedure-name": {"approved": false, "value": "hirni-dicom-converter"},
    "procedure-call": {"approved": false, "value": null},
    "on-anonymize": {"approved": false, "value": false}
   }
 ]
}
{"location": "events.tsv",
 "dataset-id": "3f27c348-400d-11e9-a522-e8b1fc668b5e",
 "dataset-refcommit": "4cde2dc1595a1f3ba694f447dbb0a1b1ec99d69c",
 "type": "events_file",
 "procedures": [
   {"procedure-name": {"approved": true, "value": "copy-converter"},
    "procedure-call": {"approved": true, "value": "bash {script} {{location}} {ds}/sub-{{bids-subject}}/func/sub-{{bids-subject}}_task-{{bids-task}}_run-{{bids-run}}_events.tsv"}
   }
 ]
}
Such format strings defining the call can use replacements (TODO: refer to datalad-run/datalad-run-procedure) by enclosing valid variables in curly brackets, which are then replaced by the values of those variables when the procedure is executed. For procedures referenced in the specification snippets and executed by datalad hirni-spec2bids, all fields of the currently processed specification snippet are available this way and can be passed to the procedures. That way any conversion routine you might want to turn into (likely wrap into) such a procedure can be made aware of all the metadata recorded in the respective snippet. The format string defining how exactly a particular procedure should be called can be provided by the procedure itself, if that procedure is registered in a dataset. This is treated as a default and can be overwritten by the specification. If the default is sufficiently generic, the call-format field in the specification can remain empty. The only specification field actually mandatory for a procedure is procedure-name, of course. - TODO - have an actual step-by-step example implementation of a (conversion) procedure Rules¶ The rule system to derive a specification for DICOM image series from the DICOM metadata consists of two parts. One is a configuration determining which existing rule(s) to use and the other is providing such rules that can then be configured to be the one(s) to be used. - TODO - config vs. implementation - TODO - Say a thing or two about those - TODO - likely walk through a reasonably small example implementation
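Returning to the call-format strings shown in the snippets above: my reading (an assumption, not something stated explicitly in these docs) is that singly-braced placeholders such as {script} and {ds} are filled in by the procedure machinery, while the doubly-braced ones survive that step as literal braces and are then filled from the specification snippet's fields. The sketch below only demonstrates that two-pass string expansion in plain Python with hypothetical values; it does not call datalad.

call_format = ("bash {script} {{location}} "
               "{ds}/sub-{{bids-subject}}/func/"
               "sub-{{bids-subject}}_task-{{bids-task}}_run-{{bids-run}}_events.tsv")

# Pass 1: placeholders resolved by the procedure machinery (hypothetical values).
step1 = call_format.format(script="code/procedures/copy_converter.sh",
                           ds="/data/studies/demo/bids")

# Pass 2: fields taken from the specification snippet being processed.
snippet_fields = {"location": "events.tsv", "bids-subject": "02",
                  "bids-task": "oneback", "bids-run": "1"}
print(step1.format_map(snippet_fields))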
http://docs.datalad.org/projects/hirni/en/latest/customization.html
2020-01-18T00:44:07
CC-MAIN-2020-05
1579250591431.4
[]
docs.datalad.org
Import Data into Cluster¶ Data Import Tools¶ You can bring data from existing MongoDB deployments or JSON/CSV files into Atlas using one of the following: You can also restore backup data from one Atlas cluster to another Atlas cluster. For information, see Restore a Cluster from a Continuous Backup Snapshot. Import Strategies for Common Cluster Configurations¶ The following table covers the best import strategy for common cluster configurations. Note If you have a source cluster with authentication and you wish to use an import strategy which includes using mongorestore with the --oplogReplay option, you must delete the admin directory from the dump directory created by mongodump. The admin directory contains database user information which you cannot add to an Atlas cluster with mongorestore. Atlas has built-in support for scaling an M0 Free Tier cluster to an M10+ paid cluster. Alternatively, use mongodump and mongorestore to copy data from an M0 Free Tier cluster to an M10+ cluster. See Seed with mongorestore.
https://docs.atlas.mongodb.com/import/
2020-01-18T02:01:03
CC-MAIN-2020-05
1579250591431.4
[]
docs.atlas.mongodb.com
Simple Analyzer¶ The simple analyzer divides text into searchable terms wherever it finds a non-letter character, such as whitespace or punctuation. It converts all terms to lower case.
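As a rough illustration of the behaviour just described (this is not the analyzer's actual implementation), the following Python sketch splits on non-letter characters and lower-cases the resulting terms.

import re

def simple_analyze(text: str):
    """Split wherever a non-letter occurs and lower-case every term."""
    return [term.lower() for term in re.split(r"[^A-Za-z]+", text) if term]

print(simple_analyze("It's a first-class TEST, no. 42!"))
# ['it', 's', 'a', 'first', 'class', 'test', 'no']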
https://docs.atlas.mongodb.com/reference/full-text-search/analyzers/simple/
2020-01-18T01:52:39
CC-MAIN-2020-05
1579250591431.4
[]
docs.atlas.mongodb.com
Account does not have performance counter permissions - CRM 2011 When installing CRM 2011, if you come across an error stating that the account does not have performance counter permissions, you need to update the domain account that you selected for the CRM services. To fix this, do the following: 1) Log on to the CRM application server. 2) Open the Computer Management tool. 3) Select Local Users and Groups -> Groups. 4) Select the "Performance Log Users" group and add your CRM service account.
https://docs.microsoft.com/en-us/archive/blogs/johnsullivan/account-does-not-have-performance-counter-permissions-crm-2011
2020-01-18T02:16:51
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
Fix web compatibility issues using document modes and the Enterprise Mode site list The Internet Explorer 11 Enterprise Mode site list lets you specify document modes for specific websites, helping you fix compatibility issues without changing a single line of code on the site. This addition to the site list is a continuation of our commitment to help you upgrade and stay up-to-date on the latest version of Internet Explorer, while still preserving your investments in existing apps. What does this mean for me? Enterprises can have critical apps that are coded explicitly for a specific browser version and that might not be in their direct control, making it very difficult and expensive to update to modern standards or newer browser versions. Because you can decide which URLs should open using specific document modes, this update helps ensure better compatibility, faster upgrades, and reduced testing and fixing costs. How does this fix work? You can continue to use your legacy and orphaned web apps, by specifying a document mode in the centralized Enterprise Mode site list. Then, when IE11 goes to a site on your list, the browser loads the page in the specified document mode just as it would if it were specified through an X-UA-Compatible meta tag on the site. For more information about document modes and X-UA-compatible headers, see Defining document compatibility. Important Enterprise Mode takes precedence over document modes, so sites that are already included in the Enterprise Mode site list won’t be affected by this update and will continue to load in Enterprise Mode, as usual. When do I use document modes versus Enterprise Mode? While the <emie> functionality provides great compatibility for you on Windows Internet Explorer 8 or Windows Internet Explorer 7, the new <docMode> capabilities can help you stay up-to-date regardless of which versions of IE are running in your environment. Because of this, we recommend starting your testing process like this: If your enterprise primarily uses Internet Explorer 8 or Internet Explorer 7 start testing using Enterprise Mode. If your enterprise primarily uses Windows Internet Explorer 9 or Internet Explorer 10, start testing using the various document modes. Because you might have multiple versions of IE deployed, you might need to use both Enterprise Mode and document modes to effectively move to IE11. Test your sites for document mode compatibility To see if this fix might help you, run through this process one step at a time, for each of your problematic sites: Go to a site having compatibility problems, press F12 to open the F12 Developer Tools, and go to the Emulation tool. Starting with the 11 (Default) option, test your broken scenario. If that doesn’t work, continue down to the next lowest document mode, stopping as soon as you find a document mode that fixes your problems. For more information about the Emulation tool, see Emulate browsers, screen sizes, and GPS locations. If none of the document modes fix your issue, change the Browser Profile to Enterprise, pick the mode you want to test with starting with 8 (IE8 Enterprise Mode), and then test your broken scenario. Add your site to the Enterprise Mode site list After you’ve figured out the document mode that fixes your compatibility problems, you can add the site to your Enterprise Mode site list. Note There are two versions of the Enterprise Mode site list schema and the Enterprise Mode Site List Manager, based on your operating system. 
For more info about the schemas, see Enterprise Mode schema v.2 guidance or Enterprise Mode schema v.1 guidance. For more info about the different site list management tools, see Use the Enterprise Mode Site List Manager. To add your site to the site list Open the Enterprise Mode Site List Manager, and click Add. Add the URL and pick the document mode from the Launch in box. This should be the same document mode you found fixed your problems while testing the site. Similar to Enterprise Mode, you can specify a document mode for a particular web path—such as contoso.com/ERP—or at a domain level. In the above, the entire contoso.com domain loads in Enterprise Mode, while microsoft.com is forced to load into IE8 Document Mode and bing.com loads in IE11. Note For more information about Enterprise Mode, see What is Enterprise Mode? For more information about the Enterprise Mode Site List Manager and how to add sites to your site list, see Enterprise Mode Site List Manager. Review your Enterprise Mode site list Take a look at your Enterprise Mode site list and make sure everything is the way you want it. The next step will be to turn the list on and start to use it in your company. The Enterprise Mode Site List Manager will look something like: And the underlying XML code will look something like: <rules version="1"> <emie> <domain exclude="false">bing.com<path exclude="false" forceCompatView="true">/images</path></domain> <domain exclude="true"><path exclude="true">/news</path></domain> </emie> <docmode /> <docMode> <domain docMode="edge">timecard</domain> <domain docMode="edge">tar</domain> <domain docMode="9">msdn.microsoft.com</domain> </docMode> </rules> Turn on Enterprise Mode and using your site list If you haven’t already turned on Enterprise Mode for your company, you’ll need to do that. You can turn on Enterprise Mode using Group Policy or your registry. For specific instructions and details, see Turn on Enterprise Mode and use a site list. Turn off default Compatibility View for your intranet sites By default, IE11 uses the Display intranet sites in Compatibility View setting. However, we’ve heard your feedback and know that you might want to turn this functionality off so you can continue to upgrade your web apps to more modern standards. To help you move forward, you can now use the Enterprise Mode site list to specify sites or web paths to use the IE7 document mode, which goes down to IE5 “Quirks” mode if the page doesn’t have an explicit DOCTYPE tag. Using this document mode effectively helps you provide the Compatibility View functionality for single sites or a group of sites, which after thorough testing, can help you turn off Compatibility View as the default setting for your intranet sites. Related topics - Download the Enterprise Mode Site List Manager (schema v.2) - Download the Enterprise Mode Site List Manager (schema v.1) - Enterprise Mode Site List Manager
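To illustrate how the <docMode> section pairs domains with document modes, here is a short Python sketch that reads a simplified v.1-style site list (modeled on the XML above) and reports the mode recorded for a given host. It is only an illustration of the mapping, not a validator for the real schema.

import xml.etree.ElementTree as ET

site_list = """
<rules version="1">
  <docMode>
    <domain docMode="9">msdn.microsoft.com</domain>
    <domain docMode="edge">timecard</domain>
  </docMode>
</rules>
"""

modes = {d.text.strip(): d.get("docMode")
         for d in ET.fromstring(site_list).findall("./docMode/domain")}
print(modes.get("msdn.microsoft.com"))  # 9
print(modes.get("timecard"))            # edge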
https://docs.microsoft.com/en-us/internet-explorer/ie11-deploy-guide/fix-compat-issues-with-doc-modes-and-enterprise-mode-site-list?redirectedfrom=MSDN
2020-01-18T01:20:27
CC-MAIN-2020-05
1579250591431.4
[array(['images/emie-sitelistmgr.png', 'Enterprise Mode Site List Manager, showing the different modes'], dtype=object) ]
docs.microsoft.com
We are preparing a new source of documentation for you. Work in progress! Difference between revisions of "FetchXML" Revision as of 10:23, 5 December 2019 FetchXML is a proprietary data query language used for Dynamics CRM server. It is only used for querying data from the server, you cannot perform create, update, or delete operations using FetchXML. FetchXML is more or less equivalent to SQL. A fetch can be translated to an SQL command and executed on the database. An SQL query can be translated to FetchXML language. Basic syntax of FetchXML can be found in Microsoft documentation. It is based on XML format. Contents FetchXML and Resco Resco has also adopted this language and uses it as the main (reference) query language: - in server queries - in Sync Filter - to define records listed in a view, etc. Resco Mobile CRM uses FetchXML for communication with all CRM servers or databases: - In offline mode, FetchXML query is translated to SQL and executed in the local database - In online mode, FetchXML query is read by the appropriate server-specific service that translates it as needed: - Crm4Service for legacy Microsoft Dynamics CRM 4.0 - Crm2011Service for newer Dynamics versions - XrmService for Resco CRM server - SalesforceService for Salesforce (translates FetchXML into SOQL - Salesforce Object Query Language, json based) In Resco environment, FetchXML queries can be created in several ways: - Woodford administrators may build fetches, most notably when using the Filter editor. - JavaScript developers sometimes write FetchXML queries for Offline HTML custom functions. - Resco developers use FetchXML queries in the source code of Mobile CRM apps. People responsible for troubleshooting synchronization problems can also encounter FetchXML queries and responses when tracing HTTP communication, for example using Fiddler. Fetch anatomy Simple fetch is a query where we specify - Entity (database table) - Which columns we want (possibly all columns) - Aliases (usually for linked entities, but also for result columns) - Filter, i.e., which rows we want (possibly all) - Sort order of the returned rows (optional) - Additional directives such as Count or Distinct (optional) The result of such a query is a table, which is a subset of the original entity table. (Selected rows, selected columns, possibly different sort order.) More complex queries can query several tables at once, whereby these tables must be linked by lookups. Even more complex fetches can execute additional processing such as compute aggregates (counts, average values...) or group the result rows. Condition operators Operators are used in the filters of the queries. Many operators are described here; you can find some examples here. Standard set of operators This is a subset of Dynamics CRM FetchXML operators that Resco considers as standard: - DateTime operators - on, on-or-before, on-or-after - today, yesterday, tomorrow - {this|next|last}-{week|month|year} - {olderthan|last|next}-x-{hours|days|weeks|months|years} - Other operators - eq, ne, le, lt, ge, gt - {eq|ne}-{userid|businessid} - eq-userteams, eq-useroruserteams - [not-]{null|like|between} - [not-]in For the exact meaning of individual operators consult Microsoft documentation. Operators unsupported by Resco The FetchXML standard is not static; Microsoft occasionally adds new operators. The following operators are known as not supported by Resco. The list might not be exhaustive. We might add support for these operators in the future. 
- {next|last}-seven-days - olderthan-x-minutes - all operators that contain the word fiscal Resco specific syntax We have extended the FetchXML language that we use internally to include some additional constructs. The following macros can be used in condition values for particular operators: @@TODAY+<n> or @@TODAY-<n> can be used with the operators on-or-before and on-or-after. Examples: @@TODAY, @@TODAY+2, @@TODAY-123. The macro is replaced by today's date in the format "2019-07-31T00:00:00.0000000+02:00". @@currentlanguage@@ can be used with the operator eq. It is replaced by the currently loaded language ID used in the mobile app, for example: "en-US", "fr-FR", etc. Special Resco operators - eq-customerid - Interpreted as the operator EQ used with value = {appSettings.CustomerId} (*) - eq-customeruserid - Interpreted as the operator EQ used with value = {settings.CustomerUserId} (*) (*) These appSettings are set up in customer mode login. Treatment of unsupported operators What happens if the destination party does not understand the operators (for example, if you attempt to send custom Resco syntax to Dynamics)? The behavior depends on the party. - Fetches to the database of the mobile CRM app - Only the standard set of operators is supported. Unsupported conditions are silently evaluated as "not-null". - Fetches to Dynamics server - The standard set of operators (as well as all other Dynamics operators) is supported. Unsupported operators return an error. - Fetches to Resco CRM server - The standard set of operators is supported. The fetch is sent to the server without any modification. Unsupported conditions are silently evaluated as "not-null". - Fetches to Salesforce server - The standard set of operators is supported, with the following exceptions: last-x-hours, next-x-hours, not-between, eq-businessid, ne-businessid, eq-userteams, eq-useroruserteams. Salesforce throws a NotSupportedException for these operators. FetchXML limitations in Dynamics - A FetchXML query can return at most 5000 (result) rows. We use paging to retrieve additional records (the next page). During synchronization, it can make sense to use a smaller page size (fewer records per page). See Speed up synchronization for more details. - FetchXML supports the following aggregate functions: sum, avg, min, max, count. If the query would return more than 50,000 records, the query fails with the error "AggregateQueryRecordLimit exceeded".
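For context, here is a small FetchXML query that uses only operators from the standard set above (last-x-days), wrapped in a short Python snippet that parses it with ElementTree simply to show the structure. The entity and attribute names are illustrative.

import xml.etree.ElementTree as ET

fetch = """
<fetch>
  <entity name="account">
    <attribute name="name" />
    <attribute name="modifiedon" />
    <filter type="and">
      <condition attribute="modifiedon" operator="last-x-days" value="30" />
    </filter>
    <order attribute="modifiedon" descending="true" />
  </entity>
</fetch>
"""

root = ET.fromstring(fetch)                       # well-formed FetchXML parses cleanly
print(root.find("entity").get("name"))            # account
print(root.find(".//condition").get("operator"))  # last-x-days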
https://docs.resco.net/mediawiki/index.php?title=FetchXML&curid=190&diff=1558&oldid=1060
2020-01-18T01:42:43
CC-MAIN-2020-05
1579250591431.4
[]
docs.resco.net
See Also: BufferedInputStream Members Wraps an existing Java.IO.InputStream. Example (Java): BufferedInputStream buf = new BufferedInputStream(new FileInputStream("file.java"));
http://docs.go-mono.com/monodoc.ashx?link=T%3AJava.IO.BufferedInputStream
2020-01-17T23:55:56
CC-MAIN-2020-05
1579250591431.4
[]
docs.go-mono.com
fortios_wireless_controller_hotspot20_anqp_ip_address_type – Configure IP address type availability.
fortios_wireless_controller_hotspot20_anqp_ip_address_type:
  host: "{{ host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  vdom: "{{ vdom }}"
  https: "False"
  state: "present"
  wireless_controller_hotspot20_anqp_ip_address_type:
    ipv4_address_type: "not-available"
    ipv6_address_type: "not-available"
    name: "default_name_5"
Return Values¶ Common return values are documented here; the following are the fields unique to this module: Status¶ - This module is not guaranteed to have a backwards compatible interface. [preview] - This module is maintained by the Ansible Community. [community]
https://docs.ansible.com/ansible/latest/modules/fortios_wireless_controller_hotspot20_anqp_ip_address_type_module.html
2020-01-18T00:10:32
CC-MAIN-2020-05
1579250591431.4
[]
docs.ansible.com
Scheduled Reports are a set of built-in reports created by Tools4ever; for each report, an admin is able to add filters. You can schedule exactly who will be emailed a report, as well as customize the content within the email. So, if you are sending an email to all your managers, you can specify that they will only receive the relevant information they need for their department. This way the targeted group is not inundated with details that have no relevance to them. Watch the video below to see the improvements. Learn more about Scheduled Reports.
https://docs.helloid.com/hc/en-us/articles/360002992853-Scheduled-Reports-HelloID-4-3
2020-01-18T01:34:27
CC-MAIN-2020-05
1579250591431.4
[]
docs.helloid.com
All content with label archetype+buddy_replication+data_grid+development+eventing+grid+gridfs+infinispan+installation+jbosscache3x+jcache+jsr-107+repeatable_read+replication+s+webdav. Related Labels: podcast, expiration, publish, datagrid, coherence, server, transactionmanager, dist, release, partitioning, query, deadlock, intro, contributor_project, pojo_cache, jbossas, lock_striping, nexus, guide, schema, listener, cache, s3, amazon, memcached, test, api, ehcache, maven, documentation, jboss, roadmap, wcm, youtube, userguide, write_behind, 缓存, ec2, streaming, hibernate, getting, aws, getting_started, interface, custom_interceptor, clustering, setup, eviction, ls, large_object, out_of_memory, concurrency, fine_grained, examples, jboss_cache, import, events, hash_function, configuration, batch, loader, pojo, write_through, cloud, remoting, mvcc, tutorial, notification, presentation, xml, read_committed, distribution, started, jira, cachestore, cacheloader, resteasy, hibernate_search, cluster, br, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, as7, non-blocking, migration, filesystem, jpa, user_guide, article, gui_demo, student_project, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, docs, batching, consistent_hash, store, whitepaper, jta, faq, 2lcache, as5, lucene, jgroups, locking, favourite, rest, hot_rod more » ( - archetype, - buddy_replication, - data_grid, - development, -.
https://docs.jboss.org/author/label/archetype+buddy_replication+data_grid+development+eventing+grid+gridfs+infinispan+installation+jbosscache3x+jcache+jsr-107+repeatable_read+replication+s+webdav
2020-01-18T00:35:48
CC-MAIN-2020-05
1579250591431.4
[]
docs.jboss.org
Command editor (revision as of 13:30, 6 December 2019) App users can execute commands from their app, for example, run a mobile report or delete several records at once. Some commands are available automatically, for example, the Save command when editing a form. Woodford administrators can add more commands to the app projects, modify command availability and behavior, or even create new commands from scratch. In the app, commands are available in the top right corner of a form. If your form only has one or two commands, they are displayed directly; if you have more commands, users can select them from the Hamburger menu. Contents - 1 Managing commands - 2 Command rules - 3 Available commands - 4 Custom commands - 5 Form rule examples - 6 Disable Delete command - 7 Disable signature on Note attachment Managing commands Commands can be customized in several Resco tools: - In Woodford, you can edit the commands available for a form. - In Woodford, you can edit the commands available for a view when you select multiple records. - In Questionnaire Designer, you can edit the commands available when filling out a questionnaire. In all cases, the behavior of the editor is very similar. - Edit a form, view, or questionnaire template. - Click Edit (or Multi Select in the case of views) to open the command configuration window. - To add commands, move them from the Available Commands pane to the Selected Commands pane. Some commands can be added multiple times. - Some commands can be configured. Select the command in the Selected Commands pane and click Properties. - To create a new custom command, click New Command. Provide a name and a label, then click OK. - Click Save & Close. Command rules Predefined commands usually function out of the box. However, you can use rules to define their availability and tweak their function. Custom commands do nothing unless you use rules to define associated actions. Rules are modified in the Rules editor; the following apply to commands: - On Can Execute defines when a command is available in the app. - On Execute defines what happens when the command is executed. Available commands Different commands are available in each location. Form: - PrintReport – Runs a report. For more information, read How to run an SSRS report via the Resco Mobile CRM app. Blog - RunMobileReport - Scan – Only on new records of the Contact and Lead entities – Starts the barcode scanner; scanning a VCard/MeCard-enabled QR code on a business card (or elsewhere) fills in the appropriate fields of the new record. - ScanBusinessCard – Only on the Contact and Lead entities – Similar to the previous command, but runs the CamCard application to scan a business card and, using text recognition, fills in the appropriate fields. The user needs to have the CamCard application installed on the device and needs to enter the CamCard Api Key in the project’s Configuration (section Advanced). - Qualify – Allows record qualifying. Available on the Lead and Activity entities. - GetProducts – Loads products to an Invoice or Order from the selected associated record (e.g. opportunity). - WonOpportunity – Sets an opportunity as won and closes it. - LostOpportunity – Sets an opportunity as lost and closes it. - CreateInvoice – Creates an invoice from the order. - CheckIn - Hidden commands - UpdateGPS – Sets the current device’s position into the record’s latitude and longitude fields, using Google Maps Services.
- UpdateAddress – Sets the current device position (latitude, longitude fields) and address (Street, City, Country) using Google Maps Services. View: - Assign – Change the ownership of the items. - AssignToMe – Change the ownership of the items to yourself. - MarketingList - RunMobileReport - Export – Save selected records to a file. You can select the fields you need and the destination format (PDF, HTML, CSV, Word document, or Excel document). - Import Questionnaire commands - Complete and close - Complete with a report - Cancel the questionnaire - Clear answers - Run report Business card scanning CamCard integration with the Mobile CRM app works as follows: when creating a new Lead or Contact, you can select a command (in the Mobile CRM app) that starts the CamCard app, where you can scan the card. CamCard then transfers the information back to the Mobile CRM app’s new Lead or Contact record. You need to do the following: - Enable the command (ScanBusinessCard) on the Lead or Contact entity edit form. - Install the CamCard app on your device (on iPad, install the iPhone version of the CamCard app, not the HD version). - Obtain an Open API key for iOS from Intsig (the company that develops the CamCard app) and enter it into the app project’s Configuration, CamCard ApiKey field. This key is used for both the iOS and Android platforms. The Resco Mobile CRM app can update the following fields using the scanned card information from CamCard: Check in / check out The check in and check out commands can be used to indicate when an activity starts and ends. For example, technicians may want to document when they start working on a task and when they finish. This functionality is available starting with version 11.2 of the application. It can be configured for any entity available in Woodford; however, if your entity does not include date fields, the commands don't make sense. - In an app project, edit a form of an entity. For example, use the Appointment entity. - Click Edit and add CheckIn to the Selected Commands pane. - Select CheckIn and click Properties to open the configuration. - Configure which entity fields should be filled when users tap the Check In and Check Out commands: - Check In time and Check Out time: These generally correspond to Start Time and End Time. These two fields are mandatory. Note that these fields become read-only so that users cannot manually modify the time set by the commands. - Duration, Latitude, Longitude: Optionally, select additional entity fields. Geographical coordinates must be of the Float type; the duration can be a Float or Decimal number. - Status: Allows you to change the status of a record when you check in or out. If the status field is chosen, you also need to configure the fields Change Status after Check In and Check Status after Check Out. If you select more options, users must choose one of them. - If you want to make the commands available only for records with a certain status, use the parameters Check In possible for Status and Check Out possible for Status. - Click OK to close the Check In Config window, then Save & Close. Custom commands You can also create custom commands where you define: - when the command is available - what it does. Click New Command and enter a name and a label for the command. You have two options for making the command actually do anything: - Use the form rules On Execute and On Can Execute. - It is also possible to connect custom commands with offline HTML and JavaScript and perform actions that are not available when using command rules.
Form rule examples The On Can Execute rule defines when the command is available for the user. In this rule, you specify in which situations users can see and use the command, so that you can hide the command in situations where its action is not suitable. The On Execute rule defines the actions that the command performs. It can fill in some data, hide fields, etc. Disable Update GPS command A simple example is the Update GPS command on the Edit form’s Address tab. Users should not change the GPS position of a record that already has the GPS position set, so you can hide this command in such a situation. Disable Delete command Another example is when you want to prevent users from deleting a record. You can disable deleting records by not adding the Delete command to the Selected Commands section. But what if you want to allow users to delete records only in some situations? You can add the Delete command but hide it in specific situations. In this case, we will not allow users to delete records that are owned by different users. Disable signature on Note attachment Yet another example is when you don’t want to allow a Signature on a Note’s Attachment. To disable it, you need to go to the Note’s Form and adjust the On Can Execute rule. The tricky part can be setting the command’s name. It needs to be exactly DocAction.CaptureInk (beware, it’s case sensitive!). Here is the list of all DocActions, i.e., everything you can do with a Note attachment’s commands: - Actions for an empty Attachment - DocAction.CaptureInk – Configures the view for capturing a signature. When disabled, a list of available commands is shown - DocAction.CapturePhoto – Asks the user to capture a photo and loads it into the view - DocAction.SelectPhoto – Asks the user to choose a media file (image or video, depending on what the platform supports) and loads the chosen media into the view - DocAction.SelectFile – Asks the user to choose a file and loads it into the view - DocAction.RecordAudio – Asks the user to record an audio note and loads it into the view - DocAction.RecordVideo – Asks the user to record a video and loads it into the view - DocAction.UseLastPhotoTaken – When the user captures a photo using the camera and then wants to attach this picture to a Note, this action can be used instead of Select Picture and navigating to the picture. - DocAction.LoadForm - Actions for a non-empty Attachment, i.e., when there is a file attached to a Note - DocAction.Clear – Clears the view and marks it as empty - DocAction.View – Shows a preview of the loaded document (full screen, etc.) - DocAction.OpenExternal – Opens the loaded document in an external application. Which application is used is platform-specific - DocAction.FindApp – Finds an external application for opening specific documents. The find method is platform-specific (i.e., find on Android Market) - DocAction.Download – Saves the file on the device (platform-specific) - DocAction.Copy – Copies the image to the clipboard - DocAction.Paste – Pastes an image/file from the clipboard - DocAction.Print – Prints the document - DocAction.ResizeImage – Lets the user choose a smaller image resolution - DocAction.Import – Lets the user import a VCard attachment (handled in common code) - DocAction.Edit – Allows the document to be edited directly in the Resco Mobile CRM application - DocAction.SendTo – Creates a new Email with the file/document as an attachment - DocAction.Export – Saves the attachment as a file to the device’s file system See also FS Mobile: Remove Signature from Notes.
Blog On Execute example The On Execute rule defines what exactly a command should do. For example, it can fill in some specific fields and set the status code to busy.
https://docs.resco.net/mediawiki/index.php?title=Command_editor&curid=240&diff=1574&oldid=1571
2020-01-18T01:23:25
CC-MAIN-2020-05
1579250591431.4
[]
docs.resco.net
Changelog 0.8.1 (2015-10-10) support the new pairing rapp goals 0.7.12 (2015-07-09) 0.7.11 (2015-05-27) 0.7.10 (2015-04-06) 0.7.9 (2015-02-09) 0.7.8 (2014-11-21) 0.7.6 (2014-08-25) Export architecture_independent flag in package.xml fix a broken dependency on uuid_msgs. rocon_interaction_msgs: error: unconfigured build_depend on 'uuid_msgs Contributors: Daniel Stonier, Jihoon Lee, Scott K Logan 0.7.4 (2014-05-05) a pairing status message, #92 provide our remocon name when requesting an interaction (now used by pairing), #92 additional error codes for pairing, #92 remocon status now correctly listing the set of running apps (not just one) Contributors: Daniel Stonier 0.7.1 (2014-04-09) get roles moved completely to a service. int32, not 8 unlimited interactions constant. 0.7.0 (2014-03-29) first release, indigo.
http://docs.ros.org/melodic/changelogs/rocon_interaction_msgs/changelog.html
2020-01-18T01:35:22
CC-MAIN-2020-05
1579250591431.4
[]
docs.ros.org
Gets a command that substitutes the active header sub-document with the footer sub-document of the same page, making the footer the active sub-document. readonly goToFooter: GoToFooterCommand Call the execute method to invoke the command. The method checks the command state (obtained via the getState method) to determine whether the action can be performed. If the footer sub-document does not exist for the current page, the command creates it. The active sub-document is available via the RichEditDocument.activeSubDocument property.
https://docs.devexpress.com/AspNet/js-RichEditCommands.goToFooter
2020-01-18T01:17:03
CC-MAIN-2020-05
1579250591431.4
[]
docs.devexpress.com
In this lesson, you will learn how to customize the default editor layout in a Detail View. For this purpose, the Contact Detail View will be used. Before proceeding, take a moment to review the following lessons. Invoke the Model Editor for the MySolution.Module project. Navigate to the Views | MySolution.Module.BusinessObjects node. Expand the Contact_DetailView child node. It contains the Items and Layout child nodes. To view and modify the current layout of the Contact Detail View editors, select the Layout node. The property list to the right will be replaced with a design surface that imitates the Contact Detail View. To modify the editor arrangement, right-click the View's empty space and choose Customize Layout. The Customization Form will be invoked. In the invoked form, you can drag editors to the required positions. Follow the graphical prompts that indicate the item's new location. In addition, you can remove and restore View Items. Drag the required item from the Detail View to the Customization Form to remove the item, and drag the item from the Customization Form to the Detail View to add it. To view the layout tree for View Items, click the Layout Tree View tab on the Customization Form. You can right click a tree item and invoke a context menu, allowing you to hide the Customization Form, reset the layout, create a tabbed group, etc. (See the image below.) To learn more about the Customization form, the Layout Tree View tab and its context menu, refer to the Default Runtime Customization topic. Close the Customization Form. Run the WinForms or ASP.NET application, and invoke a Contact Detail View. Notice that the editors are arranged as required. If you want to reset changes, right-click Contact_DetailView | Layout and choose Reset Differences. Alternatively, you can customize the Contact Detail View layout at runtime, and then merge these customizations into the MySolution.Module project. Refer to the How to: Merge End-User Customizations into the XAF Solution topic for details. Next Lesson: Add an Editor to a Detail View
https://docs.devexpress.com/eXpressAppFramework/112833/getting-started/comprehensive-tutorial/ui-customization/customize-the-view-items-layout
2020-01-18T01:05:56
CC-MAIN-2020-05
1579250591431.4
[]
docs.devexpress.com
Introduction Self Service products may be grouped into categories to more easily organize and provide specific collections of products to certain users and groups. Navigate to Self service > Categories to see an overview of the Self Service categories within your organization's HelloID environment. Search for categories by using the search bar or scrolling through the list. The following information is displayed for each category: - Name The names displayed under the Names column are visible in the User Dashboard and can be edited by clicking Edit under the Actions column. - Status The Status column indicates whether the category is enabled or disabled with the respective label. Enabled categories are visible on the User Dashboard. Disabled categories are not. - Actions Modify a category by clicking Edit or Delete under the Actions column. Next: Create a category for your Self Service products.
https://docs.helloid.com/hc/en-us/articles/360002011213
2020-01-17T23:52:10
CC-MAIN-2020-05
1579250591431.4
[array(['/hc/article_attachments/360019510414/categories_overview.png', 'categories_overview.png'], dtype=object) ]
docs.helloid.com
fortios_switch_controller_802_1X_settings – Configure global 802.1X settings in Fortinet’s FortiOS and FortiGate¶ New in version 2.9. Synopsis¶ - This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the user to set and modify the switch_controller feature and 802.1X settings. Example configuration: fortios_switch_controller_802_1X_settings: host: "{{ host }}" username: "{{ username }}" password: "{{ password }}" vdom: "{{ vdom }}" https: "False" switch_controller_802_1X_settings: link_down_auth: "set-unauth" max_reauth_attempt: "4" reauth_period: "5" Return Values¶ Common return values are documented here; the following are the fields unique to this module: Status¶ - This module is not guaranteed to have a backwards compatible interface. [preview] - This module is maintained by the Ansible Community. [community]
https://docs.ansible.com/ansible/latest/modules/fortios_switch_controller_802_1X_settings_module.html
2020-01-18T00:07:08
CC-MAIN-2020-05
1579250591431.4
[]
docs.ansible.com
Dr. William Feaster, CHOC Children’s chief health information officer, has been recognized nationally for his leadership in utilizing health information technology to increase positive outcomes for patients. Already recognized as an international leader in population health technology and analytics, Dr. Feaster was one of five physicians to receive a Physician All-Star award from Cerner, a leading worldwide provider of health information technology solutions, services, devices and hardware, at its recent annual conference. “This award further validates the important work underway at CHOC Children’s to use data to save children’s lives,” Dr. Feaster says. “Advancing technology will continue to dramatically enhance how we practice medicine, and I am proud to stand with CHOC on the forefront of a dramatic shift that will ultimately lead to more children having happier and healthier childhoods.” As CHOC’s chief health information officer, Dr. Feaster leads the implementation and adoption of technologies that support clinical care and data analysis across the healthcare community. His work promotes the application of data science tools on healthcare data for predictive analytics, data mining, and related technologies to support new informatics initiatives. Dr. Feaster’s work supports CHOC’s population health efforts; innovation and performance excellence initiatives; and clinical and translational research informatics. At CHOC since 2012, Dr. Feaster has been involved in advancing information technology throughout nearly 40 years of clinical practice in pediatric critical care and anesthesia. He has held several medical administrative positions in hospitals, health systems and universities, and is board-certified in pediatrics, anesthesia and clinical informatics. The annual Cerner Health Conference, held mid-October in Kansas City, Mo., drew nearly 14,000 healthcare industry leaders, practitioners and employees to discuss the latest innovations for health information technology.
https://docs.chocchildrens.org/tag/technology/
2020-01-18T00:55:18
CC-MAIN-2020-05
1579250591431.4
[array(['https://docs.chocchildrens.org/wp-content/uploads/2018/05/Feaster_William-320x400.jpg', None], dtype=object) ]
docs.chocchildrens.org
Windows 7 Optimizations on Solid State Drives By default, Windows 7 disables Superfetch and ReadyBoost, as well as boot and application launch prefetching, on Solid State Drives (SSD) with good random read, random write, and flush performance. These technologies were all designed to improve performance on traditional hard disk drives (HDD), where random read performance could easily be a major bottleneck. Because of the design changes, Windows 7-powered PCs with SSDs run fast. To understand how these design changes improve overall system performance, read this team blog post by the Windows 7 product group. Technorati Tags: Windows 7,Solid State Drives,SSD,Performance Optimization
https://docs.microsoft.com/en-us/archive/blogs/innov8showcase/windows-7-optimizations-on-solid-state-drives
2020-01-18T02:25:59
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
TechNet Flash - Volume 15, Issue 11: May 15, 2013 Top News Worldwide Free MVA Q&A on May 21: Symon Perriman answers your System Center 2012 questions ITProGuru vs. Tony Asaro: A look at the virtualization landscape in a heterogeneous world Your opinion matters: Take the 2013 Flash Newsletter Survey Technet Sponsors: 365 Command, and LinkTek • The best way to manage Office 365 – get your free trial now • Automatically fix broken links when you move or rename files. Editor’s Note TechNet Flash Editor We spend a lot of time telling you about all the great content, events, downloads, and training opportunities available from Microsoft. Now we want to hear from you. The 2013 Flash Newsletter Survey has just gone live and we want your feedback. Read more Read more Your Featured Content System Center 2012 SP1 enhancements and Windows Azure Services for Windows Server. Private cloud: Infrastructure-as-a-service product line architecture guidance. Responsive workload scalability with Windows Server 2012 With the release of Windows Server 2012, Hyper-V greatly expanded support for up to 64 virtual processors and a new VHDX virtual hard disk format that supports greater capacity – up to 64 TB. Download Windows Server 2012 and try these new features now.. TechNet Radio: Create and delegate private clouds in Virtual Machine Manager. The new Office Garage Series: Automate user provisioning in Office 365 In this latest installment, the Garage Series takes you through identity options and demonstrates how to automate provisioning and service assignment with Directory Sync, Active Directory Federation Services, and PowerShell. The hosts also take to the water with their latest XStream install while wakeboarding. A visual guide to configuring Hyper-V Replica In Windows Server 2012, Hyper-V Replica provides a built-in replication mechanism at the virtual machine level. This screen-by-screen walkthrough will help you quickly configure Hyper-V Replica for improved server reliability. How Microsoft protects client data with System Center Data Protection Manager 2010. The Edge Show: A look at System Center Advisor Joseph Chan, a Principal PM Manager on the System Center team, visits the Edge Show to talk about System Center Advisor and how it now integrates with System Center 2012. Find out how you can start using this free cloud service. Calling all community leaders: Access to content, speakers, sponsorship, and more The Microsoft Technical Community (MSTC) Program is a hub for community leaders. It allows you to search for speakers, access technical content, download train-the-trainer sessions, and request sponsorship funding. Register today. The Edge Show: Migrating failover clusters to Windows Server 2012 Symon Perriman welcomes Rob Hindman, a Program Manager on the Windows Server Clustering & High-Availability team, to talk about migrating failover clusters from Windows Server 2008/R2 to Windows Server 2012. Training and Books Explore Windows 8 for the IT Professional.. Events Brad Anderson to keynote TechEd 2013: Register now. Free Q&A: A live discussion about System Center 2012 Join Symon Perriman on May 21 when he’ll discuss System Center 2012 and answer your questions in a free event from Microsoft Virtual Academy. Download System Center 2012 SP1 now.
https://docs.microsoft.com/en-us/archive/newsletter/technet/2013/technet-flash-volume-15-issue-11-may-15-2013
2020-01-18T00:02:46
CC-MAIN-2020-05
1579250591431.4
[array(['http://image.email.microsoftemail.com/lib/feed1d7871600d/m/1/Mitch2010_69x88.jpg', 'Mitch Irsfeld'], dtype=object) ]
docs.microsoft.com
To edit a virtual server, use the following request: PUT /virtual_machines/:id.xml PUT /virtual_machines/:id.json XML Request example JSON Request example For virtual servers built using instance packages: instance_package_id - ID of the new instance package. If the VS is modified successfully, an HTTP 204 response is returned. If scheduling the changes fails, an HTTP 422 response is returned (see the request sketch below). Page History v.4.1 v.4.0 primary_disk_min_iops and swap_disk_min_iops parameters v.3.3
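A minimal sketch of what the edit request could look like from Python using the requests library. The endpoint path and the 204/422 status codes come from the page above; the host, credentials, use of basic authentication, the virtual server ID, and the exact "virtual_machine" JSON envelope are placeholders and assumptions that should be checked against your OnApp version.

import requests

# Hypothetical values: host, credentials, and VS ID are placeholders.
BASE_URL = "https://onapp.example.com"
VS_ID = "example-vs-id"
AUTH = ("admin@example.com", "api-key-or-password")  # basic auth is an assumption

# Payload shape is an assumption; instance_package_id follows the parameter described above.
payload = {"virtual_machine": {"instance_package_id": 2}}

resp = requests.put(
    f"{BASE_URL}/virtual_machines/{VS_ID}.json",
    json=payload,
    auth=AUTH,
    headers={"Accept": "application/json"},
)

if resp.status_code == 204:
    print("Virtual server modified successfully")
elif resp.status_code == 422:
    print("Scheduling the changes failed:", resp.text)
else:
    resp.raise_for_status()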
https://docs.onapp.com/plugins/viewsource/viewpagesrc.action?pageId=44763358
2020-01-18T01:34:48
CC-MAIN-2020-05
1579250591431.4
[]
docs.onapp.com
Revision history of "Resco licensing service".
https://docs.resco.net/mediawiki/index.php?title=Resco_licensing_service&curid=106&action=history
2020-01-18T00:13:27
CC-MAIN-2020-05
1579250591431.4
[]
docs.resco.net
Module ast rustc_ap_syntax pub use GenericArgs::*; pub use UnsafeSource::*; pub use crate::util::parser::ExprPrecedence; A path like Foo<'a, T>.. DefId An arm of a 'match'. A constraint on an associated type (e.g., A = Bar in Foo<A = Bar> or A: TraitA + TraitB in Foo<A: TraitA + TraitB>). A = Bar Foo<A = Bar> A: TraitA + TraitB Foo<A: TraitA + TraitB> Metadata associated with an item. Doc-comments are promoted to attributes that have is_sugared_doc = true. is_sugared_doc = true A block ({ .. }). { .. } An expression. A single field in a struct pattern A header (not the body) of a function declaration. A function header. Foreign module declaration. Represents lifetime, type and const parameters attached to a declaration of a function, enum, trait, etc. Global inline assembly. Represents anything within an impl block. impl Inline assembly. An item. An AST literal. Local represents a let statement, e.g., let <pat>:<ty> = <expr>;. let let <pat>:<ty> = <expr>; Represents a macro invocation. The Path indicates which macro is being invoked, and the vector of token-trees contains the source of the macro invocation. Path A spanned compile-time attribute item. Represents a method's signature in a trait declaration, or in an implementation. Module declaration. A symbol is an interned or gensymed string. A gensym is a symbol that is never equal to any other symbol. A parameter in a function header.. Self position A statement Field of a struct. Represents an item declaration within a trait declaration, possibly including a default implementation. A trait item is either required (meaning it doesn't have an implementation, just a signature) or provided (meaning it has a default implementation). TraitRefs appear in impls. TraitRef A tree of paths sharing common prefixes. Used in use items both at top-level and inside of braces in import groups. use A type bound. A where-clause in a definition. An equality predicate (unsupported). A lifetime predicate. Inline assembly dialect. The kinds of an AssocTyConstraint. AssocTyConstraint Distinguishes between Attributes that decorate items and Attributes that are contained as statements within items. These two cases need to be distinguished for pretty-printing. Attribute A capture clause. An item within an extern block. extern The arguments of a path segment. The AST represents all type param bounds as types. typeck::collect::compute_bounds matches these against the "special" built-in traits (see middle::lang_items) and detects Copy, Send and Sync. typeck::collect::compute_bounds middle::lang_items Copy Send Sync Represents various kinds of content within an impl. Is the trait definition an auto trait? Literal kind. A compile-time attribute item. The movability of a generator / closure literal. Possible values inside of compile-time attribute lists. Specifies the enforced ordering for generic parameters. In the future, if we wanted to relax this order, we could override PartialEq and PartialOrd, to allow the kinds to be unordered. PartialEq PartialOrd Limit types of a range (inclusive or exclusive) Alternative representation for Args describing self parameter of methods. Arg self A modifier on a bound, currently this is only used for ?Sized, where the modifier is Maybe. Negative bounds should also be handled here. ?Sized Maybe Syntax used to declare a trait object. The various kinds of type recognized by the compiler. Part of use item to the right of its prefix. Fields and constructor ids of enum variants and structs. A single predicate in a where-clause. 
NodeId used to represent the root of the crate. NodeId When parsing and doing expansions, we initially give all AST nodes this AST node value. Then later, in the renumber pass, we renumber them to have small, positive ids. The set of MetaItems that define the compilation environment of the crate, used to drive conditional compilation. MetaItem
https://docs.rs/rustc-ap-syntax/610.0.0/rustc_ap_syntax/ast/index.html
2020-01-18T00:21:42
CC-MAIN-2020-05
1579250591431.4
[]
docs.rs
Design Decisions¶ VRS contributors confronted numerous trade-offs in developing this specification. As these trade-offs may not be apparent to outside readers, this section highlights the most significant ones and the rationale for our design decisions, including: Variation Rather than Variant¶ The abstract Variation class is intentionally not labeled “Variant”, despite this being the primary term used in other molecular variation exchange formats (e.g. Variant Call Format, HGVS Sequence Variant Nomenclature). This is because the term “Variant” as used in the Genetics community is intended to describe discrete changes in nucleotide / amino acid sequence. “Variation”, in contrast, captures other classes of molecular variation, including epigenetic alteration and transcript abundance. Capturing these other classes of variation is a future goal of VRS, as there are many annotations that will require these variation classes as the subject. Allele Rather than Variant¶ The most primitive sequence assertion in VRS is the Allele entity. Colloquially, the words “allele” and “variant” have similar meanings and they are often used interchangeably. However, the VR contributors believe that it is essential to distinguish the state of the sequence from the change between states of a sequence. It is imperative that precise terms are used when modelling data. Therefore, within VR, Allele refers to a state and “variant” refers to the change from one Allele to another. The word “variant”, which implies change, makes it awkward to refer to the (unchanged) reference allele. Some systems will use an HGVS-like syntax (e.g., NC_000019.10:g.44906586G>G or NC_000019.10:g.44906586=) when referring to an unchanged residue. In some cases, such “variants” are even associated with allele frequencies. Similarly, a predicted consequence is better associated with an allele than with a variant. Alleles are Fully Justified¶ In order to standardize the presentation of sequence variation, computed ids from VRS require that Alleles be fully justified from the description of the NCBI Variant Overprecision Correction Algorithm (VOCA). Furthermore, normalization rules must be identical for all sequence types; although this need not be a strict requirement, there is no reason to normalize using different strategies based on sequence type. The choice of algorithm was relatively straightforward: VOCA is published, easily understood, easily implemented, and covers a wide range of cases. The choice to fully justify is a departure from other common variation formats. The HGVS nomenclature recommendations, originally published in 1998, require that alleles be right normalized (3’ rule) on all sequence types. The Variant Call Format (VCF), released as a PDF specification in 2009, made the conflicting choice to write variants left (5’) normalized and anchored to the previous nucleotide. Fully-justified alleles represent an alternate approach. A fully-justified representation does not make an arbitrary choice of where a variant truly occurs in a low-complexity region, but rather describes the final and unambiguous state of the resultant sequence. Interbase Coordinates¶ Sequence ranges use an interbase coordinate system. Interbase coordinate conventions are used in this terminology because they provide conceptual consistency that is not possible with residue-based systems. Important The choice of what to count–residues versus inter-residue positions–has significant semantic implications for coordinates. 
Because interbase coordinates and the corresponding 0-based residue-counted coordinates are numerically identical in some circumstances, uninitiated readers often conflate the choice of numerical base with the choice of residue or inter-residue counting. Whereas the choice of numerical base is inconsequential, the semantic advantages of interbase are significant. When humans refer to a range of residues within a sequence, the most common convention is to use an interval of ordinal residue positions in the sequence. While natural for humans, this convention has several shortcomings when dealing with sequence variation. For example, interval coordinates are interpreted as exclusive coordinates for insertions, but as inclusive coordinates for substitutions and deletions; in effect, the interpretation of coordinates depends on the variant type, which is an unfortunate coupling of distinct concepts. (A small sketch of the residue-to-interbase mapping appears after this section.) Modelling Language¶ The VRS collaborators investigated numerous options for modelling data, generating code, and writing the wire protocol. Required and desired selection criteria included: - language-neutral – or at least C/C++, java, python - high-quality tooling/libraries - high-quality code generation - documentation generation - supported constructs and data types - typedefs/aliases - enums - lists, maps, and maps of lists/maps - nested objects - protocol versioning (but not necessarily automatic adaptation) Initial versions of the VRS logical model were implemented in UML, protobuf, swagger/OpenAPI, and JSON Schema. We have implemented our schema in JSON Schema. Nonetheless, it is anticipated that some adopters of the VRS logical model may implement the specification in other protocols. Serialization Strategy¶ There are many packages and proposals that aspire to a canonical form for JSON in many languages. Despite this, there are no ratified or de facto winners. Many packages have similar names, which makes it difficult to discern whether they are related or not (often not). Although some packages look like good single-language candidates, none are ready for multi-language use. Many seem abandoned. The need for a canonical JSON form is evident, and there was at least one proposal for an ECMA standard. Therefore, we implemented our own serialization format, which is very similar to Gibson Canonical JSON (not to be confused with OLPC Canonical JSON).
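As a small illustration of the interbase discussion above (not taken from the specification itself), the following Python sketch converts between 1-based, fully-closed residue intervals and 0-based, half-open interbase intervals, and shows why an insertion point needs no special casing in interbase coordinates.

def residue_to_interbase(start_residue, end_residue):
    # 1-based, fully-closed residue interval -> 0-based, half-open interbase interval.
    return (start_residue - 1, end_residue)

def interbase_to_residue(start, end):
    # Inverse mapping; a zero-length interbase interval (start == end) denotes an
    # insertion point between residues and has no residue-interval equivalent.
    if start == end:
        raise ValueError("zero-length interval: an insertion point between residues")
    return (start + 1, end)

# Residues 3-5 of a sequence correspond to interbase (2, 5):
assert residue_to_interbase(3, 5) == (2, 5)
# An insertion between residues 5 and 6 is simply the interbase interval (5, 5).
print(residue_to_interbase(3, 5), (5, 5))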
https://vr-spec.readthedocs.io/en/master/appendices/design_decisions.html
2020-07-02T22:01:42
CC-MAIN-2020-29
1593655880243.25
[]
vr-spec.readthedocs.io
The Fields tab shows a listing of all fields bound to elements in the current record format, and where each one of the fields is used. You can click a field to select the related element; or you can double-click a field to bring up the binding dialog for that field.
https://docs.profoundlogic.com/pages/viewpage.action?pageId=39420138
2020-07-02T23:03:20
CC-MAIN-2020-29
1593655880243.25
[]
docs.profoundlogic.com
Finds a shader with the given name. Shader.Find can be used to switch to another shader without having to keep a reference to the shader. name is the name you can see in the shader popup of any material, for example "Standard", "Unlit/Texture", "Legacy Shaders/Diffuse" etc. Note that a shader might not be included in the player build if nothing references it! In that case, Shader.Find will work only in the editor, and will result in pink "missing shader" materials in the player build. Because of that, it is advisable to use shader references instead of finding them by name. To make sure a shader is included in the game build, do either of: 1) reference it from some of the materials used in your scene, 2) add it under the "Always Included Shaders" list in ProjectSettings/Graphics, or 3) put the shader or something that references it (e.g. a Material) into a "Resources" folder. See Also: Material class. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Start() { Material material = new Material(Shader.Find("Transparent/Diffuse")); material.color = Color.green; GetComponent<Renderer>().material = material; } }
https://docs.unity3d.com/kr/2017.3/ScriptReference/Shader.Find.html
2020-07-02T23:06:37
CC-MAIN-2020-29
1593655880243.25
[]
docs.unity3d.com
AI edge engineer The interplay between AI, cloud, and edge is a rapidly evolving domain. Currently, many IoT solutions are based on basic telemetry. The telemetry function captures data from edge devices and stores it in a data store. Our approach extends beyond basic telemetry. We aim to model problems in the real world through machine learning and deep learning algorithms and implement the model through AI and cloud onto edge devices. The model is trained in the cloud and deployed on the edge device. The deployment to the edge provides a feedback loop to improve the business process (digital transformation). In this learning path, the following themes are covered: - Systems thinking - Experimentation and problem solving - Improving through experimentation - Deployment and analysis through testing - Impact on other engineering domains - Forecasting behaviour of a component or system - Design considerations - Working within constraints/tolerances and specific operating conditions – for example, device constraints - Safety and security considerations - Building tools which help to create the solution - Improving processes - Using edge (IoT) to provide an analytics feedback loop to the business process to drive processes - The societal impact of engineering - The aesthetic impact of design and engineering - Deployments at scale - Solving complex business problems by an end-to-end deployment of AI, edge, and cloud. In this learning path, you will: - Understand the process of deploying IoT-based solutions on edge devices - Learn the process of implementing models to edge devices using containers - Explore the use of DevOps for edge devices Produced in partnership with the University of Oxford – Ajit Jaokar Artificial Intelligence: Cloud and Edge Implementations course. Prerequisites None Modules in this learning path Explain the significance of Azure IoT and the problems it solves. Describe Azure IoT components and explain how you combine them to build IoT solutions that create value for enterprises. Assess the characteristics of Azure IoT Hub and determine scenarios when to use IoT Hub. Assess the characteristics of Azure Functions for IoT. Describe the function of triggers and bindings and show how you combine them to create a scalable IoT solution. Describe the benefits of using cloud infrastructure to rapidly deploy IoT applications with Azure Functions. Create and deploy an Azure function to make a language translation IoT device. The function will use Cognitive Speech Service. Your device will record a voice in a foreign language and convert the speech to a target language. Implement a cognitive service for performing language detection on an IoT Edge device. Describe the components and steps for implementing a cognitive service on an IoT device. Analyze the significance of MLOps in the development and deployment of machine learning models for IoT Edge. Describe the components of the MLOps pipeline and show how you can combine them to create models that can be retrained automatically for IoT Edge devices. Define a solution for smoke testing for virtual IoT Edge devices. Your solution will employ a CI/CD (Continuous Integration/Continuous Deployment) strategy using Azure DevOps, Azure Pipelines, and Azure Application Insights on a Kubernetes cluster. Determine the types of business problems that can be solved using Azure Sphere. Explain the capabilities and the components (microcontroller unit, operating system, cloud-based security service) for the Azure Sphere.
Describe how the components provide a secure platform to develop, deploy, and maintain secure internet connected IoT solutions. Implement a neural network model for performing real-time image classification on a secured, internet-connected microcontroller-based device (Azure Sphere). Describe the components and steps for implementing a pre-trained image classification model on Azure Sphere.
https://docs.microsoft.com/en-us/learn/paths/ai-edge-engineer/?WT.mc_id=Build2020_student_charlotteOMB_-blog-cxa
2020-07-02T23:42:21
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
: Setup.exe /qs /ACTION=Install /FEATURES=SQLEngine,Replication /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /SQLSYSADMINACCOUNTS="<DomainName\UserName>" /AGTSVCACCOUNT="NT AUTHORITY\Network Service" /TCPENABLED=1 /IACCEPTSQLSERVERLICENSETERMS. or Windows Server 2012 Server Core,
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/hh231669(v%3Dsql.110)
2020-07-02T23:51:29
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Notices Last modified by Shubhangi Godbole on Jul 25, 2018
https://docs.bmc.com/docs/apaclims13/notices-718490324.html
2020-07-02T23:20:11
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
You can launch BMC Atrium Explorer from the administrator console to access the BMC Impact Model Designer so that you can edit the service model components that are imported and synchronized with the BMC Atrium CMDB. To launch BMC Atrium Explorer Where to go from here For details about using BMC Impact Model Designer, see BMC Impact Model Designer user interface. For details about using BMC Atrium Explorer, see the BMC Atrium Core online documentation.
https://docs.bmc.com/docs/exportword?pageId=722073377
2020-07-02T22:44:54
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Changes made by user Erica Johnson, saved on Apr 17, 2014. MyPMS 101 Complete MyPMS Users Guide About MyPMS Getting Started as an Admin Getting support MyPMS release notes Installing MyPMS Importing content Managing users
https://docs.bookingcenter.com/pages/diffpages.action?originalId=1376730&pageId=1376731
2020-07-02T21:38:19
CC-MAIN-2020-29
1593655880243.25
[]
docs.bookingcenter.com
Note In addition to the logging instructions in this article, there's new, integrated logging capability with Azure Monitoring. You'll find more on this capability in the Send logs to Azure Monitor (preview) section. Note App Service provides a dedicated, interactive diagnostics tool to help you troubleshoot your application. For more information, see Azure App Service diagnostics overview. In addition, you can use other Azure services to improve the logging and monitoring capabilities of your app, such as Azure Monitor. Enable application logging (Windows) Note Application logging for blob storage can only use storage accounts in the same region as the App Service. To enable application logging for Windows apps in the Azure portal, navigate to your app and select App Service logs. Select On for either Application Logging (Filesystem) or Application Logging (Blob), or both. The Filesystem option is for temporary debugging purposes, and turns itself off in 12 hours. The Blob option is for long-term logging, and needs a blob storage container to write logs to. The Blob option also includes additional information in the log messages, such as the ID of the origin VM instance of the log message (InstanceId), thread ID (Tid), and a more granular timestamp (EventTickCount). Note Currently only .NET application logs can be written to the blob storage. Java, PHP, Node.js, Python application logs can only be stored on the App Service file system (without code modifications to write logs to external storage). Also, if you regenerate your storage account's access keys, you must reset the respective logging configuration to use the updated access keys. To do this: - In the Configure tab, set the respective logging feature to Off. Save your setting. - Enable logging to the storage account blob again. Save your setting. Select the Level, that is, the level of detail to log. The following table shows the log categories included in each level: When finished, select Save. Enable application logging (Linux/Container) To enable application logging for Linux apps or custom container apps in the Azure portal, navigate to your app and select App Service logs. In Application logging, select File System. In Quota (MB), specify the disk quota for the application logs. In Retention Period (Days), set the number of days the logs should be retained. When finished, select Save. Enable web server logging. When finished, select Save. Log detailed errors To save the error page or failed request tracing for Windows apps in the Azure portal, navigate to your app and select App Service logs. Under Detailed Error Logging or Failed Request Tracing, select On, then select Save. Both types of logs are stored in the App Service file system. Up to 50 errors (files/folders) are retained. When the number of HTML files exceeds 50, the oldest 26 errors are automatically deleted. Add log messages in code In your application code, you use the usual logging facilities to send log messages to the application logs. For example: ASP.NET applications can use the System.Diagnostics.Trace class to log information to the application diagnostics log. For example: System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened"); By default, ASP.NET Core uses the Microsoft.Extensions.Logging.AzureAppServices logging provider. For more information, see ASP.NET Core logging in Azure. Stream logs Before you stream logs in real time, enable the log type that you want.
Any information written to files ending in .txt, .log, or .htm that are stored in the /LogFiles directory (d:/home/logfiles) is streamed by App Service. In Azure portal To stream logs in the Azure portal, navigate to your app and select Log stream. In Cloud Shell To stream logs live in Cloud Shell, use the following command: az webapp log tail --name appname --resource-group myResourceGroup In local terminal To stream logs in the local console, install Azure CLI and sign in to your account. Once signed in, follow the instructions for Cloud Shell. Access log files If you configure the Azure Storage blobs option for a log type, you need a client tool that works with Azure Storage. For more information, see Azure Storage Client Tools. For logs stored in the App Service file system, the easiest way is to download the ZIP file in the browser at: - Linux/container apps: https://<app-name>.scm.azurewebsites.net/api/logs/docker/zip - Windows apps: https://<app-name>.scm.azurewebsites.net/api/dump For Linux/container apps, the ZIP file contains console output logs for both the docker host and the docker container. For a scaled-out app, the ZIP file contains one set of logs for each instance. In the App Service file system, these log files are the contents of the /home/LogFiles directory. For Windows apps, the ZIP file contains the contents of the D:\Home\LogFiles directory in the App Service file system. It has the following structure: Send logs to Azure Monitor (preview) With the new Azure Monitor integration, you can create Diagnostic Settings (preview) to send logs to Storage Accounts, Event Hubs and Log Analytics. Supported log types The following table shows the supported log types and descriptions:
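Since the article notes that Python application logs are stored on the App Service file system, here is a minimal, hedged sketch of application-side logging with Python's standard logging module. The logger name and the deliberately failing operation are arbitrary examples, and the assumption is that console output from a Linux/container app is captured by the file-system application logging described above.

import logging
import sys

# Write to stdout; with file-system application logging enabled, console output
# from a Linux/container app ends up in the log files described in this article.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("myapp")  # "myapp" is an arbitrary example name

logger.info("Application started")
try:
    int("not-a-number")  # deliberately fails for illustration
except ValueError:
    logger.error("If you're seeing this, something bad happened", exc_info=True)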
https://docs.microsoft.com/da-dk/azure/app-service/troubleshoot-diagnostic-logs
2020-07-02T23:33:52
CC-MAIN-2020-29
1593655880243.25
[array(['media/troubleshoot-diagnostic-logs/diagnostic-settings-page.png', 'Diagnostic Settings (preview)'], dtype=object) ]
docs.microsoft.com
Julian Calendar Class Definition Represents the Julian calendar. public ref class JulianCalendar : System::Globalization::Calendar public class JulianCalendar : System.Globalization.Calendar [System.Serializable] public class JulianCalendar : System.Globalization.Calendar [System.Runtime.InteropServices.ComVisible(true)] [System.Serializable] public class JulianCalendar : System.Globalization.Calendar type JulianCalendar = class inherit Calendar Public Class JulianCalendar Inherits Calendar - Inheritance - - Attributes - Remarks In 45 B.C., Julius Caesar ordered a calendar reform, which resulted in the calendar called the Julian calendar. The Julian calendar is the predecessor of the Gregorian calendar. Note For information about using the JulianCalendar class and the other calendar classes in the .NET Framework, see Working with Calendars. The JulianCalendar class can be used only to calculate dates in the Julian calendar.
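As a language-neutral illustration (not tied to the .NET class itself), the following Python sketch shows the leap-year rule that distinguishes the Julian calendar from its Gregorian successor, which is the calendar arithmetic this class exposes.

def is_julian_leap_year(year):
    # The Julian calendar inserts a leap day every 4 years, with no century exception.
    return year % 4 == 0

def is_gregorian_leap_year(year):
    # Gregorian reform: century years are leap years only when divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 illustrates the divergence: a leap year in the Julian calendar,
# a common year in the Gregorian calendar.
print(is_julian_leap_year(1900), is_gregorian_leap_year(1900))  # True False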
https://docs.microsoft.com/en-us/dotnet/api/system.globalization.juliancalendar?view=netcore-3.1
2020-07-02T23:11:49
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Exercise - Run the web application locally You have a SQL database that contains sample data. Later you'll deploy the Space Game website to App Service through Azure Pipelines. But first let's verify that you can bring up the application locally. We want to ensure that everything works before you run the pipeline. Fetch the branch from GitHub Here you fetch the database branch from GitHub. You then check out, or switch to, that branch. This branch contains the Space Game project that you worked with in the previous modules. It also contains an Azure Pipelines configuration. To fetch and check out the database branch from GitHub: In Visual Studio Code, open the integrated terminal. Run the following git commands to fetch a branch named database from the Microsoft repository and to switch to that branch. git fetch upstream database git checkout -b database upstream/database The format of these commands enables you to get starter code from the Microsoft GitHub repository. This repository is known as upstream. Shortly, you'll push this branch up to your GitHub repository. Your GitHub repository is known as origin. Optionally, in Visual Studio Code, open azure-pipelines.yml. Familiarize yourself with the initial configuration. The configuration resembles the ones that you created in the previous modules in this learning path. It builds only the application's Release configuration. For brevity, the configuration omits the triggers, manual approvals, and tests that you set up in previous modules. Note A more robust configuration might specify the branches that participate in the build process. For example, to help verify code quality, you might run unit tests each time you push up a change on any branch. You might also deploy the application to an environment that performs more exhaustive testing. But you do this deployment only when you have a pull request, when you have a release candidate, or when you merge code to master. For more information, see Implement a code workflow in your build pipeline by using Git and GitHub and Build pipeline triggers. Optional - Explore the database project If you're interested specifically in SQL Server, you can check out the database project. Find the project, Tailspin.SpaceGame.Database.sqlproj, in the Tailspin.SpaceGame.Database directory. We discussed this SQL Server Data Tools project earlier. The Tables directory contains .sql files that define the four SQL tables that you worked with in the previous part. Note You won't need to build the database project locally. But keep in mind that this project builds only on Windows. To see how the Space Game web app runs SQL queries against the database, open the RemoteDBRepository.cs file in the Tailspin.SpaceGame.Web directory. This example gets the achievements for a specific profile. These achievements appear when you select a player profile from the leaderboard. sql = string.Format("SELECT a.description from dbo.Achievements a JOIN dbo.ProfileAchievements pa on a.id = pa.achievementid WHERE pa.profileid = {0}", profileId); command = new SqlCommand(sql, conn); using (SqlDataReader reader = command.ExecuteReader()) { //get the array of achievements user.Achievements = new string[recordCount]; int i = 0; while (reader.Read()) { user.Achievements[i] = reader.GetString(0); i++; } } conn.Close(); Specify your database connection string Here you fetch the connection string for your database. You store it in a file named secrets.json. Doing so enables your web application, running locally, to connect to the database.
The project is already set up to read from this file. You just need to create the file and specify the connection string. Note The connection string is sensitive information. Because your secrets.json file contains the connection string, the file isn't placed under source control. Later you'll use a different approach to specify the connection string for App Service. Fetch the connection string from the Azure portal In the Azure portal, on the left, select SQL databases. Choose tailspindatabase. Under Settings, select Connection strings. Copy the connection string that appears on the ADO.NET tab. Notice that the connection string doesn't show your password. You'll specify your password shortly. Specify the connection string locally In Visual Studio Code, in the Tailspin.SpaceGame.Web directory, open Tailspin.SpaceGame.Web.csproj. Notice the entry for UserSecretsId. The web project uses this UserSecretsId GUID to locate your secrets.json file. The GUID in the secrets.json file matches the GUID that is the name of the directory where the secrets file is located. <UserSecretsId>d7faad9d-d27a-4122-89ff-b9376c13b153</UserSecretsId> In Visual Studio Code, open the terminal. Move to the Tailspin.SpaceGame.Web directory. cd Tailspin.SpaceGame.Web In a temporary file, paste your connection string. Then replace {your_password} with your SQL password. Copy the entire connection string back to the clipboard. In the integrated terminal, create a Bash variable that specifies your connection string. Replace {your_connection_string} with your connection string. DB_CONNECTION_STRING="{your_connection_string}" Here's a complete example: DB_CONNECTION_STRING=… Run the following dotnet user-secrets set command to write your connection string to secrets.json. dotnet user-secrets set "ConnectionStrings:DefaultConnection" "$DB_CONNECTION_STRING" On Windows, the file is written to %APPDATA%\Microsoft\UserSecrets\d7faad9d-d27a-4122-89ff-b9376c13b153\secrets.json. On macOS, the file is written to ~/.microsoft/usersecrets/d7faad9d-d27a-4122-89ff-b9376c13b153/secrets.json. As an optional step, you can print this file to verify its contents. Run dotnet user-secrets list to print the contents of secrets.json. dotnet user-secrets list You see your connection string. Here's an example: ConnectionStrings:DefaultConnection = … Run the web application locally Here you build and run the web application locally to verify that the application can connect to the database. In Visual Studio Code, navigate to the terminal window. Run this dotnet build command to build the application: dotnet build --configuration Release Set the ASPNETCORE_ENVIRONMENT Bash variable to Development. Export the variable. export ASPNETCORE_ENVIRONMENT=Development This setting tells ASP.NET Core that you're in development mode, not in production mode. In development mode, it's safe to read from the secrets.json file. Run this dotnet run command to run the application: dotnet run --configuration Release --no-build On a new browser tab, navigate to see the running application. You see this interface: Tip In your browser, if you see an error that's related to a privacy or certificate error: - In your terminal, use Ctrl+C to stop the application. - Run dotnet dev-certs https --trust. - When prompted, select Yes. For more information, see the blog post Developing locally with ASP.NET Core under HTTPS, SSL, and self-signed certificates. - After your computer trusts your local SSL certificate, run the dotnet run command again. - From a new browser tab, go to see the running application.
You can interact with the page, including the leaderboard. When you select a player's name, you see details about that player. Unlike in previous modules, the leaderboard data is read from your Azure SQL Database. When you finish, return to the terminal window. Use Ctrl+C to stop the application.
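For readers who want to double-check where the secret ends up on disk, here is a minimal Python sketch (not part of the original walkthrough) that derives the expected secrets.json path from a project's UserSecretsId and confirms that the key written by dotnet user-secrets set is present. The GUID is the one shown in the .csproj snippet above; the Linux location is an assumption based on the macOS convention.

import json
import os
import sys
from pathlib import Path

# UserSecretsId from Tailspin.SpaceGame.Web.csproj (see the snippet above).
USER_SECRETS_ID = "d7faad9d-d27a-4122-89ff-b9376c13b153"

def secrets_path(user_secrets_id: str) -> Path:
    """Return the expected secrets.json location for the current OS."""
    if sys.platform.startswith("win"):
        # Windows: %APPDATA%\Microsoft\UserSecrets\<id>\secrets.json
        return Path(os.environ["APPDATA"]) / "Microsoft" / "UserSecrets" / user_secrets_id / "secrets.json"
    # macOS (and, by assumption, Linux): ~/.microsoft/usersecrets/<id>/secrets.json
    return Path.home() / ".microsoft" / "usersecrets" / user_secrets_id / "secrets.json"

path = secrets_path(USER_SECRETS_ID)
print(f"Expecting secrets at: {path}")
if path.exists():
    secrets = json.loads(path.read_text())
    # The CLI stores the flattened key name; avoid printing the secret itself.
    print("ConnectionStrings:DefaultConnection present:",
          "ConnectionStrings:DefaultConnection" in secrets)

Running this after the dotnet user-secrets set step should report that the connection string key is present without echoing the sensitive value.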
https://docs.microsoft.com/en-us/learn/modules/manage-database-changes-in-azure-pipelines/5-run-locally
2020-07-02T23:30:38
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Drupal. First, sign in to your account, click on “Structure” in the top bar, and then on the “Block layout” item. You will see the administration options for each block that your page consists of. Scroll all the way down and, on the “Footer fifth” item, click on “Place block”. In the dialog box that appears, click on the “+ Add custom block” button to open the custom block editor. In Block description, enter any description you like. In the “Body” section, choose “Full HTML” in the “Text format” drop-down menu and click on the <> Source icon in the text editor to enable HTML code entry. Then copy the package code from the website administration into the text field and save your changes. In the next step that is displayed, select the name of the created block and uncheck “Display title” to prevent the name from appearing on the website. In the “Region” drop-down menu, select “Footer fifth”, for example, and click “Save block”. In this step, you can also set up a specific sub-page of the website where the mluvii button is shown; in this case, it will be available from all subpages.
https://docs.mluvii.com/guide/en/for-it-specialists/widget-inserting/drupal.html
2020-07-02T22:11:01
CC-MAIN-2020-29
1593655880243.25
[array(['../../assets/drupal_001.png', None], dtype=object) array(['../../assets/drupal_002.png', None], dtype=object) array(['../../assets/drupal_003.png', None], dtype=object) array(['../../assets/drupal_004.png', None], dtype=object)]
docs.mluvii.com
As an OpenShift Container Platform cluster administrator, you can deploy the EFK stack to aggregate logs for a range of OpenShift Container Platform services.

Aggregated logging is supported using the json-file or journald driver in Docker. The Docker log driver is set to journald as the default for all nodes. See Updating Fluentd’s Log Source After a Docker Log Driver Update for more information about switching between json-file and journald. Fluentd automatically determines which log driver (journald or json-file) the container runtime is using. When the log driver is set to journald, Fluentd reads journald logs. When set to json-file, Fluentd reads from /var/log/containers. See Managing Docker Container Logs for information on json-file logging driver options to manage container logs and prevent filling node disks.

Before deploying, create the logging project:

$ oc adm new-project logging --node-selector=""
$ oc project logging

Parameters for the EFK deployment may be specified in the inventory host file to override the default parameter values. Read the Elasticsearch and Fluentd sections before choosing parameters, then run the openshift-logging.yml Ansible playbook:

$ ansible-playbook [-i </path/to/inventory>] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

The logging pods will eventually enter Running status. For additional details about the status of the pods during deployment, retrieve the associated events:

$ oc describe pods/<pod_name>

Check the logs if the pods do not run successfully:

$ oc logs -f <pod_name>

This section describes adjustments that you can make to the deployed components. If you set openshift_logging_use_ops to true in your inventory file, most of the following discussion also applies to the operations cluster, just with the names changed to include -ops (for example, use --selector component=es-ops instead of --selector component=es when iterating over the Elasticsearch deployment configurations returned by oc get deploymentconfig --selector component=es).

For container logs, Fluentd determines which log driver Docker is using, json-file or journald, and automatically reads the logs from that source.

Before changing the logging deployment, scale Elasticsearch down to zero and scale Fluentd so it does not match any other nodes. Then make the changes and scale Elasticsearch and Fluentd back up.

The openshift_logging Ansible role provides a ConfigMap from which Curator reads its configuration. You may edit or replace this ConfigMap to reconfigure Curator. Currently, the logging-curator ConfigMap is used to configure both your ops and non-ops Curator instances, including any .operations configurations.

To remove the logging deployment, run the playbook with openshift_logging_install_logging set to False:

$ ansible-playbook [-i </path/to/inventory>] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml \
    -e openshift_logging_install_logging=False

By default, aggregated logging uses the journald log driver unless json-file was specified during installation. You can change the log driver between journald and json-file as needed. Fluentd determines the driver Docker is using by checking the /etc/docker/daemon.json and /etc/sysconfig/docker files.
You can determine which driver Docker is using with the docker info command:

# docker info | grep Logging
Logging Driver: journald

To change between json-file and journald after installation, update the Fluentd log source.

When restarting the Elasticsearch cluster after such a change: once complete, for each deployment configuration (dc) you have for an Elasticsearch cluster, run oc rollout latest to deploy the latest version of the dc object:

$ oc rollout latest <dc_name>

You will see a new pod deployed. Once the pod has two ready containers, you can move on to the next dc.

Once all dcs have been rolled out and the restart is complete, enable all external communications to the Elasticsearch cluster. Edit your non-cluster logging service (for example, logging-es, logging-es-ops) to match the Elasticsearch pods running again:

$ oc patch svc/logging-es -p '{"spec":{"selector":{"component":"es","provider":"openshift"}}}'
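As a quick operator aid, here is a minimal Python sketch (not from the OpenShift documentation) of one way to check which Docker log driver is configured by reading the same files the text says Fluentd consults. The daemon.json key name and the OPTIONS line format in /etc/sysconfig/docker are assumptions based on common Docker and RHEL conventions; docker info remains the authoritative check.

import json
import re
from pathlib import Path

def docker_log_driver() -> str:
    """Best-effort guess at the configured Docker log driver."""
    daemon_json = Path("/etc/docker/daemon.json")
    if daemon_json.exists():
        config = json.loads(daemon_json.read_text())
        if "log-driver" in config:
            return config["log-driver"]

    sysconfig = Path("/etc/sysconfig/docker")
    if sysconfig.exists():
        # Assumed form: OPTIONS='... --log-driver=journald ...'
        match = re.search(r"--log-driver[= ](\S+)", sysconfig.read_text())
        if match:
            return match.group(1).strip("'\"")

    return "json-file"  # Docker's documented default when nothing is configured

print("Logging Driver:", docker_log_driver())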
https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html
2020-07-02T23:15:26
CC-MAIN-2020-29
1593655880243.25
[]
docs.openshift.com
Computed Identifiers

VRS provides an algorithmic solution to deterministically generate a globally unique identifier from a VRS object itself. All valid implementations of the VRS Computed Identifier will generate the same identifier when the objects are identical, and will generate different identifiers when they are not. The VRS Computed Digest algorithm obviates centralized registration services, allows computational pipelines to generate “private” ids efficiently, and makes it easier for distributed groups to share data.

A VRS Computed Identifier for a VRS concept is computed as follows:

- If the object is an Allele, normalize it.
- Generate binary data to digest. If the object is a Sequence string, encode it using UTF-8. Otherwise, serialize the object using Digest Serialization.
- Generate a truncated digest from the binary data.
- Construct an identifier based on the digest and object type.

The following diagram depicts the operations necessary to generate a computed identifier. These operations are described in detail in the subsequent sections.

Figure: Serialization, Digest, and Computed Identifier Operations. The sha512t24u, ga4gh_digest, and ga4gh_identify functions depict the dependencies among functions. SHA512/192 is SHA-512 truncated at 192 bits, using the systematic name recommended by SHA-512 (§5.3.6). base64url is the official name of the variant of Base64 encoding that uses a URL-safe character set. [figure source]

Note: Most implementation users will need only the ga4gh_identify function. We describe the ga4gh_serialize, ga4gh_digest, and sha512t24u functions here primarily for implementers.

Requirements

Implementations MUST adhere to the following requirements:

- Implementations MUST use the normalization, serialization, and digest mechanisms described in this section when generating GA4GH Computed Identifiers. Implementations MUST NOT use any other normalization, serialization, or digest mechanism to generate a GA4GH Computed Identifier.
- Implementations MUST ensure that all nested objects are identified with GA4GH Computed Identifiers. Implementations MAY NOT reference nested objects using identifiers in any namespace other than ga4gh.

Note: The GA4GH schema MAY be used with identifiers from any namespace. For example, a SequenceLocation may be defined using a sequence_id = refseq:NC_000019.10. However, an implementation of the Computed Identifier algorithm MUST first translate sequence accessions to GA4GH SQ accessions to be compliant with this specification.

Digest Serialization

Digest serialization converts a VRS object into a binary representation in preparation for computing a digest of the object. The Digest Serialization specification ensures that all implementations serialize variation objects identically, and therefore that the digests will also be identical. VRS provides validation tests to ensure compliance.

Important: Do not confuse Digest Serialization with JSON serialization or other serialization forms. Although Digest Serialization and JSON serialization appear similar, they are NOT interchangeable and will generate different GA4GH Digests.

Although several proposals exist for serializing arbitrary data in a consistent manner ([Gibson], [OLPC], [JCS]), none have been ratified. As a result, VRS defines a custom serialization format that is consistent with these proposals but does not rely on them for definition; it is hoped that a future ratified standard will be forward compatible with the process described here.
The first step in serialization is to generate message content. If the object is a string representing a Sequence, the serialization is the UTF-8 encoding of the string. Because this is a common operation, implementations are strongly encouraged to precompute GA4GH sequence identifiers as described in Required External Data. If the object is a composite VRS object, implementations MUST:

- ensure that objects are referenced with identifiers in the ga4gh namespace
- replace nested identifiable objects (i.e., objects that have id properties) with their corresponding digests
- order arrays of digests and ids by Unicode Character Set values
- filter out fields that start with underscore (e.g., _id)
- filter out fields with null values

The second step is to JSON serialize the message content, subject to a set of REQUIRED constraints. The criterion for the digest serialization method was that it must be relatively easy and reliable to implement in any common computer language.

Example

allele = models.Allele(location=models.SequenceLocation(
    sequence_id="ga4gh:SQ.IIB53T8CNeJJdUqzn9V_JnRtQadwWCbl",
    interval=simple_interval),
    state=models.SequenceState(sequence="T"))

ga4gh_serialize(allele)

Gives the following binary (UTF-8 encoded) data:

{"location":"u5fspwVbQ79QkX6GHLF8tXPCAXFJqRPx","state":{"sequence":"T","type":"SequenceState"},"type":"Allele"}

For comparison, here is one of many possible JSON serializations of the same object:

allele.for_json()
{ … }

Truncated Digest (sha512t24u)

The sha512t24u truncated digest algorithm computes an ASCII digest from binary data. The method uses two well-established standard algorithms: the SHA-512 hash function, which generates a binary digest from binary data, and Base64 URL encoding, which encodes binary data using printable characters. Computing the sha512t24u truncated digest for binary data consists of three steps:

- Compute the SHA-512 digest of the binary data.
- Truncate the digest to the left-most 24 bytes (192 bits). See Truncated Digest Timing and Collision Analysis for the rationale for 24 bytes.
- Encode the truncated digest as a base64url ASCII string.

>>> import base64, hashlib
>>> def sha512t24u(blob):
...     digest = hashlib.sha512(blob).digest()
...     tdigest = digest[:24]
...     tdigest_b64u = base64.urlsafe_b64encode(tdigest).decode("ASCII")
...     return tdigest_b64u
>>> sha512t24u(b"ACGT")
'aKF498dAxcJAqme6QYQ7EZ07-fiw8Kw2'

Identifier Construction

The final step of generating a computed identifier for a VRS object is to generate a W3C CURIE formatted identifier, which has the form:

prefix ":" reference

The GA4GH VRS constructs computed identifiers as follows:

"ga4gh" ":" type_prefix "." <digest>

Warning: Do not confuse the W3C CURIE prefix (“ga4gh”) with the type prefix. Type prefixes used by VRS include SQ for Sequence and VA for Allele, as used in the examples in this document.

For example, the identifier for the allele example under Digest Serialization gives:

ga4gh:VA.EgHPXXhULTwoP4-ACfs-YCXaeUQJBjH_
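To tie the pieces above together, here is a minimal end-to-end sketch in Python (not taken from the specification itself) that combines the serialized message content, sha512t24u, and the CURIE construction rule. The compact, key-sorted JSON form is inferred from the serialization example above rather than quoted from the REQUIRED constraints, and the expected identifier is the one the text lists for that example.

import base64
import hashlib
import json

def sha512t24u(blob: bytes) -> str:
    # SHA-512 digest, truncated to the left-most 24 bytes, base64url-encoded.
    return base64.urlsafe_b64encode(hashlib.sha512(blob).digest()[:24]).decode("ascii")

def serialize_content(message_content: dict) -> bytes:
    # Assumption: compact separators and lexicographically sorted keys, which
    # reproduces the serialized Allele example shown above.
    return json.dumps(message_content, sort_keys=True, separators=(",", ":")).encode("utf-8")

def ga4gh_identify(type_prefix: str, serialized: bytes) -> str:
    # "ga4gh" ":" type_prefix "." <digest>
    return f"ga4gh:{type_prefix}.{sha512t24u(serialized)}"

# Message content for the Allele example, with the nested SequenceLocation
# already replaced by that location's digest (per the composite-object rules).
allele_content = {
    "location": "u5fspwVbQ79QkX6GHLF8tXPCAXFJqRPx",
    "state": {"sequence": "T", "type": "SequenceState"},
    "type": "Allele",
}

serialized = serialize_content(allele_content)
print(serialized.decode("utf-8"))
print(ga4gh_identify("VA", serialized))
# The text above lists ga4gh:VA.EgHPXXhULTwoP4-ACfs-YCXaeUQJBjH_ as the
# computed identifier for this example.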
https://vr-spec.readthedocs.io/en/master/impl-guide/computed_identifiers.html
2020-07-02T21:37:31
CC-MAIN-2020-29
1593655880243.25
[array(['../_images/id-dig-ser.png', '../_images/id-dig-ser.png'], dtype=object) ]
vr-spec.readthedocs.io
Coders 4 Charities is kicking off, and it has been far more successful than we expected. Doug and I had both asked each other whether we thought anyone would show up, since we have had registration open for two months. We have 29 people here who have given up their weekend, and 5 charities that are getting excellent software that will help them run their organizations better after the weekend.

The Crew Early In The Morning

Rock Band Rooms (208 & 207)

Tonight is the Rock Band contest at 11:00 pm. I will definitely have video for that on YouTube!
http://docs.geekswithblogs.net/jjulian/archive/2008/04/26/121666.aspx
2020-07-02T23:08:33
CC-MAIN-2020-29
1593655880243.25
[]
docs.geekswithblogs.net
Struct nannou::ContextBuilder

pub struct ContextBuilder<'a> {
    pub gl_attr: GlAttributes<&'a Context>,
    // some fields omitted
}

Object that allows you to build Contexts.

Fields

gl_attr: GlAttributes<&'a Context> — The attributes to use to create the context.

Methods

- Initializes a new ContextBuilder with default values.
- Sets how the backend should choose the OpenGL API and version.
- Sets the desired OpenGL context profile.
- Sets the debug flag for the OpenGL context. The default value for this flag is cfg!(debug_assertions), which means that it's enabled when you run cargo build and disabled when you run cargo build --release.
- Sets the robustness of the OpenGL context. See the docs of Robustness.
- Requests that the window has vsync enabled. By default, vsync is not enabled.
- Share the display lists with the given Context.
- Sets the multisampling level to request. A value of 0 indicates that multisampling must not be enabled. Panics if samples is not a power of two.
- Sets the number of bits in the depth buffer.
- Sets the number of bits in the stencil buffer.
- Sets the number of bits in the color buffer.
- Request the backend to be stereoscopic.
- Sets whether sRGB should be enabled on the window. The default value is false.
- Sets whether double buffering should be enabled. The default value is None. Platform-specific: this option is taken into account on macOS, Linux using GLX with X, and Windows using WGL.
- Sets whether hardware acceleration is required. The default value is Some(true). Platform-specific: this option is taken into account on macOS, Linux using EGL with either X or Wayland, Windows using EGL or WGL, and Android using EGL.
https://docs.rs/nannou/0.8.0/nannou/struct.ContextBuilder.html
2018-12-09T23:26:10
CC-MAIN-2018-51
1544376823228.36
[]
docs.rs
Struct nannou::Frame

pub struct Frame { /* fields omitted */ }

A Frame represents all graphics for the application for a single "frame" of time. The Frame itself consists of a WindowFrame for each window in the App.

Methods

- Return the part of the Frame associated with the given window.
- Return the part of the Frame associated with the main window.
- Return an iterator yielding each window::Id along with its WindowFrame for drawing.
- Short-hand for clearing all windows with the given color.
https://docs.rs/nannou/0.8.0/nannou/struct.Frame.html
2018-12-10T00:49:52
CC-MAIN-2018-51
1544376823228.36
[]
docs.rs