Columns: content (string, 0-557k chars) | url (string, 16-1.78k chars) | timestamp (timestamp[ms]) | dump (string, 9-15 chars) | segment (string, 13-17 chars) | image_urls (string, 2-55.5k chars) | netloc (string, 7-77 chars)
Version: 2006R1
Date: May 23, 2006
The latest version of this document can always be found here:
Tool chain: Toolchain Release 2006R1
u-boot: u-boot 1.1.3 Release 2006R1
Host platform: SuSE Linux 9.2 or above
Target boards: STAMP & EZKIT
Note: Other similar host platforms are also supported, but they have not been well tested so far.
Source files: uClinux-dist_2006R1.tar.bz2
Linux ELF files: linux-bf533-stamp, linux-bf533-ezkit, linux-bf537, linux-bf561
Compressed Linux images: uImage-bf533-stamp, uImage-bf533-ezkit, uImage-bf537, uImage-bf561
This document: release_notes_2006R1.pdf
Compressed archives of test results: test_results_bf533_STAMP_2006R1.tar.gz, test_results_bf533_EZKIT_2006R1.tar.gz, test_results_bf537_2006R1.tar.gz, test_results_bf561_2006R1.tar.gz
Summary of test results: test_results_summary_2006R1
A full list of known issues can be found at:
There are also some issues in the LTP test cases. They are recorded as bugs 352, 744, 745, 1010, 1050, 1096, 1211, 1212, 1218, 1219 and 1263.

To build the kernel:
1. Install Toolchain Release 2006R1. Go to for more information.
2. Download the source code of the uClinux for Blackfin project, release 2006R1. Go to
3. Uncompress uClinux-dist_2006R1.tar.bz2 into the working directory:
cp uClinux-dist_2006R1.tar.bz2 /(WORK_DIR)
cd /(WORK_DIR)
bunzip2 uClinux-dist_2006R1.tar.bz2
tar -xvf uClinux-dist_2006R1.tar
4. Compile the source using the following commands:
cd uClinux-dist
make clean
make menuconfig (save and exit without making any changes)
make
5. Find the compiled Blackfin executable linux in $(WORK_DIR)/uClinux-dist/images.
6. This file is the one downloaded to the target board.

To load and boot the Linux ELF image on the target board:
1. Use a male-female 1-1 serial cable to connect the board to the host computer.
2. Use minicom or some other serial communications utility to configure the serial port with the following parameters. If running minicom for the first time, run "minicom -s" to set up the port.
Serial device = /dev/ttyS0
Baud rate = the baud rate selected in the kernel menuconfig (the default value is 57600)
Number of bits = 8
Parity = None
Stop bits = 1
3. Make sure the BMODE pins on the target board are set to 00. If u-boot loads automatically on reset, the pins are already set correctly.
4. Make sure a tftp server is installed on the host machine. Copy linux, built in the steps above, from uClinux-dist/images/ to /tftpboot on the host PC.
5. Load the linux file with the following boot loader commands. Make sure the ipaddr (target board IP) and serverip (host IP) are correct.
STAMP> setenv ipaddr x.y.z.n
STAMP> setenv serverip x.y.z.m
STAMP> saveenv
STAMP> tftp 0x1000000 linux
STAMP> bootelf 0x1000000
6. The kernel should then boot.

To load Linux using the bootm command, the Linux ELF image has to be converted to the u-boot image format (this image is the one used with the bootm command). The following subsections explain how to build compressed and uncompressed Linux images. Compressed Linux images can be found under the folder "uClinux-dist/images", but you can also generate one yourself as follows.
1. Generate the binary file from the ELF file, using the following command:
$ bfin-uclinux-objcopy -O binary linux linux.bin
2. Compress the binary file obtained above, using the following command:
$ gzip -9 linux.bin
3. Build the final Linux image, using the following command:
(WORK_DIR)/u-boot_1.1.3/tools/mkimage -A blackfin -O linux -T kernel -C gzip -a 0x1000 -e 0x1000 -n "Bfin uClinux Kernel" -d linux.bin.gz uImage
Use the following command to build an uncompressed Linux image:
(WORK_DIR)/u-boot_1.1.3/tools/mkimage -A blackfin -O linux -T kernel -C none -a 0x1000 -e 0x1000 -n "Bfin uClinux kernel" -d linux.bin uImage
Then write the image to flash and configure u-boot to boot it (a Python wrapper for these image-building steps is sketched at the end of these notes):
STAMP> tftp 0x1000000 uImage
STAMP> protect off all
STAMP> erase 0x20040000 0x203EFFFF
STAMP> cp.b 0x1000000 0x20040000 $(filesize)
STAMP> setenv bootcmd bootm 0x20040000
STAMP> save
STAMP> reset

To change the kernel configuration:
1. Run make menuconfig in the uClinux for Blackfin project.
2. Select "Kernel/Library/Defaults Selection" → "Customize Kernel Settings" and exit.
3. In the kernel configuration, processor- and board-specific options can be changed under "Processor type and features", such as cache status, CPU, DMA, etc.
4. Driver-specific options are in their respective menus, such as the Ethernet driver in "Networking support", the sound card driver in "Sound", the video driver in "Graphics support", etc.
5. Save and exit, then make the image again as mentioned before. The kernel changes take effect after you load and run the new image.

To change the user/application configuration:
1. Run make menuconfig in the uClinux for Blackfin project.
2. Select the Blackfin architecture in the menu "Vendor/Product Selection" → "AnalogDevices Product".
3. Select "Kernel/Library/Defaults Selection" → "Customize Vendor/User Settings" and exit.
4. In the user configuration, applications can be selected for the build and debugging information can be enabled.
5. In order to configure uClibc, go into the folder "uClibc" and run menuconfig there.
6. After menuconfig is done, make the image again as mentioned before. The newly selected applications can be found in the romfs after you load and run the new image.

To build applications as FD-PIC ELF or FLAT binaries:
1. Run make menuconfig in the uClinux for Blackfin project.
2. Select the Blackfin architecture in the menu "Vendor/Product Selection" → "AnalogDevices Product".
3. Select "Kernel/Library/Defaults Selection" → "Customize Vendor/User Settings" and exit.
4. Select "Blackfin Build Options" → "Build FD-PIC ELF binaries" to build applications as ELF binaries. Unselect it to build FLAT binaries.
5. Change into the folder uClibc and run menuconfig.
6. Select "Target Architecture" → "bfinfdpic" to build uClibc for FD-PIC ELF binaries, or "Target Architecture" → "bfin" to build it for FLAT binaries.
7. Change into the folder uClinux-dist and make the image.

To debug:
1. To debug an application, refer to the document "gdb_guide_bfin.txt" in the patch folder bfin_patch/kgdb_patch.
2. To do source-level kernel debugging with kgdb, refer to the README file in the patch folder bfin_patch/kgdb_patch. After applying the kgdb patch to the kernel, a simple guide "kgdb_bfin.txt" can be found in the subfolder "linux-2.6.x/Documentation/blackfin/".

To report a bug:
1. Go to the following Blackfin uClinux bug tracker page,
2. If the bug is not already reported, click the "Submit New" button to report a new bug.
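As referenced above, the objcopy/gzip/mkimage sequence can be wrapped in a small helper script. This is a minimal sketch only, assuming the Blackfin toolchain binaries and u-boot's mkimage are on the PATH and that the ELF file is named linux in the current directory; adjust paths to your layout.

```python
#!/usr/bin/env python3
"""Hypothetical helper that automates the uImage build steps described above."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_uimage(elf="linux", compressed=True):
    # 1. Generate the raw binary from the ELF file.
    run(["bfin-uclinux-objcopy", "-O", "binary", elf, "linux.bin"])
    payload, comp = "linux.bin", "none"
    if compressed:
        # 2. Compress the binary (gzip -9, overwrite any old file).
        run(["gzip", "-f", "-9", "linux.bin"])
        payload, comp = "linux.bin.gz", "gzip"
    # 3. Wrap the payload in the u-boot image format expected by bootm.
    run(["mkimage", "-A", "blackfin", "-O", "linux", "-T", "kernel",
         "-C", comp, "-a", "0x1000", "-e", "0x1000",
         "-n", "Bfin uClinux Kernel", "-d", payload, "uImage"])

if __name__ == "__main__":
    build_uimage()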
https://docs.blackfin.uclinux.org/doku.php?id=uclinux-dist:release-notes:2006r1
2019-03-18T15:43:08
CC-MAIN-2019-13
1552912201455.20
[]
docs.blackfin.uclinux.org
VXML Form Block
Use this block to embed VXML code directly into a callflow diagram.
Name Property
Find this property's details under Common Properties.
Block Notes Property
Can be used for both callflow and workflow blocks to add comments.
Exceptions Property
Find this property's details under Common Properties.
Enable Status Property
Find this property's details under Common Properties.
Body Property
This property contains all the executable content of the <form> element before directing to a block or external application.
- Click opposite Body under Value. This brings up the button.
- Click the button to bring up the Configure Body dialog box.
- Enter the executable content of the <form> element.
- When through, click OK.
Note: The editor does not validate against the VXML schema.
Gotostatements Property
This property allows you to configure the output nodes of the block. An output port is created for every GOTOStatement item with a target enabled.
- Click opposite Gotostatements under Value. This brings up the button.
- Click the button to bring up the Gotostatements dialog box.
- Click Add.
- When Target is disabled, select ProjectFile or URL to indicate the destination application type. When ProjectFile is selected, you can click the button to enter the URI. When URL is selected, you can click the URI button and specify a literal or a variable.
- When URL is selected, you can also click the Parameters button to select a system variable.
- For each goto statement, specify at least one event, condition, or target (you are not required to complete all three fields). An output port is created for every goto statement.
- Name--Composer uses the name of the goto statement to label the outport.
- Event--Use to select the event that will trigger the goto statement.
- Condition--The guard condition for this goto statement. The goto statement is selected only if the condition evaluates to true.
- Target--If a target is set, an outport for that goto statement will appear and you can connect it to other blocks. If a target is not set, an outport for that goto statement does not appear; in this case, you can add some VXML code to handle the event.
This page was last modified on November 21, 2016, at 07:55.
https://docs.genesys.com/Documentation/Composer/8.1.4/Help/VXMLFormBlock
2019-03-18T15:28:57
CC-MAIN-2019-13
1552912201455.20
[]
docs.genesys.com
background_set_alpha_from_background(ind, back); Returns: N/A This function uses the value/saturation of one background and multiplies it with the alpha of the target background. Ideally the background being used to generate the new alpha map should be greyscale, with the white areas having an equivalent alpha value of 1 (opaque), the black areas being equivalent to alpha 0 (transparent), and the grey areas being an alpha in between 0 and 1. The background that you are setting the alpha of cannot be a permanent resource as this will give an error, so you should be using the background_duplicate function first to duplicate the resource and then set the alpha on that. You will also need to duplicate the source image to be used with the same function, then set the alpha using both the duplicated images. Below is an image that illustrates how this function works with sprites, but the same is true for backgrounds: bck = background_duplicate(bck_Clouds); var t_bck = background_duplicate(bck_Clouds_Alpha); background_set_alpha_from_background(bck, t_bck); background_delete(t_bck); The above code duplicates two background resources and then changes the first one's alpha using the value/saturation of the colour data from the resource "bck_Clouds_Alpha". The temporary background that was created to hold the "bck_Clouds_Alpha" data is then deleted to free up its memory.
https://docs.yoyogames.com/source/dadiospice/002_reference/game%20assets/backgrounds/background_set_alpha_from_background.html
2019-03-18T16:27:12
CC-MAIN-2019-13
1552912201455.20
[]
docs.yoyogames.com
Available on the Open Source plans.
Configure /var/lib/megam/regions.yml to modify the regions, allowing a user to choose the region in our product console (UI - nilavu).
regions:
  # The name of the region to launch
  Sydney:
    # The flag of the region launched
    flag: '../../images/regions/au.png'
    # The billable currency of the region
    currency: '€'
    # The cost of the cpu per hour in the billable currency
    cpu_cost_per_hour: '0.01'
    # The cost of the ram per hour in the billable currency
    ram_cost_per_hour: '0.02'
    # The cost of the storage per hour in the billable currency
    storage_cost_per_hour: '0.01'
    # The maximum cpu the region has.
    max_cpu: '10'
    # The maximum ram the region has.
    max_ram: '256 GB'
    # The maximum storage the region has.
    max_storage: '500 GB'
    # The available ip options for the region.
    private_ipv4: true
    public_ipv4: false
    private_ipv6: false
    public_ipv6: false
    # Minimum 3 flavors are needed.
    flavors:
      # The different types of launch options a customer can choose
To begin with, there are no active flavors displayed on the Launch page. An admin can create flavors from the Vertice admin panel (a loader sketch follows below).
Multi-region support and advanced scaling are available in the enterprise plan.
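As a quick sanity check before restarting the console, a regions.yml like the one above can be validated with a few lines of Python. This is only a sketch, assuming PyYAML is installed; the required keys are taken from the sample above, and the file path is the one given in this page.

```python
"""Minimal check of /var/lib/megam/regions.yml against the keys shown above."""
import yaml

REQUIRED = ["flag", "currency", "cpu_cost_per_hour", "ram_cost_per_hour",
            "storage_cost_per_hour", "max_cpu", "max_ram", "max_storage"]

with open("/var/lib/megam/regions.yml") as fh:
    config = yaml.safe_load(fh) or {}

for name, region in config.get("regions", {}).items():
    missing = [key for key in REQUIRED if key not in region]
    if missing:
        print(f"{name}: missing keys {missing}")
    flavors = region.get("flavors") or {}
    # The sample comments say a minimum of 3 flavors are needed.
    if len(flavors) < 3:
        print(f"{name}: define at least 3 flavors ({len(flavors)} found)")
```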
https://docs.megam.io/configuration/regions/
2019-03-18T16:21:35
CC-MAIN-2019-13
1552912201455.20
[]
docs.megam.io
Secure boot
The revoked list contains items that are no longer trusted and may not be loaded. If an image hash is in both databases, the revoked signatures database (dbx) takes precedence. You should contact your firmware manufacturer for tools and assistance in creating these databases.
Boot sequence
https://docs.microsoft.com/de-de/windows-hardware/design/device-experiences/oem-secure-boot
2019-03-18T15:54:46
CC-MAIN-2019-13
1552912201455.20
[]
docs.microsoft.com
Configuring pyCGA for HGVA
Configuration parameters can be passed as a JSON file, a YAML file or a Python dictionary. Loading the configuration is the first step in using the Python client. We will use the ConfigClient class, passing it the path of the configuration file or the dictionary with the configuration. The instance created is then passed to the client. Once the library is imported and configured, you can proceed to run the examples below.
Examples
Getting information about genomic variants
Getting information about projects
Getting information about studies
Getting information about samples
Getting information about cohorts
http://docs.opencb.org/pages/diffpagesbyversion.action?pageId=328215&selectedPageVersions=19&selectedPageVersions=18
2019-03-18T16:18:52
CC-MAIN-2019-13
1552912201455.20
[]
docs.opencb.org
Contents
Tags help with filtering objects and grouping disparate objects. For example, you could define a "Project X" tag and associate that tag with all the devices, IP addresses, etc. that are associated with Project X. Tags can be placed on most objects in Device42, and most reports can be filtered based on tags.
Creating tags
From the tags list page, click Add Tag to create a tag. Click on a tag to edit the tag. For each tag, you give it a name and a slug (essentially, a unique resource identifier). In the example above, 2 devices and 1 IP address were "tagged" with the tag "nh4th". An example of tagging a device with 'nh4th' is shown below.
Merging tags
If you end up with multiple tags that you would like to merge, e.g. due to misspelling or duplication, you can do so easily in Device42. To merge two or more tags, first browse to the tags page in Device42 under Tools > Tags. From the tags list page, select the tags that you would like to merge and, from the Action menu, click "Merge Selected Tags". You should receive a confirmation message, and clicking "Okay" will confirm the merge.
https://docs.device42.com/tools/tags/
2019-03-18T16:37:01
CC-MAIN-2019-13
1552912201455.20
[array(['/media/images/wpid-media_14144930840731.png', 'Creating tags'], dtype=object) array(['/media/images/wpid-media_14144932471591.png', None], dtype=object) array(['/media/images/wpid-media_14144934131171.png', None], dtype=object) array(['/media/images/tags/2016-04-18-tags-01.png', 'Merging tags'], dtype=object) array(['/media/images/tags/2016-04-18-tags-02.png', 'Merging tags'], dtype=object) array(['/media/images/tags/2016-04-18-tags-03.png', 'Merging tags'], dtype=object) ]
docs.device42.com
The network-setup-control interface
network-setup-control allows read/write access to Netplan configuration files. This is restricted because it gives access to system network configuration, which can contain network security details.
Auto-connect: no
Requires snapd version 2.22+.
This is a snap interface. See Interface management and Supported interfaces for further details on how interfaces are used.
https://docs.snapcraft.io/the-network-setup-control-interface/7885
2019-03-18T15:42:36
CC-MAIN-2019-13
1552912201455.20
[]
docs.snapcraft.io
administrators for backup through inSync Client.
- inSync does not back up network locations, due to a VSS limitation.
The global variables listed here are custom variables defined in inSync that automatically expand to the defined path, as mentioned in the Usage Example column.
https://docs.druva.com/Endpoints/040_Backup_and_Restore/020_Configure_folders_for_backup_on_user_laptops/030_Configure_custom_folders_for_backup
2022-05-16T11:11:10
CC-MAIN-2022-21
1652662510117.12
[]
docs.druva.com
Use Volume Shadow Copy Service (VSS) for Windows device backups
Overview
The VSS feature allows you to take volume backups while applications on a device continue to write to those volumes. VSS captures and copies stable images for backup on devices without affecting the performance and stability of services. inSync uses either of these methods for volume backups:
- Backup using VSS: When backup starts, inSync creates a snapshot of the backup folders using VSS and creates the backup from this snapshot. If snapshot creation fails, the inSync backup operation fails and volume backups are not taken.
- Backup from live data: inSync backs up directly from the live data on the device.
Enable backup using VSS
By default, backup using VSS is enabled on your Windows computer.
Enable backup from live data
To enable backup from live data:
- Submit a request to Support requesting them to enable backup from live data for all your users.
https://docs.druva.com/Endpoints/040_Backup_and_Restore/060_Configure_resources_consumed_during_backup/060_Use_VSS_for_Windows_laptop_backups
2022-05-16T12:08:59
CC-MAIN-2022-21
1652662510117.12
[]
docs.druva.com
Git terminology
The following are commonly used Git terms.
Repository
Fork
When you want to contribute to someone else's repository, you make a copy of it. This copy is called a fork. The process is called "creating a fork."
Push and pull
After you save a local copy of a repository and modify the files on your computer, you can upload the changes to GitLab. This action is known as pushing to the remote, because you use the command git push.
When the remote repository changes, your local copy is behind. You can update your local copy with the new changes in the remote repository. This action is known as pulling from the remote, because you use the command git pull.
https://docs.gitlab.com/ee/topics/git/terminology.html
2022-05-16T11:06:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.gitlab.com
Azure Cosmos DB's API for MongoDB (4.0 server version): supported features and syntax APPLIES TO: Azure Cosmos DB API for. By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: global distribution, automatic sharding, availability and latency guarantees, encryption at rest, backups, and much more. Protocol Support The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB. When using Azure Cosmos DB's API for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format *.mongo.cosmos.azure.com whereas the 3.2 version of accounts has the endpoint in the format *.documents.azure.com. Note This article only lists the supported server commands and excludes client-side wrapper functions. Client-side wrapper functions such as deleteMany() and updateMany() internally utilize the delete() and update() server commands. Functions utilizing supported server commands are compatible with Azure Cosmos DB's API for MongoDB. Query language support Azure Cosmos DB's API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options. Database commands Azure Cosmos DB's API for MongoDB supports the following database commands: Query and write operation commands Transaction commands Authentication commands Administration commands Diagnostics commands Aggregation pipeline Aggregation commands Aggregation stages Note $lookup does not yet support the uncorrelated subqueries feature introduced in server version 3.6. You will receive an error with a message containing let is not supported if you attempt to use the $lookup operator with let and pipeline fields. Boolean expressions Conversion expressions Set expressions Comparison expressions Note The API for MongoDB does not support comparison expressions with an array literal in the query. Arithmetic expressions String expressions Text search operator Array expressions Variable operators System variables Literal operator Date expressions Conditional expressions Data type operator Accumulator expressions Merge operator Data types Azure Cosmos DB's API for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0 benefit from this. In an upgrade scenario, documents written prior to the upgrade to version 4.0 will not benefit from the enhanced performance until they are updated via a write operation through the 4.0 endpoint. Indexes and index properties Indexes Index properties Operators Logical operators Element operators Evaluation query operators In/}] }). Array operators Comment operator Projection operators Update operators Field update operators Array update operators Update modifiers Bitwise update operator Geospatial operators Sort operations When using the findOneAndUpdate operation, sort operations on a single field are supported but sort operations on multiple fields are not supported. Indexing The API for MongoDB supports a variety of indexes to enable sorting on multiple fields, improve query performance, and enforce uniqueness. 
GridFS Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver. Replication Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands. Retryable Writes Cosmos DB does not yet support retryable writes. Client drivers must add the 'retryWrites=false' URL parameter to their connection string. More URL parameters can be added by prefixing them with an '&'. Sharding Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data. Sessions Azure Cosmos DB does not yet support server-side sessions commands. Time-to-live (TTL) Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections from the Azure portal. Transactions Multi-document transactions are supported within an unsharded collection. Multi-document transactions are not supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds. User and role management Azure Cosmos DB does not yet support users and roles. However, Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the Azure portal (Connection String page). Write Concern Some applications rely on a Write Concern, which specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in Using consistency levels to maximize availability and performance.
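For example, the retryWrites=false requirement and the portal-supplied connection string described above translate into a pymongo connection along the lines of the sketch below. The account name, key, database and collection names are placeholders; copy the real connection string from the Azure portal's Connection String page.

```python
"""Connection sketch for the API for MongoDB; all identifiers are placeholders."""
from pymongo import MongoClient

# Hypothetical 3.6+/4.0 account endpoint; note retryWrites=false, since
# retryable writes are not yet supported (see above).
CONNECTION_STRING = (
    "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/"
    "?ssl=true&retryWrites=false"
)

client = MongoClient(CONNECTION_STRING)
collection = client["mydb"]["orders"]

# Writes are effectively Quorum by default; any client write concern is ignored.
collection.insert_one({"_id": "order-1", "status": "created"})
print(collection.find_one({"_id": "order-1"}))
```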
https://docs.microsoft.com/en-gb/azure/cosmos-db/mongodb/feature-support-40
2022-05-16T13:06:37
CC-MAIN-2022-21
1652662510117.12
[]
docs.microsoft.com
-12-07 As we wind down 2016 and move into the holiday season, the remaining releases (today's, and the release in two weeks) will be relatively light and fluffy as snowflakes, as compared to many releases in the past year that introduced major features. One or two cool new features, a stockingful of bug fixes. You'll need to look elsewhere for a really big haul of presents in the waning weeks of 2016. But look on the bright side: we won't give you an ugly sweater. As anyone who has attended a ThousandEyes CONNECT event can attest, we only hand out great t-shirts... And if you're a last-minute type, live in the metro New York City and want one of those t-shirts, then procrastinate on your shopping this Wednesday, December 8th, and come on down to CONNECT NYC! Register here or just come on down to LMHQ in Manhattan and do day-of registration. Onward to the details of today's release... Reports The Transaction Step Time and Transaction Page Time metrics from the Timings tab of a Transaction test are now available in Reports. Create a pie chart of page timings: to show the relative sizes of the page timings. (Wow, our staging site's "Log In" page is slow! And don't even ask what the orange thing represents...) Or create a stacked area chart of step timings: to show the change over time of each step's timing. Useful for quickly spotting a page or step that's occasionally being naughty instead of nice, and blowing up the timing results: Endpoint Agent We've added new permissions available to a user's role that allow viewing the personally identifiable information (PII) returned by Endpoint Agents. The the PII is divided into two categories, each with a permission: machine and user information is covered by the View endpoint data that identifies users permission page title and URL information is covered by the View endpoint data that identifies visited pages permission Minor features & bug fixes The stocking stuffers... API Fixed an issue which caused the /alerts endpoint to fail to return the Agent list for DNS Server alerts Reports Fixed an issue which could cause the Agent to Agent tests to be missing from the Test filter in Reports widgets User Management Single sign-on users no longer need to set a local password upon activation Endpoint Agent The tooltips in Path Visualizations now include all wireless metrics Fixed an issue which caused the tooltip from the Page Title column in User Sessions to display "Unknown Page" Fixed an issue which caused systems with the same public IP address at one time in the past to be given the same organization name (whois) when a system moves to a network that maps to a different organization name When adding a filter in Views, the selector now opens automatically Tests Fixed an issue which allowed users with no test editing permissions to toggle the "Follow Redirects" checkbox of an HTTP Server test, even though the setting could not be saved. Fixed an issue in the Path Visualization view which could cause a negative Average Delay in the tooltip when mousing over a link Fixed a bug which generated an "Unable to process your request. Please try again later." error when creating a test. Miscellany A broken link on the Billing tab now correctly points to the Usage tab. Registration emails to users whose names were not entered upon initial account creation are now more aesthetically pleasing. Questions or comments? Give us the gift of your feedback, here at . No need to wrap it in pretty paper. Even if it's a lump of coal, we'll still treasure it--bow or no. 
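For readers who pull alert data programmatically, the /alerts endpoint mentioned in the API fix above can be queried roughly as follows. This is an illustrative sketch only: the v6 base URL, response keys, and basic-auth scheme are assumptions to verify against the current ThousandEyes API documentation, and the credentials are placeholders.

```python
"""Hypothetical query of the ThousandEyes /alerts endpoint."""
import requests

EMAIL = "user@example.com"      # placeholder account email
API_TOKEN = "your-api-token"    # placeholder API token

resp = requests.get(
    "https://api.thousandeyes.com/v6/alerts.json",  # assumed v6 base URL
    auth=(EMAIL, API_TOKEN),
    timeout=30,
)
resp.raise_for_status()

for alert in resp.json().get("alert", []):
    # Per the fix above, DNS Server alerts should now include their agent list.
    print(alert.get("ruleName"), len(alert.get("agents", [])))
```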
https://docs.thousandeyes.com/archived-release-notes/2016/2016-12-07-release-notes
2022-05-16T11:28:06
CC-MAIN-2022-21
1652662510117.12
[]
docs.thousandeyes.com
The Search Results Configuration settings define how search results are displayed in the List Search Simple Web Part. These settings provide you a way to customize what you prefer your users within your environment can see and interact with when they use List Search Simple. Listed in the table below are the sections of the web part settings that you can control to optimize search results for your organization. Define the columns and view options for the search results grid:
https://docs.bamboosolutions.com/document/search_results_configuration/
2022-05-16T12:05:15
CC-MAIN-2022-21
1652662510117.12
[]
docs.bamboosolutions.com
Registrations
Through registrations you can have capacity control of the attendance for a meeting. With this feature, for instance, you can limit how many people can attend the meeting, or you can know before the start of a meeting whether you need to find a bigger room. Enabling this feature will add a button so that participants can express their wish to go to the meeting. Depending on how this feature is configured:
- it's possible to define how many slots are available, controlling the maximum capacity for this meeting
- a custom registration form can be configured to request information from participants
- administrators can make invitations to other participants or to people who aren't registered on the platform
- it's possible to control attendance to the meeting through registration codes
Once participants confirm joining a meeting, they are asked whether they're representing a group and whether they want to show publicly that they're attending.
Enable registrations for a meeting
To enable registrations for a meeting:
- Go to the admin panel
- In the main sidebar, click the button for the space that you want to configure the component for. For instance, it could be "Processes", "Assemblies", or "Conferences"
- Click on "Meetings"
- Search for the meeting that you want to enable registrations for
- Click on the "Edit" button
- Change the "Registration type" field to "On this platform"
- Define how many slots are available in "Available slots for this meeting"
- Click on the "Update" button
- Click on the "Registrations" button
- Check the "Registrations enabled" checkbox
- Fill in the form
Registration form
Export all
It's possible to export registrations in multiple formats: CSV, JSON and XLSX (Excel). The exported data will have these fields (see the parsing sketch below):
- id: the registration id
- code: the registration code (if this feature is enabled)
- user/name: the name of the user
- user/email: the email of the user
- user/user_group: the group of the user, if she selected that she is representing a group when registering
Invitations
This feature allows you to invite attendees to a meeting. These could be already registered participants or people who don't yet exist on the platform.
Registration code
This feature allows you to check whether an attendee is registered for the meeting. She needs to provide her code, which gets entered in this form and checked against the database. It can return two kinds of responses:
- Registration code successfully validated.
- This registration code is invalid.
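As referenced in the export section above, a downloaded registrations file can be processed with a few lines of Python. This sketch assumes the CSV flavour of the export and a file named registrations.csv; the column names come from the field list above, and the delimiter may need adjusting to match your export.

```python
"""Count registrations per user group from a registrations export."""
import csv
from collections import Counter

with open("registrations.csv", newline="", encoding="utf-8") as fh:
    rows = list(csv.DictReader(fh))  # pass delimiter=";" if your export uses it

print(f"{len(rows)} registrations")

# How many attendees registered on behalf of a group? Nested user fields are
# flattened as "user/name", "user/email", "user/user_group" in the export.
groups = Counter(row.get("user/user_group") or "(no group)" for row in rows)
for group, count in groups.most_common():
    print(f"{group}: {count}")
```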
https://docs.decidim.org/en/admin/components/meetings/registrations/
2022-05-16T11:03:18
CC-MAIN-2022-21
1652662510117.12
[]
docs.decidim.org
Introduction
Welcome to the OpenClinica SOAP Web Services Guide. Newcomers to OpenClinica SOAP Web Services should read the overview provided below, and then refer to the documentation for the specific web service(s) of interest. The OpenClinica SOAP API is part of the OpenClinica-ws package and requires an installation separate from the standard OpenClinica-web deployment (unlike the REST web services, which are part of OpenClinica-web). See 'Using OpenClinica Web Services' for more info.
https://docs.openclinica.com/3-1-technical-documents/openclinica-web-services-guide/
2022-05-16T12:04:02
CC-MAIN-2022-21
1652662510117.12
[]
docs.openclinica.com
Using the Cxense Semantic Plug-in
If Cxense returns any tags, they are added to the content item as suggestions. If Cxense can't find any references to subjects that it knows about, then it will not return any tags and no suggestions are made.
If you are satisfied with the tags suggested by Cxense, you can just save the content item and continue working. If you are not satisfied with the suggestions, you can:
- Remove suggested tags
- Add other tags not suggested by Cxense
- Change the relevance values set by Cxense
http://docs.escenic.com/semantic-cxense-guide/1.4/usage.html
2022-05-16T12:11:01
CC-MAIN-2022-21
1652662510117.12
[]
docs.escenic.com
ListAssociations Returns all State Manager associations in the current AWS account and AWS Region. You can limit the results to a specific State Manager association document or managed node by specifying a filter. State Manager is a capability of AWS Systems Manager. Request Syntax { "AssociationFilterList": [ { "key": " string", "value": " string" } ], "MaxResults": number, "NextToken": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - AssociationFilterList One or more filters. Use a filter to return a more specific list of results. Note Filtering associations using the InstanceIDattribute only returns legacy associations created using the InstanceIDattribute. Associations targeting the managed node that are part of the Target Attributes ResourceGroupor Tagsaren't returned. Type: Array of AssociationFilter objects Array Members: Minimum number of 1 item. { "Associations": [ { "AssociationId": "string", "AssociationName": "string", "AssociationVersion": "string", "DocumentVersion": "string", "InstanceId": "string", "LastExecutionDate": number, "Name": "string", "Overview": { "AssociationStatusAggregatedCount": { "string" : number }, "DetailedStatus": "string", "Status": "string" }, "ScheduleExpression": "string", "ScheduleOffset": number, "TargetMaps": [ { "string" : [ "string" ] } ], "Targets": [ { "Key": "string", "Values": [ "string" ] } ] } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - Associations The associations. Type: Array of Association ListAssociations. Sample Request POST / HTTP/1.1 Host: ssm.us-east-2.amazonaws.com Accept-Encoding: identity X-Amz-Target: AmazonSSM.ListAssociations Content-Type: application/x-amz-json-1.1 User-Agent: aws-cli/1.17.12 Python/3.6.8 Darwin/18.7.0 botocore/1.14.12 X-Amz-Date: 20200325T143814: 2 Sample Response { "Associations": [ { "AssociationId": "fa94c678-85c6-4d40-926b-7c791EXAMPLE", "AssociationVersion": "1", "LastExecutionDate": 1582037438.692, "Name": "AWS-UpdateSSMAgent", "Overview": { "AssociationStatusAggregatedCount": { "Success": 3 }, "DetailedStatus": "Success", "Status": "Success" }, "Targets": [ { "Key": "tag:ssm", "Values": [ "true" ] } ] } ] } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
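The same request can be issued from Python with boto3. This is a minimal sketch using a placeholder filter value; it pages manually with NextToken rather than assuming a paginator.

```python
"""List State Manager associations, filtered by association name."""
import boto3

ssm = boto3.client("ssm", region_name="us-east-2")

kwargs = {
    "AssociationFilterList": [
        # Placeholder filter: associations created from this document name.
        {"key": "AssociationName", "value": "AWS-UpdateSSMAgent"},
    ],
    "MaxResults": 50,
}

while True:
    resp = ssm.list_associations(**kwargs)
    for assoc in resp.get("Associations", []):
        overview = assoc.get("Overview", {})
        print(assoc["AssociationId"], assoc.get("Name"), overview.get("Status"))
    token = resp.get("NextToken")
    if not token:
        break
    kwargs["NextToken"] = token
```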
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListAssociations.html
2022-05-16T13:17:12
CC-MAIN-2022-21
1652662510117.12
[]
docs.aws.amazon.com
Mule (dw::Mule)
This DataWeave module contains functions for interacting with Mule runtime engine. To use this module, you must import it to your DataWeave code, for example, by adding the line import * from dw::Mule to the header of your DataWeave script.
Types
https://docs.mulesoft.com/dataweave/2.4/dw-mule
2022-05-16T10:59:17
CC-MAIN-2022-21
1652662510117.12
[]
docs.mulesoft.com
Logging in If you are having problems logging in to the Curate console, file a ticket in our support portal. Please thoroughly explain the issues you are having so we can help you more quickly. Screenshots are also helpful. General information File a ticket in our support portal. Please thoroughly explain the issues you are having so we can help you more quickly. Screenshots are also helpful. Yes. If you do not see all of the content to which you should have access, please file a support ticket. If you find that your show is not listed in the Curate console, please file a support ticket and detail what shows you expect to see. Yes. You will be able to include full episodes, clips, previews, photo galleries, a description, and times and dates the show airs. Show items that are carried over automatically from Merlin include: - Video - Video description - Funder text - Official website link that links to your website - Full episodes - Clips - Previews No. For now, you must manage your video portal through the Media Manager console. This will change when station video portals launch on the new platform. There is no date for the launch yet. If you have a schedule that is due to go live on a future date, you must go to that date to view the schedule. View dates by clicking the calendar textbox in the upper right corner of the page. - The current day is highlighted in blue or has a blue tickmark. - Each day you have content scheduled is highlighted in green. When viewing an item highlighted in green, the text in the green box turns white, letting you know which one you are currently viewing. Any schedules in the past will appear at the top of the list while schedules in the future will appear beneath the current live schedule. Scheduling You can schedule items to change as often as you like. You can schedule a homepage item to change every few seconds if you want. Stations can program: - One of five of the featured carousel images in the PBS.org HomePage HeroCarousel. - Station contact information in the PBS.org Header Station Links module. - Up to 15 local videos in the PBS.org Videolanding Local Videos module. - Up to five social media links with the PBS.org HomePage Social Media Hashtags module. - Up to five videos on the Shows page using the PBS.org HomePage Shows module. Yes. Click into any collection and click the calendar textbox in the upper right corner of the page. - The current day is highlighted in blue or has a blue tickmark in the corner. - Each day you have content scheduled is highlighted in green. When viewing the items in green, the text in the green box turns white. No. Only schedules that are in DRAFT or PUBLISHED mode can be deleted. To remove the LIVE schedule, you must create a new schedule and make it LIVE to replace the current LIVE schedule. No. You need to use the Media Manager console to upload videos and web objects.
https://docs.pbs.org/display/CUR/FAQ
2022-05-16T12:26:36
CC-MAIN-2022-21
1652662510117.12
[]
docs.pbs.org
Interface TcpNioConnectionSupport - All Known Implementing Classes: DefaultTcpNioConnectionSupport, DefaultTcpNioSSLConnectionSupport - Functional Interface: - This is a functional interface and can therefore be used as the assignment target for a lambda expression or method reference. @FunctionalInterface public interface TcpNioConnectionSupport Used by NIO connection factories to instantiate a TcpNioConnectionobject. Implementations for SSL and non-SSL TcpNioConnections are provided. - Since: - 2.2 - Author: - Gary Russell Method Details createNewConnectionTcpNioConnection createNewConnection(SocketChannel socketChannel, boolean server, boolean lookupHost, @Nullable ApplicationEventPublisher applicationEventPublisher, String connectionFactoryName)Create a new TcpNioConnectionobject wrapping the SocketChannel. - Parameters: socketChannel- the SocketChannel. server- true if this connection is a server connection. lookupHost- true if hostname lookup should be performed, otherwise the connection will be identified using the ip address. applicationEventPublisher- the publisher to which OPEN, CLOSE and EXCEPTION events will be sent; may be null if event publishing is not required. connectionFactoryName- the name of the connection factory creating this connection; used during event publishing, may be null, in which case "unknown" will be used. - Returns: - the TcpNioConnection
https://docs.spring.io/spring-integration/api/org/springframework/integration/ip/tcp/connection/TcpNioConnectionSupport.html
2022-05-16T13:24:47
CC-MAIN-2022-21
1652662510117.12
[]
docs.spring.io
Connect Streamlit to Deta BaseConnect Streamlit to Deta Base IntroductionIntroduction This guide explains how to securely access and write to a Deta Base database from Streamlit Cloud. Deta Base is a fully-managed, fast, scalable and secure NoSQL database with a focus on end-user simplicity. This guide uses the deta Python SDK for Deta Base and Streamlit's secrets management. Sign up for Deta Base and sign inSign up for Deta Base and sign in First, you need to sign up for Deta Base. Once you have an account, sign in to Deta. When you sign in, Deta will create a default project and show you the project's Project Key and Project ID. Note down your Project Key and Project ID. Be sure to store your Project Key securely. It is shown only once, and you will need it to connect to your Deta Base. Click to see your Project Key Securely store your Project Key Add Project Key to your local app secretsAdd Project Key to your local app secrets Your local Streamlit app will read secrets from a file .streamlit/secrets.toml in your app's root directory. Create this file if it doesn't exist yet and add the Project Key (from the previous step) of your Deta Base as shown below: # .streamlit/secrets.toml deta_key = "xxx" Replace xxx above ☝️ with your Project Key from the previous step. deta to your requirements fileAdd deta to your requirements file Add the deta Python SDK for Deta Base to your requirements.txt file, preferably pinning its version (replace x.x.x with the version you want installed): # requirements.txt deta==x.x.x Write your Streamlit appWrite your Streamlit app Copy the code below to your Streamlit app and run it. The example app below writes data from a Streamlit form to a Deta Base database example-db. import streamlit as st from deta import Deta # Data to be written to Deta Base with st.form("form"): name = st.text_input("Your name") age = st.number_input("Your age") submitted = st.form_submit_button("Store in database") # Connect to Deta Base with your Project Key deta = Deta(st.secrets["deta_key"]) # Create a new database "example-db" # If you need a new database, just use another name. db = deta.Base("example-db") # If the user clicked the submit button, # write the data from the form to the database. # You can store any data you want here. Just modify that dictionary below (the entries between the {}). if submitted: db.put({"name": name, "age": age}) "---" "Here's everything stored in the database:" # This reads all items from the database and displays them to your app. # db_content is a list of dictionaries. You can do everything you want with it. db_content = db.fetch().items st.write(db_content) If everything worked out (and you used the example we created above), your app should look like this:
https://docs.streamlit.io/knowledge-base/tutorials/databases/deta-base
2022-05-16T12:54:03
CC-MAIN-2022-21
1652662510117.12
[]
docs.streamlit.io
Custom Lambda Rules (Amazon EC2 Example) This procedure guides you through the process of creating a Custom Lambda rule that evaluates whether each of your EC2 instances is the t2.micro type. AWS Config will run event-based evaluations for this rule, meaning it will check your instance configurations each time AWS Config detects a configuration change in an instance. AWS Config will flag t2.micro instances as compliant and all other instances as noncompliant. The compliance status will appear in the AWS Config console. To have the best outcome with this procedure, you should have one or more EC2 instances in your AWS account. Your instances should include a combination of at least one t2.micro instance and other types. To create this rule, first, you will create an AWS Lambda function by customizing a blueprint in the AWS Lambda console. Then, you will create a Custom Lambda rule in AWS Config, and you will associate the rule with the function. Topics Creating an AWS Lambda Function for a Custom Config Rule Sign in to the AWS Management Console and open the AWS Lambda console at In the AWS Management Console menu, verify that the region selector is set to a region that supports AWS Config rules. For the list of supported regions, see AWS Config Regions and Endpoints in the Amazon Web Services General Reference. In the AWS AWS Policy templates. For Runtime, keep Node.js. For Role name, type name. For Policy templates, choose AWS Config Rules permission. For Lambda function code function, keep the preconfigured code. The Node.js code for your function is provided in the code editor. For this procedure, you do not need to change the code. Verify the details and choose Create function. The AWS Lambda console displays your function. To verify that your function is set up correctly, test it with the following steps: Choose Test from the menu below Function overview and then choose Configure test event. For Template, choose AWS Config Configuration Item Change Notification. For Name, type a name. Choose Test. AWS AWS Config. The result token identifies the AWS Config rule and the event that caused the evaluation, and the result token associates an evaluation with a rule. This exception indicates that your function has the permission it needs to send results to AWS AWS Config console at In the AWS Management Console menu, verify that the region selector is set to the same region in which you created the AWS AWS Lambda function ARN, specify the ARN that AWS Lambda assigned to your function. Note The ARN that you specify in this step must not include the $LATESTqualifier. You can specify an ARN without a version qualifier or with any qualifier besides $LATEST. AWS Lambda supports function versioning, and each version is assigned an ARN with a qualifier. AWS Lambda uses the $LATESTqualifier for the latest version. For Trigger type, choose When configuration changes. For Scope of changes, choose Resources. For Resources, choose AWS EC2 Instance from the Resource Type dropdown list. In the Parameters section, you must specify the rule parameter that your AWS AWS Config receives evaluation results from your AWS - AWS Config evaluated your resources against the rule. The rule did not apply to the AWS resources in its scope, the specified resources were deleted, or the evaluation results were deleted. To get evaluation results, update the rule, change its scope, or choose Re-evaluate. Verify that the scope includes AWS EC2 Instance for Resources, and try again. 
No resources in scope - AWS Config cannot evaluate your recorded AWS resources against this rule because none of your resources are within the rule’s scope. To get evaluation results, edit the rule and change its scope, or add resources for AWS Config to record by using the Settings page. Verify that AWS Config is recording EC2 instances. Evaluations failed - For information that can help you determine the problem, choose the rule name to open its details page and see the error message. If your rule works correctly and AWS Config provides evaluation results, you can learn which conditions affect the compliance status of your rule. You can learn which resources, if any, are noncompliant, and why. For more information, see Viewing Configuration Compliance.
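The walkthrough above uses the Node.js blueprint; as an illustration of the same event-based evaluation, here is a minimal Python sketch of such a Lambda handler. It assumes the standard change-triggered event shape (invokingEvent, ruleParameters, resultToken) and a hypothetical desiredInstanceType rule parameter defaulting to t2.micro; it is not the blueprint code itself.

```python
"""Sketch of an event-based Custom Lambda rule evaluating EC2 instance types."""
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    # invokingEvent and ruleParameters arrive as JSON strings.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    params = json.loads(event.get("ruleParameters", "{}"))
    desired = params.get("desiredInstanceType", "t2.micro")  # hypothetical parameter name

    if (item["resourceType"] != "AWS::EC2::Instance"
            or item["configurationItemStatus"] == "ResourceDeleted"):
        compliance = "NOT_APPLICABLE"
    elif item["configuration"]["instanceType"] == desired:
        compliance = "COMPLIANT"
    else:
        compliance = "NON_COMPLIANT"

    # Report the result back to AWS Config; the result token ties this
    # evaluation to the rule invocation, as described above.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```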
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html
2022-05-16T12:58:21
CC-MAIN-2022-21
1652662510117.12
[]
docs.aws.amazon.com
Devo Stats The Devo Stats application analyzes different stats related to a single domain or multiple ones. Using this application, users can easily detect problems related to queries, users, injections, and other processes, and make decisions using the information displayed in its multiple widgets. Like any other Devo application, you can access it from the Applications option in the navigation panel once it is installed in your domain. To install the Devo stats application in your domain, please contact us. Upon installation, we can configure the application according to your needs. For example, you can decide to analyze data from a single domain or several ones. Go to the following sections to learn more about the actions you can perform in the application, and the information you can check in each of the application tabs:
https://docs.devo.com/confluence/ndt/v7.9.0/applications/devo-stats
2022-05-16T11:50:33
CC-MAIN-2022-21
1652662510117.12
[]
docs.devo.com
Download a report of your data
Export your reporting data from NS8. We let you download data from several pages in the platform. This data is exported as a Comma Separated Values (CSV) file that you can open in Microsoft Excel, or as a JavaScript Object Notation (JSON) file, a more advanced data format. Any downloaded files will reflect the data as it appears on the page. Therefore, make sure you configure your data correctly before you download it.
Prepare to download your data
Before you download any data, select and refine it. To do this, create a filter, adjust the date range, and then select the columns that you want.
Filter data
If there are any filters on the page that you don't want to impact your report, remove them before you download the data. To do this, select Delete. To add or change filters, follow the steps in Create a filter.
Adjust the date range
Before you download your data, select the range of data that you want to download from the bar at the top of the page. We recommend downloading data in increments no larger than six months to avoid issues with internet timeouts, especially if your business has a large amount of data. You can read more about adjusting date ranges for further clarification.
Customize columns
The Customize Columns menu allows you to add or remove data columns in a page view, which is useful for adding or removing information in your report prior to downloading the data. For example, the columns containing campaign data (Campaign, Latest Campaign, First Campaign) are useful to add if you are researching the relationship between campaigns and orders. If you are researching fraud trends, you may want to add columns like our score to your view.
Download report data
Once you have configured the data to your liking, select Download and your browser will download the data to your device. Depending on your browser settings, the data file will most likely be saved to the Downloads folder on your computer.
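Once a CSV report has been downloaded, it can also be inspected outside of Excel. The short sketch below assumes a file saved as ns8-report.csv; the columns will simply be whatever you chose under Customize Columns.

```python
"""Quick look at a downloaded NS8 CSV report."""
import csv

with open("ns8-report.csv", newline="", encoding="utf-8") as fh:
    reader = csv.DictReader(fh)
    rows = list(reader)

print(f"{len(rows)} rows, columns: {reader.fieldnames}")
```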
https://docs.ns8.com/docs/downloading-a-report-of-your-data
2022-05-16T12:04:44
CC-MAIN-2022-21
1652662510117.12
[]
docs.ns8.com
Create a Traffic Distribution Profile An SD-WAN policy rule references a Traffic Distribution profile to distribute sessions and to fail over to a better path when path quality deteriorates. Based on your SD-WAN configuration plan, create the SD-WAN Traffic Distribution Profiles you need based on how you want the applications in your SD-WAN policy rules to be session loaded and to fail over. - Ensure you already configured the Link Tags in an SD-WAN interface profile and committed and pushed them. The Link Tags must be pushed to your hubs and branches in order for Panorama™ to successfully associate the Link Tags you specify in this Traffic Distribution profile to an SD-WAN interface profile. - Select aDevice Group. - Create a Traffic Distribution profile. - SelectObjectsSD-WAN Link ManagementTraffic Distribution Profile - Adda Traffic Distribution profile byNameusing a maximum of 31 alphanumeric characters. - SelectSharedonly if you want to use this traffic distribution profile across all Device Groups (both hubs and branches). - Select one traffic distribution method and add a maximum of four Link Tags that use this method for this profile. If multiple physical interfaces have the same tag, the firewall distributes matching sessions evenly among them. If all paths fail a health (path quality) threshold, the firewall selects the path that has the best health statistics. If no SD-WAN links are available (perhaps due to a blackout), the firewall uses static or dynamic routing to route the matching packets.If a packet is routed to a virtual SD-WAN interface, but the firewall cannot find a preferred path for the session based on the SD-WAN policy’s Traffic Distribution profile, the firewall implicitly uses the Best Available Path method to find the preferred path. The firewall distributes any application sessions that don’t match an SD-WAN policy rule based on the firewall’s implicit, final rule, which distributes the sessions in round-robin order among all available links, regardless of the Traffic Distribution profile.If you prefer to control how the firewall distributes unmatched sessions, create a final catch-all rule to Distribute Unmatched Sessions to specific links in the order you specify. - Best Available Path—Addone or moreLink Tags. During the initial packet exchanges, before App-ID has classified the application in the packet, the firewall uses the path in the tag that has the best health metrics (based on the order of tags). After the firewall identifies the application, it compares the health (path quality) of the path it was using to the health of the first path (interface) in the first Link Tag. If the original path’s health is better, it remains the selected path; otherwise, the firewall replaces the original path. The firewall repeats this process until it has evaluated all the paths in the Link Tag. The final path is the path the firewall selects when a packet arrives that meets the match criteria.When a link becomes unqualified and must fail over to the next best path, the firewall can migrate a maximum of 1,000 sessions per minute from the unqualified link to the next best path. For example, suppose tunnel.901 has 3,000 sessions; 2,000 of those sessions match SD-WAN policy rule A and 1,000 sessions match SD-WAN policy rule B (both rules have a traffic distribution policy configured withBest Path Available). If tunnel.901 becomes unqualified, it takes three minutes to migrate the 3,000 sessions from the unqualified link to the next best path. 
- Top Down Priority—Addone or moreLink Tags. The firewall distributes new sessions (that meet the match criteria) to links using the top-to-bottom order of theLink Tagsyou added. The firewall examines the first tag configured for this profile, and examines the paths that use that tag, selecting the first path it finds that is qualified (that is at or below the Path Quality thresholds for this rule). If no qualified path is found from that Link Tag, firewall examines the paths that use the next Link Tag. If the firewall finds no path after examining all paths in all of the Link Tags, the firewall uses theBest Available Pathmethod. The first path selected is the preferred path until one of the Path Quality thresholds for that path is exceeded, at which point the firewall again starts at the top of the Link Tag list to find the new preferred path.If you have only one link at the hub, that link supports all of the virtual interfaces and DIA traffic. If you want to use the link types in a specific order, you must apply a Traffic Distribution profile to the hub that specifiesTop Down Priority, and then order the Link Tags to specify the preferred order. If you apply a Traffic Distribution profile that instead specifiesBest Available Path, the firewall will use the link, regardless of cost, to choose the best performing path to the branch. In summary, Link Tags in a Traffic Distribution profile, the Link Tag applied to a hub virtual interface, and aVPN Failover Metricin an SD-WAN Interface Profile work only when the Traffic Distribution profile specifiesTop Down Priority. - Weighted Session Distribution—Addone or moreLink Tagsand then enter theWeightpercentage for eachLink Tagso that the weights total 100%. The firewall performs session load distribution between Link Tags until their percentage maximums are reached. If there is more than one path in the Link Tag, the firewall distributes equally using round-robin until the path health metrics are reached, and then distributes sessions to the other member(s) that are not at the limit. - Optional) After adding Link Tags, use theMove UporMove Downarrows to change the order of tags in the list, so they reflect the order in which you want the firewall to use links for this profile and for the selected applications in the SD-WAN policy rule. - ClickOK. - CommitandCommit and Pushyour configuration changes. - Commityour changes. Recommended For You Recommended Videos Recommended videos not found.
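To make the three distribution methods easier to compare, here is a small conceptual model in Python. It is illustrative only (not PAN-OS code or an SDK call): the link names, tags, weights, and health scores are made up, and "best available path" is reduced here to picking the healthiest link.

```python
"""Conceptual model of the three SD-WAN traffic distribution methods."""
import random

links = [
    {"name": "tunnel.901", "tag": "broadband", "qualified": True,  "health": 0.92},
    {"name": "tunnel.902", "tag": "broadband", "qualified": False, "health": 0.41},
    {"name": "tunnel.903", "tag": "lte",       "qualified": True,  "health": 0.77},
]

def best_available_path(links):
    """Pick the healthiest path regardless of tag order (simplified)."""
    return max(links, key=lambda link: link["health"])

def top_down_priority(links, ordered_tags):
    """Walk tags in order and return the first qualified path found."""
    for tag in ordered_tags:
        for link in links:
            if link["tag"] == tag and link["qualified"]:
                return link
    return best_available_path(links)  # fall back, as described above

def weighted_session_distribution(tag_weights):
    """Pick a tag for a new session according to its configured weight (%)."""
    tags, weights = zip(*tag_weights.items())
    return random.choices(tags, weights=weights, k=1)[0]

print(top_down_priority(links, ["broadband", "lte"])["name"])   # tunnel.901
print(weighted_session_distribution({"broadband": 70, "lte": 30}))
```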
https://docs.paloaltonetworks.com/sd-wan/2-1/sd-wan-admin/configure-sd-wan/create-traffic-distribution-profile
2022-05-16T12:16:51
CC-MAIN-2022-21
1652662510117.12
[]
docs.paloaltonetworks.com
OpenID Connect authentication After you install NiFi, you can enable authentication through OpenID Connect. With OpenID Connect authentication, when a user attempts to access NiFi, NiFi redirects the user to the corresponding identity provider to log in. After the user logs into the identity provider, the identity provider sends NiFi a response that contains the user's credentials. With knowledge of the user's identity, NiFi can now authenticate the user. To enable authentication through OpenID Connect, set the following OpenID Connect related properties in the nifi.properties file. Then, restart NiFi for the changes in the nifi.properties file to take effect. If NiFi is clustered, configuration files must be the same on all nodes.
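As a convenience, the nifi.properties edit can be scripted; the sketch below updates (or appends) three commonly documented OIDC keys. The property names and file path are assumptions to verify against your CFM/NiFi release, and the values shown are placeholders. Remember to apply the same change on every node in a cluster and restart NiFi afterwards.

```python
"""Set OpenID Connect properties in nifi.properties (illustrative only)."""
from pathlib import Path

OIDC_PROPS = {
    "nifi.security.user.oidc.discovery.url":
        "https://idp.example.com/.well-known/openid-configuration",
    "nifi.security.user.oidc.client.id": "nifi",
    "nifi.security.user.oidc.client.secret": "change-me",
}

path = Path("/opt/nifi/conf/nifi.properties")  # adjust to your installation
lines = path.read_text().splitlines()
seen = set()

for i, line in enumerate(lines):
    key = line.split("=", 1)[0].strip()
    if key in OIDC_PROPS:
        lines[i] = f"{key}={OIDC_PROPS[key]}"  # overwrite existing entries
        seen.add(key)

# Append any keys that were not already present in the file.
lines += [f"{k}={v}" for k, v in OIDC_PROPS.items() if k not in seen]
path.write_text("\n".join(lines) + "\n")
print("Updated; restart NiFi (all nodes in a cluster) for the change to apply.")
```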
https://docs.cloudera.com/cfm/2.1.4/cfm-security/topics/cfm-security-openidconnect-authentication.html
2022-05-16T12:49:01
CC-MAIN-2022-21
1652662510117.12
[]
docs.cloudera.com
Method GtkWidget.get_action_group
Declaration
GActionGroup* gtk_widget_get_action_group (GtkWidget* widget, const gchar* prefix)
Description
Retrieves the GActionGroup that was registered using prefix. The resulting GActionGroup may have been registered to widget or any GtkWidget in its ancestry. If no action group was found matching prefix, then NULL is returned.
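For illustration only (this usage sketch is not part of the GTK reference page): a caller might retrieve the action group that an ancestor window exported under the "win" prefix and activate one of its actions. The my_widget variable and the "save" action name below are hypothetical.

/* Look up the "win" action group registered on an ancestor widget
 * (for example a GtkApplicationWindow) and activate an action in it. */
GActionGroup *group = gtk_widget_get_action_group (my_widget, "win");
if (group != NULL)
  g_action_group_activate_action (group, "save", NULL);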
https://docs.gtk.org/gtk3/method.Widget.get_action_group.html
2022-05-16T12:26:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.gtk.org
Release Notes 8.7.7¶ Key Issue Type Summary FR8340 Improvement Added DB and API time to WebRequest summaries in Crash Protection emails FR8341 Improvement Added Entries for DB and API time in the request log FR8342 Bug Fixed NullPointerException that occurs when using GraphQL with Spring-Boot 8.7.6¶ Key Issue Type Summary FR8309 Improvement In web request details, service time has been renamed to API time FR8307 Improvement Included CFHTTP in the request API time 8.7.5¶ Key Issue Type Summary FR8301 Improvement Added the ability to preview Headers in the transaction history summary views via a settings page FR8287 Bug Added limits for linked transactions to protect against increased memory usage FR8285 Bug Added support for ColdFusion 2021 solr 8.7.4¶ Key Issue Type Summary FR8258 Improvement Removed full stack trace from about page on license failure. FR8255 Improvement Permissions errors causing the license key to fail to persist now show a warning notification on the About page. FR8254 Improvement Add sources in debugger now closes when a source is added. FR8252 Improvement When a JDBC request is truncated, a message explaining how to disable or expand truncation limits has been added. FR8250 Improvement Added check to debug library not found page to see if the library has been added. FR8235 Improvement Included DB Time and service time into the webrequest and transaction summary view - speeding up root cause focus. FR8289 Bug Timestamp objects passed in via CF query param honours the timezone set in the data object. FR8288 Bug Timestamp objects passed in via CF query param are now wrapped in single quotes when processed by FR. FR8281 Bug "No source directories" message in Debugger page no longer gets removed if breakpoints are set. FR7970 Bug On the WebRequest activity page, request kill now attempts interrupt before forcefully killing the thread. 8.7.3¶ Key Issue Type Summary FR8253 Improvement Enhance error message in email settings page to explain how to debug emails not sending. FR8249 Improvement Make Crash Protection emails enabled by default. FR8153 Improvement Improve Instance Manager failed install pages. FR8269 Bug Upgrade Java.mail library to support newer TLS versions and cyphers. FR8232 Bug UI Fix - Debugger does not show breakpoints if debug email alerts are disabled. 8.7.2¶ Key Issue Type Summary FR8259 Task Upgrade ASM to 9.2. FR8247 Improvement Improvements to support chat user experience. FR8246 Improvement Improvements to support chat usability. FR8260 Bug Fix an issue where FR causes an exception to occur on startup in the OSGi log. Bug Fix a case where MongoDB datasources use an incorrect pivot. Bug Fix an issue with a menu entry appearing incorrectly. 8.7.1¶ Key Issue Type Summary FR8175 Improvement Support chat improvements. FR8224 Improvement Performance and timing improvements to Redisson async tracking. FR8227 Improvement In the case where the connection has been lost, queued datapacks are now sent oldest first. FR8231 Bug Fix an issue where NPEs are thrown in certain cases with startup delay configured. FR8225 Bug Fix an issue where HTTP client connections leak on Lucee servers. FR8223 Bug Fix an issue where JDBC connections using the Datastore to establish connections are not tracked. FR8222 Bug Fix connection properties not being tracked for DataDirect JDBC driver connections. FR8221 Bug Fix CF_SQL_TIME not displaying correctly in FusionReactor. FR8215 Bug Fix an issue where the true value of NVARCHAR is not rendered in JDBC requests. 
Bug Fix OkHttp transactions to not include a port in the description when one is not set. 8.7.0¶ Key Issue Type Summary FR8220 New Feature Support Java 16. FR8220 New Feature Support Java 15. FR8154 New Feature Support ColdBox Elasticsearch (cbElastic 2.0.0+). FR8214 Improvement Support running Datadog and FusionReactor in the same Java process. FR8195 Improvement Improve debug logging for JDBC tracking. FR8160 Improvement Improve HttpClient tracking. FR8194 Improvement Add support to track Vertx 4. FR8210 Improvement Upgrade ASM to 9.1. FR8219 Bug Fix HttpClient tracking JSON POST/PUT data. FR8218 Bug Fix an issue where some docker instances were not being detected as "Docker". FR8184 Bug Fix an issue where clicking support chat button with no internet connection causes error. 8.6.0¶ Key Issue Type Summary FR8145 New Feature Support Adobe ColdFusion 2021. FR8162 New Feature Support Adobe ColdFusion 2021 in Instance Manager. FR8156 New Feature Support Adobe ColdFusion 2021 CF Metrics, requires perfmon cf module. FR8158 New Feature Support running SeeFusion and FusionReactor. FR8101 New Feature Add support to track the Jest - Elasticsearch Java Client. FR7615 New Feature Add support to track the official Elasticsearch - Java REST Client. FR8090 Improvement Add support for Redisson 3.13.1 to 3.13.6. FR8157 Improvement Upgrade ASM to 9.0. FR8177 Improvement Send Spring Boot transactions to the cloud. FR8155 Improvement Make licensing more robust when looking up the hostname and IP address. FR8061 Improvement Add Freshchat into FusionReactor (available from the top right navigation bar). FR8167 Bug Fix a performance issue where licensing requests can be very slow when reverse DNS is slow. FR8144 Bug Fix a UI redraw issue when setting a variable in the debugger or using "CF Set" that would not update the watches value immediately. FR8098 Bug Fix a NullPointerException on shutdown from CloudStateLogger. 8.5.0¶ Key Issue Type Summary FR8135 Improvement Add support to track CFTHREAD tags / threads on Adobe ColdFusion servers. FR8114 Improvement Add support to track all http headers on Adobe ColdFusion servers. FR8109 Improvement Add support to show Event Snapshots in the Cloud. FR8096 Improvement Add support to track http headers on undertow. FR8094 Improvement Send profiles information to the Cloud for slow requests. FR8055 Improvement Add support for OkHttp 4.4.1 to 4.8 tracking. FR8053 Improvement Add support for Micronaut version 2.x APIs. FR8125 Improvement Add support for Kafka 2.4.x and 2.5.x tracking FR8140 Bug Fix HTTPClient tracking on Adobe ColdFusion servers. FR8130 Bug Fix Java 14 memory tracking. FR8126 Bug Fix Enterprise Dashboard lookup calls when the instance name contains the string "login". FR8100 Bug Fix Memory Overview page when for 1 hour view. FR8079 Bug Fix Jedis tracking for version 3.3.0. 8.4.0¶ Key Issue Type Summary FR8083 New Feature JSON tracking for GET requests. FR8057 New Feature Add filter to enterprise dashboard. FR8091 Improvement Allow FR agent to send more than 2 custom metrics. FR8066 Improvement Add hostname and ipaddress of instances from EDDS. FR8062 Improvement Remove the final page of the installer and automatically show Instance Manager. FR8060 Improvement Improce Lucee line performance so it usespagePoolClear function, rather than resseting the engine, this preventsCFML pages being aborted. 
FR8054 Improvement Bump ASM to 8.0.1 – to support newest Java versions FR7885 Improvement Enable System Resources when a CF server is running as a non admin user on windows. FR8073 Improvement Decompililation support for java 10-15 FR8085 Bug Fix data truncation for massive JDBC statements / BLOBS going to FusionReactor Cloud. FR8071 Bug LDAP support is available in FRAM, FR8069 Bug Add setting to enable status code to be removed from WebMetrics page due to C2 Compiler crashing. FR8059 Bug Fix an issue where an instance is selected in a group in Enterprise Dashboard, the other instances in the group do notupdate metrics. FR7983 Bug Fix an IllegalStateException for the MSSQL driver. FR8080 Bug Fix an UnsupportedOperationException when running on CommandBox/ColdFusion. 8.3.2¶ Key Issue Type Summary FR8032 Bug Docker discover fails on some AWS deployments. FR8030 Bug Error Exclusions no longer works on FR. 8.3.0¶ Key Issue Type Summary FR8015 Improvement Support licensing even when hostname lookup fails. FR8007 Improvement Enable event snapshot for Adobe ColdFusion servers. FR7959 Improvement Enable Event Snapshots by default. FR7971 Improvement Add debug / native library support for ARM 64 bit (aarch64) FR7794 Improvement Add support for 64-bit ARM (aarch64) system metrics and system CPU. FR7962 Improvement Improve in the startup performance of FR agent. FR7956 Improvement The Apple Mac installer is now signed so it can run directly after download, without flagging Gatekeeper. FR7932 Improvement Improve the profiler scrolling to better fit the browser size. FR7876 Improvement Improve ColdBox error tracking to track errors handled by Coldbox. FR7749 Improvement Improve Lucee line performance by using the pagePoolClear function, rather than resseting the engine. FR7423 Improvement Improved the the datasource information inJDBC details, for Oracle databases when using a TNSnames.ora file, hasbeen resolved. FR740 Improvement Add CPU Thresholds for Crash Protection FR8010 Bug Fix an issue where the Lucee Line Performance transformer does not pick up changes to a compiled page. FR7997 Bug Fix an issue where the License key is not stored into reactor.conf if license/ dir is missing. FR7984 Bug Fix an issue with a Memory Leak when running CommandBox 4.9 prerelease. FR7979 Bug Fix an issue with the Lucee ColdBox Transaction naming. FR7964 Bug Fix an issue where CommandBox environment detected as ServerType UNKONWN. FR7958 Bug Fix WildFly Session tracking. FR7948 Bug Fix an issue with proxy support for the Enterprise Dashboard. FR7942 Bug Fix an issue with the debugger native library not opening on MacOS Catalina (10.15) due to new restrictions on software. FR7934 Bug Add support for middle click on FR buttons. FR7918 Bug Corrects an issue detecting Docker when running within a Kubernetes cluster. FR7915 Bug Fix an issue with FR causing exceptions to occur in the OSGI log. FR7914 Bug Fix an issue where Debug emails have a link to the OLD debugger UI not the new one. FR7901 Bug Fix an issue where Edit breakpoints dialog would reset the fire count to 1 and not keep the the selected values. FR7899 Bug Fixes a bug with Ephemeral Instances where large pages could cause the tunnel to become unresponsive. FR7895 Bug Fix an issue where the Enterprise Dashboard dials and bars reset from zero every refresh. FR7889 Bug Fix an NPE adding an FR 8.3 instance to an 8.3 ED instance via manage servers- FR7877 Bug Fix an issue in the Jedis tracking in FR. FR7872 Bug Fix an NPE when viewing Stack Trace All. 
FR7871 Bug Fix an issue with the ArchiveViewer not working sometimes. FR7868 Bug Fix an IllegalArgumentException in cflog tracking. FR7866 Bug Fix an issue where the Event Snapshots would generate on the incorrect log level. FR7861 Bug Fix an issue with Redisson PING command tracking. FR7842 Bug Fix a ClassCastException in Jersey tracking. FR7838 Bug Fix an issue where the Server Info would show the incorrect build number for ColdFusion after it was updated. FR7826 Bug Fix an issue where adding the FR agent twice would cause the server being monitored to fail to start (in some cases) due to a race condition. 8.2.3¶ Key Issue Type Summary FR7953 Improvement Track RMI calls in Java 6–8. FR7952 Bug Fix the Enterprise Dashboard proxy so that it honors the 'use proxy' setting for local connections. FR7951 Bug Fix an ArrayIndexOutOfBounds exception in licensing on some RedHat operating systems. 8.2.2¶ Key Issue Type Summary FR7937 Improvement Improve performance of the status code part of the WebMetrics page. FR7945 Bug Fix an issue detecting Docker when running within a Kubernetes cluster. FR7941 Bug Fix an issue with the Enterprise Dashboard's Ephemeral Instances where large pages could cause the tunnel to become unresponsive. This occurred when viewing long logs in FusionReactor. FR7925 Bug Fix an issue where the OSGI log would contain errors when using a JSON Content-Type but returning non-JSON content. 8.2.1¶ Key Issue Type Summary FR7887 Bug Fix a performance issue with System Monitor and Session tracking on ColdFusion servers which have the Redis session store enabled. This issue could cause high CPU usage. FR7858 Bug Fix an issue whereby LDAP roles could be checked in a random order. This could mean the user could log in as Observer when they are in the Admin group too. FR7850 Bug Fix an issue where Redisson PING command tracking would create transactions with empty descriptions and 0ms time. 8.2.0¶ Key Issue Type Summary FR7262 New Feature Added support for LDAP user authentication and group management. FR7818 Improvement Add support for Microsoft SQL driver 7.2.2 / Java 11 driver. FR7768 Improvement Add support for Spring Boot REST APIs. FR7763 Improvement Add support for the Micronaut REST server. FR7720 Improvement Add tracking for the okHTTP client. FR7795 Bug Fix a NullPointerException in the heartbeat plugin which sometimes occurs on startup (very rarely). FR7839 Bug Fix a NullPointerException on shutdown in the JSON tracking. FR7831 Bug Fix the debugger so that it correctly displays classes in the default package. FR7830 Bug Fix an issue where the debugger breakpoint window would not update the state of the breakpoint when a breakpoint is triggered. FR7829 Bug Fix an issue where the debugger breakpoint window doesn't update the state when stepping over or continuing. FR7808 Bug Fix an issue where the debugger classes and source tree would reset for no reason. FR7804 Bug Fix an issue where Event Snapshot files could become corrupt if the same error occurred at the exact same time on more than 1 thread. FR7786 Bug Fix Kafka tracking for Kafka clients 2.3.0. FR7784 Bug Fix a Lucee 5.2.8+ session tracking issue where session creation was tracked twice when the SessionRotate function was used. FR7777 Bug Fix an issue where Redisson CONNECT commands were not tracked in FR. FR7766 Bug Fix an issue where Jedis tracking would display passwords in FR. FR7750 Bug Fix an issue where the Redisson time tracked was incorrect due to async handling.
FR7546 Bug Fix an html injection issue when viewing variables some object types in the debugger (toString() method had to return HTML). 8.1.1¶ Key Issue Type Summary FR7828 Bug Fix transaction naming with ColdBox 5.5. FR7821 Bug Fix a NullPointerException with AMF requests. FR7812 Bug Fix the Debug UI when using IE11. FR7801 Bug Fix a race condition in the Debug UI which could cause certain sections to not update. FR7800 Bug Fix instance manager so that it works when using IE 11. 8.1.0¶ Key Issue Type Summary FR7678 Obsolete FusionReactor has removed its 32 bit installers for 8.1.0, we only support manual install steps on 32bit operating systems. FR7286 New Feature Add support for Ephemeral Instances, like docker, to auto register with the Enterprise dashboard. FR7679 New Feature Enterprise EDDS: Status page for client and server FR7701 Improvement Improve the Debugger UI so that it looks morelike an IDE and is easier to use. This allows the loaded classes to bedecompiled making it easier to set breakpoints aswell as many moreimprovements. FR7164 Improvement Upgrade Highcharts to version 7. FR7662 Improvement Upgrade jQuery to version 3 as older versions are vulnerable to Cross-site Scripting (XSS) attacks- FR7698 Improvement Improve the time to switch license from a cloud license to a none cloud license. FR7702 Improvement Improve the transaction details page so that the event snapshot and profiler results have more space on the page. FR7683 Improvement Ensure licensing changes reset the session_id used for licensing requests. This improves license handling on the server side. FR7762 Improvement Improve the rendering of transactions in the top of the transaction details when no query params exist. FR7677 Improvement Add a link to the License Connection banner to include a link to a technote to help resolve issues. FR7642 Improvement Stop tracking FR resources in the PageUsageTracker on real FR pages. FR7635 Improvement Use the fernflower decompiler for java 8 or newer. This compiler is faster and more reliable for new class files. FR7293 Improvement Improve the HTTPClient transactions to include session information and other meta data. FR7734 Improvement Change the default fire count to 1 for the debuggers default instead of always. FR7735 Improvement Ensure HTTPClient calls that return 50x codesare tracked as errors same as FusionRequests and appear in the correctErrors page. FR7733 Improvement Add a white list entry to fr-osgi.conf to support debugging FR code with nerdvison java client. FR7610 Improvement Allow the decompile from the cloud to use the CFML source if the FR instance can find it. FR5416 Improvement Fix a performance issue with the decompiler where it was very slow on some files. FR7637 Improvement Fix a performance issue where the logging pointcuts would cause class loading performance issues. FR7582 Improvement Fix a performance issue where the ED menu (top right) would take a long time to be built. FR7633 Improvement Fix a performance issue where the ‘stackTraceall’ for the running requests is slower than the complete ‘stacktraceall’ for all threads. FR6118 Improvement Fix a performance issue where large uploads would be a lot slower with FR installed compared to without. FR7631 Bug Fix a bug in Json Data Tracker Plugin where it showed empty data when application/json header is set for a get request- FR7630 Bug Fix a bug in the Json Data Tracker Plugin where it would not handle smile compressed data. 
FR7772 Bug Fix a bug in Json Data Tracker Plugin where it would handle invalid json of a single byte. FR7693 Bug Fix a bug where Json Data Tracker Plugin would look for exact content type and ignore any with character set at the end. FR7765 Bug Fix a bug where Java would get stuck in java.util.zip.Inflater.inflateBytes(Native Method) when handling invalid Json Data. FR7571 Bug Fix a bug where FR would break when you have a Cookie header and no value FR7545 Bug Fix a bug where the debugger didnt show the classname for fields, variables in some cases. FR7544 Bug Fix a bug where the debugger would break with some Lucee versions as CGI scope .size() throws an NullPointerException. FR7747 Bug Fix an issue where the datasource name is very large for CFMAIL tags and can cause issues for both memory and sendingthe data to the cloud. FR7730 Bug Fix a bug where JDBC transactions would neverclose the underlying Statement. This could cause memory leaks with some JDBC drivers (DB2). FR7728 Bug Fix a bug where the daily, weekly and monthly reports would still be sent if if the license expired. FR7725 Bug Fix a ConcurrentModificationException bug inthe licensing request code which could cause the first license requestafter startup to fail. FR7724 Bug Fix a UI issue in IE11 where the top menu (About/Logs/Plugins) wouldn’t work. FR7719 Bug Fix an issue in the Enterprise Dashboard where you would get a Javascript error when the last ED instance stopping was removed. FR7716 Bug Remove the license key, which was visible to observer user, from the odl.log file. FR7710 Bug Fix a bug where FR upgrade banner warning is not dismissable- FR7708 Bug Fix an issue where the start_ts has been removed from licensing requests in FR 8.0.1 + 8.0.2. FR7676 Bug Fix a bug where the Kafka mixin would fail with NoSuchField on Kafka 2.2.0. FR7632 Bug Fix an issue where the legacy JDBC Wrapperonly loads on Java 8 and not on Java 9, 10, 11. This could affect somefallback logic if customers had to enable the legacy wrapper instead ofthe default, faster, mixins. FR7711 Bug Fix a NullPointerException in the thread list, if the thread finished at the correct time. FR7462 Bug Fix a NullPointerException on TransactionHistory page when viewing all transactions but filtering on a subflavorwhich JDBC requests doesn’t understand. 8.0.2¶ Key Issue Type Summary FR7689 Improvement Add support for FusionReactors licensing notifications showing HTML links. FR7688 Improvement Ensure the license session_id is cleared when the license key changes. FR7687 Improvement Add a link to the support documentation in the the License Connection notification. FR7681 Improvement Increase ASM version to 7.1 FR7665 Improvement Add link to FR 8 specific Third Party License agreement, but its currently the same as FR 7. FR7690 Bug Fix an issue where, after upgrading to FR 8with a manual activated license, FR would become active (for a shortperiod) when it should not. FR7682 Bug Fix Kafka tracking with Kafka version 2.2.0. FR7674 Bug The JSON capture feature of FR is showing empty data when application/json header is set but no body exists. FR7654 Bug Fix the debuggers Resume Thread button so it works on Windows. 8.0.1¶ Key Issue Type Summary FR7641 Bug Fix an issue where the debuggers UI would scale incorrectly after moving the slider / separators. FR7640 Bug Fix a performance issue where the logging capture feature in FusionReactor could cause ColdFusion startup slowness. 8.0.0¶ Key Issue Type Summary FR7247 New Feature Support the monitoring of Spring Sessions. 
FR7443 New Feature Support the capturing of variables of the stack when transactions fail. FR7137 New Feature Support for filtering the stacktrace all based on the locks currently being held or waited on. FR7279 New Feature Support capture of JSON request and responses. FR7477 New Feature Support for Java 11 (Oracle and OpenJDK). FR7474 New Feature Capture log statements of slf4j, apache commons and log4j frameworks and CFLOG tags. FR7589 New Feature Capture log statements via CFLOG tags. FR7313 New Feature Add a new cloud status page to help support and customer track down connection issues. FR7611 New Feature Add support for tracking Jedis v2 and v3. FR7473 New Feature Support for MongoDB drivers > 3.8.0 FR7408 Improvement UI pages with tabs maintain the selection on refresh. FR7623 Improvement Improve the text fields on the debugger dialog to add more meaning to possible options. FR7332 Improvement Improve the internal logging of Cloud based requests (IRs) to include more meta data. FR7442 Improvement Use CF 2018 Monitor class to track all CF metrics. FR7436 Improvement About page to show the licensing exceptions (in full) to the user. FR7378 Improvement Add additional information to the Web Metrics summary tables to show Recent WebRequest count, JDBC count and error count. FR7371 Improvement Improve the email alerts from the enterprisedashboard when a server comes online to include much more informationabout the server. FR7204 Improvement Improve the robustness of the Cloud datapack shipping so that it can handle errors more effectively. FR6775 Improvement Improve licensing so that the license serverhandles all state and messages so that these can be changed and updatedwithout needing new client functionality or changes. FR7561 Improvement Improve the debugger exception error messages when the debugger doesn’t have exception support enabled. FR7560 Improvement Add support for Amazon Corretto 8. FR7513 Improvement Show the Max MetaSpace value if its been set for the JVM. FR7512 Improvement Add the actual exception type to the historypages so that the user can see what type of exception occurs withoutneeding to use transaction details page. FR7534 Improvement Make the server discovery code discover FRAM instances as FRAM server type instead of UNKNOWN. FR6548 Improvement Improve the left menu of FR so that uncommon entries moved into sub menus and the main menu is more compact. FR7548 Improvement Include ENV variables in license request data so that the cloud can filter based on this data. FR7404 Improvement Make threads holding locks now have direct links to their stack trace via lock / thread ID. FR7531 Bug Fixed a lock contention issue where JDBC, with no memory tracking enabled, would block on the JDBC by Memory series. FR7509 Bug Fixed an issue where the debugger would notalways show all locks currently held by a thread when using the “Suspend Thread” functionality. FR7506 Bug Fixed a NullPointerException in FusionReactor when viewing profiles via the Cloud UI. FR7507 Bug Fixed an issue where the Request Content Capture feature would affect ColdFusion AJAX request. FR7494 Bug Fixed an issue where the “Stack Trace All” Plain view showed ‘blocked’ threads when the ‘waiting’ filter was active. FR7492 Bug Fixed an issue with the licensing debug page ‘ODL Information’ Copy to Clipboard button throws a JavaScript TypeError. FR7466 Bug Fixed an issue where ColdFusion requests could be stuck due to the line performance tool even when it is disabled. 
FR7444 Bug Fixed an issue where the profiler would break on any java thread which doesn’t have a stack trace. FR7420 Bug Fixed an issue where pressing the start profiler button could result in incorrect profiling of the thread. FR7421 Bug Fixed an issue where the duration of running profiles didn’t update until the thread was profiled. I.e at each sample. FR7413 Bug Fixed an issue with the decompiler breaking StringConcatFactory but never informing the user why if failed. FR7409 Bug Fixed an issue with CFHTTP calls in CF 2016which would throw a NoSuchFieldError when FusionReactor attempted toread the query_string property. FR7533 Bug Fixed an issue where manually installingFusionReactor (outside of the normal install process) would cause thelicense activation to fail with a NullPointerException. FR7498 Bug Fixed a ConcurrentModificationException whichcould occur if a new tracked statistical series became visible whileFusionReactor was preparing a Cloud datapack for upload. FR7481 Bug Fixed an issue with CFMAIL ColdFusion tag where the from and to addresses would be swapped around. FR7472 Bug Fixed an issue where internal debug checks were enabled in the release version of FusionReactor which would then breakCFPOP CFIMAP tags in ColdFusion. FR7461 Bug Fixed a NullPointerException in the Cloud Retry thread which could occur when we log the internal state changes of theconnection. FR7416 Bug Fixed a ConcurrentModificationException in the Enterprise Dashboard when servers are registered and deregistered automatically. FR7402 Bug Fixed the tracking of ColdFusion ASYNC requests in ColdFusion 2018. FR7388 Bug Fixed an issue where ColdBox transaction names are tracked as index.cfm rather than using event name. FR7338 Bug Fixed an issue where the description HTTP Client calls was incorrect with some versions. FR7601 Bug Fixed an issue where the number of transactions sent to the cloud were unlimited. FR7530 Bug Fixed an issue where the Debugger throws NumberFormatException when using the next frame button in the UI. FR7482 Bug Fixed an issue where conditional breakpoints fail on CF 2018 because CFEVALUATE signature changed slightly. FR7425 Bug Fixed a StringIndexOutOfBoundsException when FusionReactor tracks Mongo operations. FR7426 Bug Fixed Crash Protection and Debug emails when triggered from Grizzly and Jersey WebRequests. FR7412 Bug Fixed a java.lang.AbstractMethodError when monitoring using WebSockets in WildFly 13. FR7406 Bug Fixed an issue where stack traces would show the incorrect hash code for a lock under ownable synchronizers. FR7400 Bug Fixed an issue where the Docker truncated container ID in the license activation message is sometimes discovered incorrectly. FR7382 Bug Fixed an issue where the host in the description on mongo transaction is sometimes null. FR7207 Bug Fixed an issue where the profiler would showvery short profile times when the profiler kicks in just as thetransaction finishes. FR7590 Bug Fixed an issue where Request would not be filtered / limited when running recent / history IR from the Cloud. FR7455 Bug Fixed the HitCount data (application / database) information when its sent to cloud. It could sometimes contain partial data. FR7417 Bug Fixed an issue where transactions do not have the correct application name for some transactions. FR7617 Bug Fixed an issue where the Request detail from the Cloud would return invalid json. 
FR7613 Bug Fixed an issue where the Database page would display negative memory allocations when the setting was disabled for sub transactions. FR7427 Bug Fixed an issue where the sigar library (native system metrics library) cannot be loaded on IBM i and ARM operating systems and would cause FusionReactor to fail to start. FR7626 Bug Fixed an issue where the Stack Trace All button would show the label twice due to a UI refresh timing problem.
https://docs.intergral.com/release-notes/
2022-05-16T11:40:58
CC-MAIN-2022-21
1652662510117.12
[]
docs.intergral.com
- Press the date field to open the date picker.
- To browse by month, press the month name at the top of the calendar. All months for the current year display.
- Press the appropriate month for the date in question.
- To change the year, use the left and right arrows on the display as needed. If you need to go back a number of years, press the year at the top of the display. The browser displays a list of years.
- Locate and select the appropriate year. If necessary, use the arrows to scroll through groups of years.
https://docs.openclinica.com/3-1/participate/participate-date-pickers/
2022-05-16T13:05:32
CC-MAIN-2022-21
1652662510117.12
[]
docs.openclinica.com
Tortle Combo Recipes
Tortle uses Tortle Combo Recipes: sets of different nodes that create products and strategies. Our first recipe is what we call a Bitcoin Savings Account. The Bitcoin Savings Account is a decentralized finance strategy that lets any user profit from Bitcoin and DeFi growth over time and sell when certain conditions are fulfilled. The recipe works as follows:
First, split and swap any coin into the wBTC-USDC pair at a 50/50 ratio.
Get LP tokens in a liquidity pool.
Start farming on SPOOKYSWAP and earn reward tokens.
Configure a simple oracle to set a condition that stops farming (for example, when BTC falls under $30K).
Unstake the LP tokens.
Liquidate all tokens to USDC.
You are all set from a single interface. Most of these operations are transparent to users; they only have to:
Add funds in any desired currency.
Choose the BTC-related farm.
Set the condition to stop farming.
Set the output currency.
That's it! Welcome to the future of finance.
MORE RECIPES
Crypto Multibasket (Risky) 🧺🧺🧺 This recipe lets you create a basket of different cryptocurrencies, yield-farm them, obtain tokens, and liquidate them when certain conditions occur, giving you a diversified portfolio of yields and maximizing profits. With it, we create the most profitable combination on SpookySwap by mixing 3 tokens and putting 33% of our funds into each pair. A limit order is set for BTC going under $30K; in that case, all 3 stakings end, and the 3 tokens and all the reward tokens earned are swapped to USDC.
Crypto Champions Multi Stop-Loss 💎💎💎 This recipe lets crypto newcomers get the best of multiple cryptocurrencies and liquidate them when the value of each goes down. In this case, we buy some wBTC, BOO, ETH, and FTM and sell them (independently) if they reach a certain level.
HODLer This recipe lets you buy some volatile cryptocurrencies when they reach a certain level and send them to your wallet.
DipMaster 😼😼😼 This recipe lets you buy some volatile cryptocurrencies when they are cheap enough and sell them if they go up again.
Continuous Cash 💰💰💰 This DeFi strategy lets you maintain your principal while you liquidate the reward token into a stablecoin; basically, you can spend your earnings and keep your principal. For this recipe you just add funds, choose a yield farm option, and split the operation into 2, sending the farm tokens back to the yield farm and the reward token to be swapped for a stablecoin and sent straight to your wallet, so every week you can have fresh, well-earned money in your pocket.
TORTLE Farming This recipe lets you stake and unstake LP tokens for any token in a single operation. Like the Bitcoin Savings Account, it gives the user the possibility to liquidate complex positions at any time, as well as to watch their earnings and other users' earnings over time. Even if the graph is a bit complex, the visual representation using our node system is really easy to use. To create this recipe you only have to use 4 nodes, making it easy for anyone to build.
https://docs.tortle.ninja/tortle-power-combo-recipes
2022-05-16T11:04:04
CC-MAIN-2022-21
1652662510117.12
[]
docs.tortle.ninja
3 Common Tasks¶. 3.1 Understanding and Creating Layers¶ The OpenEmbedded build system supports organizing Metadata into multiple layers. Layers allow you to isolate different types of customizations from each other. For introductory information on the Yocto Project Layer Model, see the “The Yocto Project Layer Model” section in the Yocto Project Overview and Concepts Manual. 3.1.1 Creating Your Own Layer¶ Note It is very easy to create your own layers to use with the OpenEmbedded build system, as the Yocto Project ships with tools that speed up creating layers. This section describes the steps you perform by hand to create layers so that you can better understand them. For information about the layer-creation tools, see the “Creating a new BSP Layer Using the bitbake-layers Script” section in the Yocto Project Board Support Package (BSP) Developer’s Guide and the “Creating a General Layer Using the bitbake-layers Script” section further down in this manual. Follow these general steps to create your layer without using tools:. You could find a layer that is identical or close to what you need. Create a Directory: Create the directory for your layer. When you create the layer, be sure to create the directory in an area not associated with the Yocto Project Source Directory (e.g. the cloned pokyrepository). While not strictly required, prepend the name of the directory with the string “meta-“. For example: meta-mylayer meta-GUI_xyz meta-mymachine With rare exceptions, a layer’s name follows this form: meta-root_name Following this layer naming convention can save you trouble later when tools, components, or variables “assume” your layer name begins with “meta-“. A notable example is in configuration files as shown in the following step where layer names without the “meta-” string are appended to several variables used in the configuration. Create a Layer Configuration File: Inside your new layer folder, you need to create a conf/layer.conffile. It is easiest to take an existing layer configuration file and copy that to your layer’s confdirectory and then modify the file as needed. The meta-yocto-bsp/conf/layer.conffile in the Yocto Project Source Repositories demonstrates the required syntax. For your layer, you need to replace “yoctobsp” with a unique identifier for your layer (e.g. “machinexyz” for a layer named “meta-machinexyz”): # = "dunfell" Following is an explanation of the layer configuration file: BBPATH: Adds the layer’s root directory to BitBake’s search path. Through the use of the BBPATH variable, BitBake locates class files ( .bbclass), configuration files, and files that are included with includeand requirestatements. For these cases, BitBake uses the first file that matches the name found in BBPATH. This is similar to the way the PATHvariable is used for binaries. It is recommended, therefore, that you use unique class and configuration filenames in your custom layer. BBFILES: Defines the location for all recipes in the layer. BBFILE_COLLECTIONS: Establishes the current layer through a unique identifier that is used throughout the OpenEmbedded build system to refer to the layer. In this example, the identifier “yoctobsp” is the representation for the container layer named “meta-yocto-bsp”. BBFILE_PATTERN: Expands immediately during parsing to provide the directory of the layer. BBFILE_PRIORITY: Establishes a priority to use for recipes in the layer when the OpenEmbedded build finds recipes of the same name in different layers. 
LAYERVERSION: Establishes a version number for the layer. You can use this version number to specify this exact version of the layer as a dependency when using the LAYERDEPENDS variable. LAYERDEPENDS: Lists all layers on which this layer depends (if any). LAYERSERIES_COMPAT: Lists the Yocto Project releases for which the current version is compatible. This variable is a good way to indicate if your particular layer is current.. Note For an explanation of layer hierarchy that is compliant with the Yocto Project, see the “Example Filesystem Layout” section in the Yocto Project Board Support Package (BSP) Developer’s Guide.. 3.1.2 Following Best Practices When Creating Layers¶ To create layers that are easier to maintain and that will not impact builds for other machines, you should consider the information in the following list:/package .incinstead of requirefile Variables to Support a Different Machine: Suppose you have a layer named meta-onethat adds support for building machine “one”. To do so, you use an append file named base-files.bbappendand create a dependency on “foo” by altering the DEPENDS variable: DEPENDS = "foo" The dependency is created during any build that includes the layer meta-one. However, you might not want this dependency for all machines. For example, suppose you are building for machine “two” but your bblayers.conffile has the meta-onelayer included. During the build, the base-filesfor machine “two” will also have the dependency on foo. To make sure your changes apply only when building machine “one”, use a machine override with the DEPENDS statement: DEPENDS:one = "foo" You should follow the same strategy when using :appendand :prependoperations: DEPENDS:append:one = " foo" DEPENDS:prepend:one = "foo " As an actual example, here’s a snippet from the generic kernel include file linux-yocto.inc, wherein the kernel compile and link options are adjusted in the case of a subset of the supported architectures: DEPENDS:append:aarch64 = " libgcc" KERNEL_CC:append:aarch64 = " ${TOOLCHAIN_OPTIONS}" KERNEL_LD:append:aarch64 = " ${TOOLCHAIN_OPTIONS}" DEPENDS:append:nios2 = " libgcc" KERNEL_CC:append:nios2 = " ${TOOLCHAIN_OPTIONS}" KERNEL_LD:append:nios2 = " ${TOOLCHAIN_OPTIONS}" DEPENDS:append:arc = " libgcc" KERNEL_CC:append:arc = " ${TOOLCHAIN_OPTIONS}" KERNEL_LD:append:arc = " ${TOOLCHAIN_OPTIONS}" KERNEL_FEATURES:append:qemuall=" features/debug/printk.scc"could.conffile includes the meta-onelayer.-layer_nameformat. Group Your Layers Locally: Clone your repository alongside other cloned metadirectories from the Source Directory. 3.1.3 Making Sure Your Layer is Compatible With Yocto Project¶. Note Only Yocto Project member organizations are permitted to use the Yocto Project Compatible Logo. The logo is not available for general use. For information on how to become a Yocto Project member organization, see the Yocto Project Website.”. Be a Yocto Project Member Organization. The remainder of this section presents information on the registration form and on the yocto-check-layer script. 3.1.3.1 Yocto Project Compatible Program Application¶. There is space at the bottom of the form for any explanations for items for which you answered “No”. Recommendations: Provide answers for the questions regarding Linux kernel use and build success. 3.1.3.2 yocto-check-layer Script¶SP,MEfile_world: Verifies that bitbake worldworks. common.test_signatures: Tests to be sure that BSP and DISTRO layers do not come with recipes that change signatures. 
common.test_layerseries_compat: Verifies layer compatibility is set properly. bsp.test_bsp_defines_machines: Tests if a BSP layer has machine configurations. bsp.test_bsp_no_set_machine: Tests to ensure a BSP layer does not set the machine when the layer is added. bsp.test_machine_world: Verifies that bitbake worldworks regardless of which machine is selected. bsp.test_machine_signatures: Verifies that building for a particular machine affects only the signature of tasks specific to that machine. distro.test_distro_defines_distros: Tests if a DISTRO layer has distro configurations. distro.test_distro_no_set_distros: Tests to ensure a DISTRO layer does not set the distribution when the layer is added. 3.1.4 Enabling Your Layer¶ your new meta-mylayer layer (note how your new layer exists outside of the official poky repository which you would have checked out earlier): # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf # changes incompatibly POKY_BBLAYERS_CONF_VERSION = "2" BBPATH = "${TOPDIR}" BBFILES ?= "" BBLAYERS ?= " \ /home/user/poky/meta \ /home/user/poky/meta-poky \ /home/user/poky/meta-yocto-bsp \ /home/user/mystuff/meta-mylayer \ " BitBake parses each conf/layer.conf file from the top down as specified in the BBLAYERS variable within the conf/bblayers.conf file. During the processing of each conf/layer.conf file, BitBake adds the recipes, classes and configurations contained within the particular layer to the source directory. 3.1.5 Appending Other Layers Metadata With Your Layer¶_3.1.bbappend must apply to someapp_3.1.bb. This means the original recipe and append filenames are version number-specific. If the corresponding recipe is renamed to update to a newer version, you must also rename and possibly update the corresponding .bbappend. 3.1.5.1 Overlaying a File Using Your Layer¶" DESCRIPTION = "A formfactor configuration file provides information about the \ target hardware for which the image is being built and information that the \ build system cannot obtain from other sources such as the kernel." SECTION = "base" LICENSE = "MIT" LIC_FILES_CHKSUM = " Raspberry Pi BSP Layer named meta-raspberrypi. The file is in the layer formfactor in the same directory in which the append file resides (i.e. meta-raspberrypi/recipes-bsp/formfactor. This implies that you must have the supporting directory structure set up that will contain any files or patches you will be including from the layer. ${THISDIR }/${PN }, which resolves to a directory named Using the immediate expansion assignment operator :=is important because of the reference to THISDIR. The trailing colon character is important as it ensures that items in the list remain colon-separated. Note BitBake automatically defines the THISDIR variable. You should never set this variable yourself. Using “:prepend” as part of the FILESEXTRAPATHS ensures your path will be searched prior to other paths in the final list. Also, not all append files add extra files. Many append files simply allow to add build options (e.g. systemd). For these cases, your append file would not even use the FILESEXTRAPATHS statement. The end result of this .bbappend file is that on a Raspberry Pi, where rpi will exist in the list of OVERRIDES, the file meta-raspberrypi/recipes-bsp/formfactor/formfactor/rpi/machconfig will be used during do_fetch and the test for a non-zero file size in do_install will return true, and the file will be installed. 
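To run these checks, invoke the yocto-check-layer script against your layer from an initialized build environment. A minimal sketch, assuming your layer is checked out at /path/to/meta-mylayer (substitute your own path) and that you source the build environment from your poky checkout first:

$ source oe-init-build-env
$ yocto-check-layer /path/to/meta-mylayer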
3.1.5.2 Installing Additional Files Using Your Layer¶ As another example, consider the main xserver-xf86-config recipe and a corresponding xserver-xf86-config append file both from the Source Directory. Here is the main xserver-xf86-config recipe, which is named xserver-xf86-config_0.1.bb and located in the “meta” layer at meta/recipes-graphics/xorg-xserver: SUMMARY = "X.Org X server configuration file" HOMEPAGE = " SECTION = "x11/base" LICENSE = "MIT" LIC_FILES_CHKSUM = "{COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420" PR = "r33" SRC_URI = "" S = "${WORKDIR}" CONFFILES:${PN} = "${sysconfdir}/X11/xorg.conf" PACKAGE_ARCH = "${MACHINE_ARCH}" ALLOW_EMPTY:${PN} = "1" do_install () { if test -s ${WORKDIR}/xorg.conf; then install -d ${D}/${sysconfdir}/X11 install -m 0644 ${WORKDIR}/xorg.conf ${D}/${sysconfdir}/X11/ fi } Following is the append file, which is named xserver-xf86-config_%.bbappend and is from the Raspberry Pi BSP Layer named meta-raspberrypi. The file is in the layer at recipes-graphics/xorg-xserver: FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:" SRC_URI:append:rpi = " \ \ \ " do_install:append:rpi () { PITFT="${@bb.utils.contains("MACHINE_FEATURES", "pitft", "1", "0", d)}" if [ "${PITFT}" = "1" ]; then install -d ${D}/${sysconfdir}/X11/xorg.conf.d/ install -m 0644 ${WORKDIR}/xorg.conf.d/98-pitft.conf ${D}/${sysconfdir}/X11/xorg.conf.d/ install -m 0644 ${WORKDIR}/xorg.conf.d/99-calibration.conf ${D}/${sysconfdir}/X11/xorg.conf.d/ fi } FILES:${PN}:append:rpi = " ${sysconfdir}/X11/xorg.conf.d/*" Building off of the previous example, we once again are setting the FILESEXTRAPATHS variable. In this case we are also using SRC_URI to list additional source files to use when rpi is found in the list of OVERRIDES. The do_install task will then perform a check for an additional MACHINE_FEATURES that if set will cause these additional files to be installed. These additional files are listed in FILES so that they will be packaged. 3.1.6 Prioritizing Your Layer¶ and append the layer’s root name: BBFILE_PRIORITY_mylayer = "1" 3.1.7 Managing Layers¶ You can use the BitBake layer management tool bitbake-layers to provide a view into the structure of recipes across a multi-layer project. Being able to generate output that reports on configured layers with their paths and priorities and on .bbappend files and their applicable recipes can help to reveal potential problems. For help on the BitBake layer management tool, use the following command: $ bitbake-layers --help NOTE: Starting bitbake server... usage: bitbake-layers [-d] [-q] [-F] [--color COLOR] [-h] <subcommand> ... BitBake layers utility optional arguments: -d, --debug Enable debug output -q, --quiet Print only errors -F, --force Force add without recipe parse verification --color COLOR Colorize output (where COLOR is auto, always, never) -h, --help show this help message and exit subcommands: <subcommand> layerindex-fetch Fetches a layer from a layer index along with its dependent layers, and adds them to conf/bblayers.conf. layerindex-show-depends Find layer dependencies from layer index. add-layer Add one or more layers to bblayers.conf. remove-layer Remove one or more layers from bblayers.conf. flatten flatten layer configuration into a separate output directory. show-layers show current configured layers. 
show-overlayed list overlayed recipes (where the same recipe exists in another layer) show-recipes list available recipes, showing the layer they are provided by show-appends list bbappend files and recipe files they apply to show-cross-depends Show dependencies between recipes that cross layer boundaries. create-layer Create a basic layer Use bitbake-layers <subcommand> --help to get help on a specific command The following list describes the available commands: help:Displays general help or help on a specified command. show-layers:Shows the current configured layers. show-overlayed:Lists overlayed recipes. A recipe is overlayed when a recipe with the same name exists in another layer that has a higher layer priority. show-recipes:Lists available recipes and the layers that provide them. show-appends:Lists .bbappendfiles and the recipe files to which they apply. show-cross-depends:Lists dependency relationships between recipes that cross layer boundaries. add-layer:Adds a layer to bblayers.conf. remove-layer:Removes a layer from bblayers.conf flatten:Flattens the layer configuration into a separate output directory. Flattening your layer configuration builds a “flattened” directory that contains the contents of all layers, with any overlayed recipes removed and any .bbappendfiles.conffile. Only the lowest priority layer’s layer.confis used. Overridden and appended items from .bbappendfiles need to be cleaned up. The contents of each .bbappendend up in the flattened recipe. However, if there are appended or changed variable values, you need to tidy these up yourself. Consider the following example. Here, the bitbake-layerscommand" ... layerindex-fetch: Fetches a layer from a layer index, along with its dependent layers, and adds the layers to the conf/bblayers.conffile. layerindex-show-depends: Finds layer dependencies from the layer index. create-layer: Creates a basic layer. 3.1.8 Creating a General Layer Using the bitbake-layers Script¶ The bitbake-layers script with the create-layer subcommand simplifies creating a new general layer. Note For information on BSP layers, see the “BSP Layers” section in the Yocto Project Board Specific (BSP) Developer’s Guide. In order to use a layer with the OpenEmbedded build system, you need to add the layer to your bblayers.confconfiguration file. See the “Adding a Layer Using the bitbake-layers Script” section for more information. The default mode of the script’s operation with this subcommand is to create a layer with the following: A layer priority of 6. A confsubdirectory that contains a layer.conffile. A recipes-examplesubdirectory that contains a further subdirectory named example, which contains an example.bbrecipe file. A COPYING.MIT, which is the license statement for the layer. The script assumes you want to use the MIT license, which is typical for most layers, for the contents of the layer itself. A READMEfile, As an example, the following command creates a layer named meta-scottrif in your home directory: $ cd /usr/home $ bitbake-layers create-layer meta-scottrif NOTE: Starting bitbake server... Add your new layer with 'bitbake-layers add-layer meta-scottrif' If you want to set the priority of the layer to other than the default value of “6”, you can either use the --priority option or you can edit the BBFILE_PRIORITY value in the conf/layer.conf after the script creates it. 
Furthermore, if you want to give the example recipe file some name other than the default, you can use the --example 3.1.9 Adding a Layer Using the bitbake-layers Script¶ Once you create your general layer, you must add it to your bblayers.conf file. Adding the layer to this configuration file makes the OpenEmbedded build system aware of your layer so that it can search it for metadata. Add your layer by using the bitbake-layers add-layer command: $ bitbake-layers add-layer your_layer_name Here is an example that adds a layer named meta-scottrif to the configuration file. Following the command that adds the layer is another bitbake-layers command that shows the layers that are in your bblayers.conf file: $ bitbake-layers add-layer meta-scottrif NOTE: Starting bitbake server... Parsing recipes: 100% |##########################################################| Time: 0:00:49 Parsing of 1441 .bb files complete (0 cached, 1441 parsed). 2055 targets, 56 workspace /home/scottrif/poky/build/workspace 99 meta-scottrif /home/scottrif/poky/build/meta-scottrif 6 Adding the layer to this file enables the build system to locate the layer during the build. Note During a build, the OpenEmbedded build system looks in the layers from the top of the list down to the bottom in that order. 3.2 Customizing Images¶ You can customize images to satisfy particular requirements. This section describes several methods and provides guidelines for each. 3.2.1 Customizing Images Using local.conf¶ leading space after the opening quote and before the package name, which is strace in this example. This space is required since the :append operator does not add the space. Furthermore, you must use :append instead of the .bbclass files with operators like :append ensures the operation takes effect. +=operator if you want to avoid ordering issues. The reason for this is because doing so unconditionally appends to the variable and avoids ordering problems due to the variable being set in image recipes and ?=. Using. 3.2.2 Customizing Images Using Custom IMAGE_FEATURES and EXTRA_IMAGE_FEATURES¶/image.bbclass.. Note See the “Image Features” section in the Yocto Project Reference Manual for a complete list of image features that ship with the Yocto Project. 3.2.3 Customizing Images Using Custom .bb Files¶" 3.2.4 Customizing Images Using Custom Package Groups¶ For complex custom images, the best approach for customizing an image is to create a custom package group recipe that is used to build the image or images. A good example of a package group recipe is meta/recipes-core/packagegroups/packagegroup-base.bb. If you examine that recipe, you see that the PACKAGES variable lists the package group packages to produce. The inherit packagegroup statement sets appropriate default values and automatically adds -dev, -dbg, and -ptest complementary packages for each package specified in the PACKAGES statement. Note The inherit packagegroup line should be located near the top of the recipe, certainly before the PACKAGES statement. for a hypothetical packagegroup defined in packagegroup-custom.bb, where the variable PN is the standard way to abbreviate the reference to the full packagegroup name packagegroup-custom: DESCRIPTION = "My Custom Package Groups" inherit packagegroup PACKAGES = "\ ${PN}-apps \ ${PN}-tools \ " RDEPENDS:${PN}-apps = "\ dropbear \ portmap \ psplash" RDEPENDS:${PN}-tools = "\ oprofile \ oprofileui-server \ lttng-tools" RRECOMMENDS:${PN}. 3.2.5 Customizing an Image Hostname¶. 
3.3 Writing a New Recipe¶ Recipes ( .bb files) are fundamental components in the Yocto Project environment. Each software component built by the OpenEmbedded build system requires a recipe to define the component. This section describes how to create, write, and test a new recipe. Note For information on variables that are useful for recipes and for information about recipe naming issues, see the “Recipes” section of the Yocto Project Reference Manual. 3.3.1 Overview¶ The following figure shows the basic process for creating a new recipe. The remainder of the section provides details for the steps. 3.3.2 Locate or Automatically Create a Base Recipe¶ You can always write a recipe from scratch. However, there are three choices that can help you quickly get started. Note For information on recipe syntax, see the “Recipe Syntax” section. 3.3.2.1 Creating the Base Recipe Using devtool add¶. 3.3.2.2 Creating the Base Recipe Using recipetool create¶). To get help on the tool, use the following command: $ recipetool -h NOTE: Starting bitbake server... usage: recipetool [-d] [-q] [--color COLOR] [-h] <subcommand> ... OpenEmbedded recipe tool options: -d, --debug Enable debug output -q, --quiet Print only errors --color COLOR Colorize output (where COLOR is auto, always, never) -h, --help show this help message and exit subcommands: create Create a new recipe newappend Create a bbappend for the specified target in the specified layer setvar Set a variable within a recipe appendfile Create/update a bbappend to replace a target file appendsrcfiles Create/update a bbappend to add or replace source files appendsrcfile Create/update a bbappend to add or replace a source file Use recipetool <subcommand> --help to get help on a specific commandto generate debugging information. Once generated, the recipe resides in the existing source code layer:recipetool create -d -o OUTFILE source 3.3.2.3 Locating and Using a Similar Recipe¶ Layer: If for some reason you do not want to use recipetooland you cannot find an existing recipe that is close to meeting your needs, you can use the following structure to provide the fundamental areas of a new recipe. DESCRIPTION = "" HOMEPAGE = "" LICENSE = "" SECTION = "" DEPENDS = "" LIC_FILES_CHKSUM = "" SRC_URI = "" 3.3.3 Storing and Naming the Recipe¶.conffilecasually (i.e. do not use them as part of your recipe name unless the string applies). Here are some examples: cups_1.7.0.bb gawk_4.0.2.bb irssi_0.8.16-rc1.bb 3.3.4 Running a Build on the Recipe¶ each recipe ( ${WORKDIR }) where it keeps extracted source files, log files, intermediate compilation and packaging files, and so forth. The path to the per-recipe temporary work directory depends on the context in which it is being built. The quickest way to find this path is to have BitBake return it by running the following: $ bitbake -e basename | grep ^WORKDIR=. Note You can find log files for each task in the recipe’s temp directory (e.g. poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0/temp). Log files are named log.taskname (e.g. log.do_configure, log.do_fetch, and log.do_compile). You can find more information about the build process in “The Yocto Project Development Environment” chapter of the Yocto Project Overview and Concepts Manual. 3.3.5 Fetching Code¶ Overview and Concepts version numbers in a URL used in SRC_URI. Rather than hard-code these values,/strace/strace_5.5.bb recipe where the source comes from a single tarball. 
Notice the use of the PV variable: SRC_UR-core/musl/gcompat_git.bb: SRC_URI = "git://git.adelielinux.org/adelie/gcompat.git;protocol= PV = "1.0.0+1.1+git${SRCPV}" SRCREV = "af5a49e489fdc04b9cf02547650d7aeaccd43793" combining lines from the files git.inc and git_2.24.1.bb: SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \ ${KERNELORG_MIRROR}/software/scm/git/git-manpages-${PV}.tar.gz;name=manpages" SRC_URI[tarball.md5sum] = "166bde96adbbc11c8843d4f8f4f9811b" SRC_URI[tarball.sha256sum] = "ad5334956301c86841eb1e5b1bb20884a6bad89a10a6762c958220c7cf64da02" SRC_URI[manpages.md5sum] = "31c2272a8979022497ba3d4202df145d" SRC_URI[manpages.sha256sum] = "9a7ae3a093bea39770eb96ca3e5b40bff7af0b9f6123f089d7821d0e5b8e1230", or you provide an incorrect checksum, the build will produce an error for each missing or incorrect checksum. As part of the error message, the build system provides the checksum string corresponding to the fetched file. Once you have the correct checksums, you can copy and paste them into your recipe and then run the build again to continue. Note As mentioned, if the upstream source provides signatures for verifying the downloaded source code, you should verify those manually before setting the checksum values in the recipe and continuing with the build. = " \ \ \" When you specify local files using the file:// URI protocol, the build system fetches files from the local machine. The path is relative to the FILESPATH variable and searches specific directories in a certain order: files. The directories are assumed to be subdirectories of the directory in which the recipe or append file resides. For another example that specifies these types of files, see the “Single .c File Package (Hello World!)” section. ${BP }, ${BPN }, and The previous example also specifies a patch file. Patch files are files whose names usually end in .patch or .diff but can end with compressed suffixes such as diff.gz and patch.bz2, for example. The build system automatically applies patches as described in the “Patching Code” section. 3.3.5.1 Fetching Code Through Firewalls¶ Some users are behind firewalls and need to fetch code through a proxy. See the “FAQ” chapter for advice. 3.3.5.2 Limiting the Number of Parallel Connections¶ Some users are behind firewalls or use servers where the number of parallel connections is limited. In such cases, you can limit the number of fetch tasks being run in parallel by adding the following to your local.conf file: do_fetch[number_threads] = "4" 3.3.6 Unpacking Code¶. 3.3.7 Patching Code¶ Sometimes it is necessary to patch code after it has been fetched. Any files mentioned in SRC_URI whose names end in .patch or .diff or compressed versions of these suffixes (e.g. diff.gzP and BPN) or “files”. 3.3.8 Licensing¶MEfiles. You could also find the information near the top of a source file. For example, given a piece of software licensed under the GNU General Public License version 2, you would set LICENSE as follows: LICENSE = "GPL-2.0-only"file: LIC_FILES_CHKSUM = ";md5=xxx" When you try to build the software, the build system will produce an error and give you the correct string that you can substitute into the recipe file for a subsequent build. 3.3.9 Dependencies¶ Most software there are nuances, items specified in DEPENDS should be names of other recipes. It is important that you specify all build-time dependencies explicitly. 
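For example, a hypothetical recipe that needs zlib and openssl at build time would declare (a sketch; the names are examples and refer to other recipes, not host libraries):

DEPENDS = "zlib openssl"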
Another consideration is that configure scripts might automatically check for optional dependencies and enable corresponding functionality if those dependencies are found. Runtime dependencies are specified through the package-specific RDEPENDS variable. Because the main package for a recipe has the same name as the recipe, which is available through the ${PN} variable, you specify the dependencies for the main package by setting RDEPENDS:${PN}. If the package were named ${PN}-tools, then you would set RDEPENDS:${PN}-tools, and so forth. See the “Automatically Added Runtime Dependencies” section in the Yocto Project Overview and Concepts Manual for further details.

3.3.10 Configuring the Recipe¶

Most software provides some means of setting build-time configuration options before compilation. Typically, setting these options is accomplished by running a configure script with options, or by modifying a build configuration file.

Note: As of Yocto Project Release 1.7, some of the core recipes that package binary configuration scripts now disable the scripts due to the scripts previously requiring error-prone path substitution. The OpenEmbedded build system uses pkg-config now, which is much more robust. You can find a list of the *-config scripts that are disabled in the “Binary Configuration Scripts Disabled” section in the Yocto Project Reference Manual.

Autotools: If your source files have a configure.ac file, then your software is built using Autotools. If this is the case, you just need to modify the configuration. When using Autotools, your recipe needs to inherit the autotools class and does not have to contain a do_configure task. However, you might still want to make some adjustments. For example, you can set EXTRA_OECONF or PACKAGECONFIG_CONFARGS to pass any needed configure options that are specific to the recipe.

CMake: If your source files have a CMakeLists.txt file, then your software is built using CMake. If this is the case, you just need to modify the configuration.

Note: If you need to install one or more custom CMake toolchain files that are supplied by the application you are building, install the files to ${D}${datadir}/cmake/Modules during do_install.

Other: If your source files have neither a configure.ac nor a CMakeLists.txt file, the software uses some other configuration mechanism. To discover the available options, run the configure command with --help from within ${S} or consult the software’s upstream documentation.

3.3.11 Using Headers to Interface with Devices¶

If your recipe builds an application that needs to communicate with some device or needs an API into a custom kernel, you will need to provide appropriate header files.

Note: Never copy and customize the libc header files. The libc headers are an interface to the C library, and not something you use to access the kernel directly. You should access libc through specific libc calls.

Note: If for some reason your changes need to modify the behavior of the libc, and subsequently all other applications on the system, use a .bbappend to modify the linux-kernel-headers.inc file.

3.3.12 Compilation¶

Note: For cases where improper paths are detected for configuration files or for when libraries/headers cannot be found, be sure you are using the more robust pkg-config. See the note in section “Configuring the Recipe” for additional information.

A common compilation failure applies to recipes building for the target only: it occurs when the compilation process uses improper headers, libraries, or other files from the host system when cross-compiling for the target. To fix the problem, examine the log.do_compile file to identify the host paths being used, and then correct the configuration or apply a patch.

3.3.13 Installing¶

How the built software gets installed depends on its build system:

Autotools and CMake: If the software your recipe is building uses Autotools or CMake, the OpenEmbedded build system understands how to install the software, so you do not need to provide a do_install task as part of your recipe. You just need to make sure the install portion of the build completes with no issues. However, if you wish to install additional files not already being installed by make install, you should do this using a do_install:append function using the install command as described in the “Manual” bulleted item later in this list.

Other (using make install): You need to define a do_install function in your recipe. The function should call oe_runmake install and will likely need to pass in the destination directory as well.
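A minimal sketch of such a function, assuming the software's Makefile honors the conventional DESTDIR variable:

do_install() {
    oe_runmake install DESTDIR=${D}
}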
How you pass that path is dependent on how the Makefilebeing run is written (e.g. DESTDIR=${D}, PREFIX=${D}, INSTALLROOT=${D}, and so forth). For an example recipe using make install, see the “Makefile-Based Package” section. Manual: You need to define a do_installfunction in your recipe. The function must first use install -dto create the directories under installto manually install the built software into the directories. You can find more information on install. Noteif needed. oe_runmake install, which can be run directly or can be run indirectly by the autotools and cmake classes, runs make install need to install one or more custom CMake toolchain files that are supplied by the application you are building, install the files to ${D}${datadir}/cmake/Modulesduring do_install. 3.3.14 Enabling System Services¶file located in your Source Directory section for more information. 3.3.15 Packaging¶ Successful packaging is a combination of automated processes performed by the OpenEmbedded build system and some specific steps you need to take. The following list describes the process: Splitting Files: The do_packagetask splits the files produced by the recipe into logical components. Even software that produces a single binary might still have debug symbols, documentation, and other logical components that should be split out. The do_packagetask }/packages-splitdirectory apply to any machine with the same architecture as the target machine. When a recipe produces packages that are machine-specific (e.g. the MACHINE value is passed into the configure script or a patch is applied only for a particular machine), you should mark them as such. 3.3.17 Using Virtual Providers¶ Prior to a build, if you know that several different recipes provide the same functionality, you can use a virtual provider (i.e. virtual/*) as a placeholder for the actual provider. The actual provider is determined at build-time. A common scenario where a virtual provider is used would be for the kernel recipe. Suppose you have three kernel recipes whose PN values map to kernel-big, kernel-mid, and kernel-small. Furthermore, each of these recipes in some way uses a PROVIDES statement that essentially identifies itself as being able to provide virtual/kernel. Here is one way through the kernel class: PROVIDES += "virtual/kernel" Any recipe that inherits the kernel class is going to utilize a PROVIDES statement that identifies that recipe as being able to provide the virtual/kernel item. Now comes the time to actually build an image and you need a kernel recipe, but which one? You can configure your build to call out the kernel recipe you want by using the PREFERRED_PROVIDER variable. As an example, consider the x86-base.inc include file, which is a machine (i.e. MACHINE) configuration file. This include file is the reason all x86-based machines use the linux-yocto kernel. Here are the relevant lines from the include file: PREFERRED_PROVIDER_virtual/kernel ??= "linux-yocto" PREFERRED_VERSION_linux-yocto ??= "4.15%" When you use a virtual provider, you do not have to “hard code” a recipe name as a build dependency. You can use the DEPENDS variable to state the build is dependent on virtual/kernel for example: DEPENDS = "virtual/kernel" During the build, the OpenEmbedded build system picks the correct recipe needed for the virtual/kernel dependency based on the PREFERRED_PROVIDER variable. 
If you want to use the small kernel mentioned at the beginning of this section, configure your build as follows:

PREFERRED_PROVIDER_virtual/kernel ??= "kernel-small"

Note: Any recipe that PROVIDES a virtual/* item that is ultimately not selected through PREFERRED_PROVIDER does not get built. Preventing these recipes from building is usually the desired behavior since this mechanism’s purpose is to select between mutually exclusive alternative providers.

The following lists specific examples of virtual providers:

virtual/kernel: Provides the name of the kernel recipe to use when building a kernel image.
virtual/bootloader: Provides the name of the bootloader to use when building an image.
virtual/libgbm: Provides gbm.pc.
virtual/egl: Provides egl.pc and possibly wayland-egl.pc.
virtual/libgl: Provides gl.pc (i.e. libGL).
virtual/libgles1: Provides glesv1_cm.pc (i.e. libGLESv1_CM).
virtual/libgles2: Provides glesv2.pc (i.e. libGLESv2).

3.3.18 Properly Versioning Pre-Release Recipes¶

When a recipe builds a pre-release version of the software, construct PV so that the eventual final release still compares as newer, for example by basing PV on the previous release and appending the pre-release identifier after a “+”.

3.3.19 Post-Installation Scripts¶

Post-installation scripts run immediately after installing a package on the target or during image creation when a package is included in an image. To add a post-installation script to a package, add a pkg_postinst:PACKAGENAME() function to the recipe file (.bb) and replace PACKAGENAME with the name of the package you want to attach to the postinst script. To apply the post-installation script to the main package for the recipe, which is usually what is required, specify ${PN} in place of PACKAGENAME.

A post-installation function has the following structure:

pkg_postinst:PACKAGENAME() {
    # Commands to carry out
}

The script defined in the post-installation function is called when the root filesystem is created. If the script succeeds, the package is marked as installed.

Note: Any RPM post-installation script that runs on the target should return a 0 exit code. RPM does not allow non-zero exit codes for these scripts; a non-zero exit code causes the RPM package manager to fail the package installation on the target.

Sometimes it is necessary for the execution of a post-installation script to be delayed until the first boot. For example, the script might need to be executed on the device itself. To delay script execution until boot time, you must explicitly mark post installs to defer to the target. You can use pkg_postinst_ontarget() or call postinst_intercept delay_to_first_boot from pkg_postinst(). Any failure of a pkg_postinst() script (including exit 1) triggers an error during the do_rootfs task.

If you have recipes that use the pkg_postinst function and they require the use of non-standard native tools that have dependencies during root filesystem construction, you need to use the PACKAGE_WRITE_DEPS variable in your recipe to list these tools. If you do not use this variable, the tools might be missing and execution of the post-installation script is deferred until first boot. Deferring the script to the first boot is undesirable and impossible for read-only root filesystems.

Note: There is equivalent support for pre-install, pre-uninstall, and post-uninstall scripts by way of pkg_preinst, pkg_prerm, and pkg_postrm, respectively. These scripts work in exactly the same way as pkg_postinst, except that they run at different times. Also, because of when they run, they cannot be run at image creation time the way pkg_postinst can.

3.3.20 Testing¶

The final step in writing a recipe is to be sure that the software you built runs correctly. To do so, include the resulting packages in an image and test them on the target; see the “Customizing Images” section for ways to add packages to an image.
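For example, a quick sketch using local.conf, where mypackage is a placeholder for your recipe's main package:

IMAGE_INSTALL:append = " mypackage"

$ bitbake core-image-minimal

You can then boot the resulting image, for example under QEMU, and exercise the software.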
3.3.21 Examples¶ To help summarize how to write a recipe, this section provides some examples given various scenarios: Recipes that use local files Using an Autotooled package Using a Makefile-based package Splitting an application into multiple packages Adding binaries to an image 3.3.21.1 Single .c File Package (Hello World!)¶} ${LDFLAGS}. 3.3.21.2 Autotooled Package¶-2.0-or-later" in the Yocto Project Overview and Concepts Manual. You can quickly create Autotool-based recipes in a manner similar to the previous example. 3.3.21.3 Makefile-Based Package¶ or PACKAGECONFIG_CONFARGS variables., lz4 is a makefile-based package: SUMMARY = "Extremely Fast Compression algorithm"." HOMEPAGE = " LICENSE = "BSD-2-Clause | GPL-2.0-only" LIC_FILES_CHKSUM = ";md5=ebc2ea4814a64de7708f1571904b32cc \;md5=b234ee4d69f5fce4486a80fdaf4a4263 \;md5=d57c0d21cb917fb4e0af2454aa48b956 \ " PE = "1" SRCREV = "d44371841a2f1728a3f36839fd4b7e872d0927d3" SRC_URI = "git://github.com/lz4/lz4.git;branch=release;protocol= \ \ " UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)" S = "${WORKDIR}/git" # Fixed in r118, which is larger than the current version. CVE_CHECK_IGNORE += "CVE-2014-4715" EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' CFLAGS='${CFLAGS}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir} BUILD_STATIC=no" do_install() { oe_runmake install } BBCLASSEXTEND = "native nativesdk" 3.3.21.4 Splitting an Application into Multiple Packages¶pm: X Pixmap extension library" LICENSE = "MIT" LIC_FILES_CHKSUM = ";md5=51f4270b012ecd4ab1a164f5f4ed6cf7" DEPENDS += "libxext libsm libxt". 3.3.21.5 Packaging Externally Produced Binaries¶ Sometimes, you need to add pre-compiled binaries to an image. For example, suppose that there are binaries for proprietary code,. Note Overview and Concepts If ${S}might contain a Makefile, or if you inherit some class that replaces do_configureand do_compilewith custom versions, then you can use the do_configure[noexec] = "1" do_compile[noexec] = "1" Unlike Deleting a Task, using the flag preserves the dependency chain from the do_fetch, do_unpack, and do_patch tasks to the do_install task. Make sure your do_installtask installs the binaries appropriately. Ensure that you set up FILES (usually FILES:${PN 3.3.22 Following Recipe Style Guidelines¶. 3.3.23 Recipe Syntax¶shell syntax, although access to OpenEmbedded variables and internal methods are also available. Here is an example function from the sedrecipe:and require) and export variables to the environment ( export). The following example shows the use of some of these keywords: export POSTCONF = "${STAGING_BINDIR}/postconf" inherit autoconf require otherfile.inc Comments (#): Any lines that begin with the hash character ( # ( VAR = "A really long \ line" Note You cannot have any characters including spaces or tabs after the slash character. Using Variables (${VARNAME}): Use the ${VARNAME}syntax to access the contents of a variable: SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz" Note It is important to understand that the value of a variable expressed in this form does not get substituted automatically. The expansion of these expressions happens on-demand later (e.g. usually when a function that makes reference to the variable executes). This behavior ensures that the values are most appropriate for the context in which they are finally used. 
On the rare occasion that you do need the variable expression to be expanded immediately, you can use the := operator instead of = when you make the assignment, but this is not generally needed. Quote All Assignments (“value”): Use double quotes around values in all variable assignments (e.g. "value"). Following is an example: VAR1 = "${OTHERVAR}" VAR2 = "The version is ${PV}" Conditional Assignment (?=): Conditional assignment is used to assign a value to a variable, but only when the variable is currently unset. Use the question mark followed by the equal sign ( local.conffile for variables that are allowed to come through from the external environment. Here is an example where VAR1is set to “New value” if it is currently empty. However, if VAR1has already been set, it remains unchanged: VAR1 ?= "New value" In this next example, VAR1is left with the value “Original value”: VAR1 = "Original value" VAR1 ?= "New value" Appending (+=): Use the plus character followed by the equals sign ( Note This operator adds a space between the existing content of the variable and the new content. Here is an example: SRC_URI += "" Prepending (=+): Use the equals sign followed by the plus character ( Note This operator adds a space between the new content and the existing content of the variable. Here is an example: VAR =+ "Starts" Appending (:append): Use the :appendoperator to append values to existing variables. This operator does not add any additional space. Also, the operator is applied after all the The following example shows the space being explicitly added to the start to ensure the appended value is not merged with the existing value: SRC_URI:append = "" You can also use the :appendoperator with overrides, which results in the actions only being performed for the specified target or machine: SRC_URI:append:sh4 = "" Prepending (:prepend): Use the :prependoperator to prepend values to existing variables. This operator does not add any additional space. Also, the operator is applied after all the The following example shows the space being explicitly added to the end to ensure the prepended value is not merged with the existing value: CFLAGS:prepend = "-I${S}/myincludes " You can also use the :prependoperator). You indicate Python code using the ${@python_code}syntax for the variable assignment: SRC_URI = "{@d.getVar('PV',1).replace('.', '')}.tgz Shell Function Syntax: Write shell functions as if you were writing a shell script when you describe a list of actions to take. You should ensure that your script works with a generic shand that it does not require any bashor other shell-specific functionality. The same considerations apply to various system utilities (e.g. sed, grep, awk, and so forth) that you might wish to use. If in doubt, you should check with multiple implementations - including those from BusyBox. 3.4 Adding a New Machine¶ Adding bitbake-layers Script” section in the Yocto Project Board Support Package (BSP) Developer’s Guide. 3.4.1/.. 3.5 Upgrading Recipes¶ Over time, upstream developers publish new versions for software built by layer recipes. It is recommended to keep recipes up-to-date with upstream version releases. While there are several methods to upgrade a recipe, you might consider checking on the upgrade status of a recipe first. You can do so using the devtool check-upgrade-status command. See the “Checking on the Upgrade Status of a Recipe” section in the Yocto Project Reference Manual for more information. 
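For example, to check a single recipe rather than every recipe in the enabled layers (nano is just an example name):

$ devtool check-upgrade-status nano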
The remainder of this section describes three ways you can upgrade a recipe. You can use the Automated Upgrade Helper (AUH) to set up automatic version upgrades. Alternatively, you can use devtool upgrade to set up semi-automatic version upgrades. Finally, you can manually upgrade a recipe by editing the recipe itself. 3.5.1 Using the Auto Upgrade Helper (AUH)¶ The AUH utility works in conjunction with the OpenEmbedded build system in order to automatically generate upgrades for recipes based on new versions being published upstream. Use AUH when you want to create a service that performs the upgrades automatically and optionally sends you an email with the results. AUH allows you to update several recipes with a single use. You can also optionally perform build and integration tests using images with the results saved to your hard drive and emails of results optionally sent to recipe maintainers. Finally, AUH creates Git commits with appropriate commit messages in the layer’s tree for the changes made to recipes. Note In some conditions, you should not use AUH to upgrade recipes and should instead use either devtool upgrade or upgrade your recipes manually: When AUH cannot complete the upgrade sequence. This situation usually results because custom patches carried by the recipe cannot be automatically rebased to the new version. In this case, devtool upgradeallows you to manually resolve conflicts. When for any reason you want fuller control over the upgrade process. For example, when you want special arrangements for testing. The following steps describe how to set up the AUH utility: Be Sure the Development Host is Set Up: You need to be sure that your development host is set up to use the Yocto Project. For information on how to set up your host, see the “Preparing the Build Host” section. Make Sure Git is Configured: The AUH utility requires Git to be configured because AUH uses Git to save upgrades. Thus, you must have Git user and email configured. The following command shows your configurations: $ git config --list If you do not have the user and email configured, you can use the following commands to do so: $ git config --global user.name some_name $ git config --global user.email [email protected] Clone the AUH Repository: To use AUH, you must clone the repository onto your development host. The following command uses Git to create a local copy of the repository on your system: $ git clone git://git.yoctoproject.org/auto-upgrade-helper Cloning into 'auto-upgrade-helper'... remote: Counting objects: 768, done. remote: Compressing objects: 100% (300/300), done. remote: Total 768 (delta 499), reused 703 (delta 434) Receiving objects: 100% (768/768), 191.47 KiB | 98.00 KiB/s, done. Resolving deltas: 100% (499/499), done. Checking connectivity... done. AUH is not part of the OpenEmbedded-Core (OE-Core) or Poky repositories. Create a Dedicated Build Directory: Run the oe-init-build-env script to create a fresh build directory that you use exclusively for running the AUH utility: $ cd poky $ source oe-init-build-env your_AUH_build_directory Re-using an existing build directory and its configurations is not recommended as existing settings could cause AUH to fail or behave undesirably. Make Configurations in Your Local Configuration File: Several settings are needed in the local.conffile in the build directory you just created for AUH. 
Make these following configurations: If you want to enable Build History, which is optional, you need the following lines in the conf/local.conffile: INHERIT =+ "buildhistory" BUILDHISTORY_COMMIT = "1" With this configuration and a successful upgrade, a build history “diff” file appears in the upgrade-helper/work/recipe/buildhistory-diff.txtfile found in your build directory. If you want to enable testing through the testimage class, which is optional, you need to have the following set in your conf/local.conffile: INHERIT += "testimage" Note If your distro does not enable by default ptest, which Poky does, you need the following in your local.conffile: DISTRO_FEATURES:append = " ptest" Optionally Start a vncserver: If you are running in a server without an X11 session, you need to start a vncserver: $ vncserver :1 $ export DISPLAY=:1 Create and Edit an AUH Configuration File: You need to have the upgrade-helper/upgrade-helper.confconfiguration file in your build directory. You can find a sample configuration file in the AUH source repository. Read through the sample file and make configurations as needed. For example, if you enabled build history in your local.confas described earlier, you must enable it in upgrade-helper.conf. Also, if you are using the default maintainers.incfile supplied with Poky and located in meta-yoctoand you do not set a “maintainers_whitelist” or “global_maintainer_override” in the upgrade-helper.confconfiguration, and you specify “-e all” on the AUH command-line, the utility automatically sends out emails to all the default maintainers. Please avoid this. This next set of examples describes how to use the AUH: Upgrading a Specific Recipe: To upgrade a specific recipe, use the following form: $ upgrade-helper.py recipe_name For example, this command upgrades the xmodmaprecipe: $ upgrade-helper.py xmodmap Upgrading a Specific Recipe to a Particular Version: To upgrade a specific recipe to a particular version, use the following form: $ upgrade-helper.py recipe_name -t version For example, this command upgrades the xmodmaprecipe to version 1.2.3: $ upgrade-helper.py xmodmap -t 1.2.3 Upgrading all Recipes to the Latest Versions and Suppressing Email Notifications: To upgrade all recipes to their most recent versions and suppress the email notifications, use the following command: $ upgrade-helper.py all Upgrading all Recipes to the Latest Versions and Send Email Notifications: To upgrade all recipes to their most recent versions and send email messages to maintainers for each attempted recipe as well as a status email, use the following command: $ upgrade-helper.py -e all Once you have run the AUH utility, you can find the results in the AUH build directory: ${BUILDDIR}/upgrade-helper/timestamp The AUH utility also creates recipe update commits from successful upgrade attempts in the layer tree. You can easily set up to run the AUH utility on a regular basis by using a cron job. See the weeklyjob.sh file distributed with the utility for an example. 3.5.2 Using devtool upgrade¶ As mentioned earlier, an alternative method for upgrading recipes to newer versions is to use devtool upgrade. You can read about devtool upgrade in general in the “Use devtool upgrade to Create a Version of the Recipe that Supports a Newer Version of the Software” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) Manual. 
To see all the command-line options available with devtool upgrade, use the following help command: $ devtool upgrade -h If you want to find out what version a recipe is currently at upstream without any attempt to upgrade your local version of the recipe, you can use the following command: $ devtool latest-version recipe_name As mentioned in the previous section describing AUH, devtool upgrade works in a less-automated manner than AUH. Specifically, devtool upgrade only works on a single recipe that you name on the command line, cannot perform build and integration testing using images, and does not automatically generate commits for changes in the source tree. Despite all these “limitations”, devtool upgrade updates the recipe file to the new upstream version and attempts to rebase custom patches contained by the recipe as needed. Note AUH uses much of devtool upgrade behind the scenes making AUH somewhat of a “wrapper” application for devtool upgrade. A typical scenario involves having used Git to clone an upstream repository that you use during build operations. Because you have built the recipe in the past, the layer is likely added to your configuration already. If for some reason, the layer is not added, you could add it easily using the “bitbake-layers” script. For example, suppose you use the nano.bb recipe from the meta-oe layer in the meta-openembedded repository. For this example, assume that the layer has been cloned into following area: /home/scottrif/meta-openembedded The following command from your Build Directory adds the layer to your build configuration (i.e. ${BUILDDIR}/conf/bblayers.conf): $ bitbake-layers add-layer /home/scottrif/meta-openembedded/meta-oe NOTE: Starting bitbake server... Parsing recipes: 100% |##########################################| Time: 0:00:55 Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors. Removing 12 recipes from the x86_64 sysroot: 100% |##############| Time: 0:00:00 Removing 1 recipes from the x86_64_i586 sysroot: 100% |##########| Time: 0:00:00 Removing 5 recipes from the i586 sysroot: 100% |#################| Time: 0:00:00 Removing 5 recipes from the qemux86 sysroot: 100% |##############| Time: 0:00:00 For this example, assume that the nano.bb recipe that is upstream has a 2.9.3 version number. However, the version in the local repository is 2.7.4. The following command from your build directory automatically upgrades the recipe for you: Note Using the -V option is not necessary. Omitting the version number causes devtool upgrade to upgrade the recipe to the most recent version. $ devtool upgrade nano -V 2.9.3 NOTE: Starting bitbake server... NOTE: Creating workspace layer in /home/scottrif/poky/build/workspace Parsing recipes: 100% |##########################################| Time: 0:00:46 Parsing of 1431 .bb files complete (0 cached, 1431 parsed). 2040 targets, 56 skipped, 0 masked, 0 errors. NOTE: Extracting current version source... NOTE: Resolving any missing task queue dependencies . . . NOTE: Executing SetScene Tasks NOTE: Executing RunQueue Tasks NOTE: Tasks Summary: Attempted 74 tasks of which 72 didn't need to be rerun and all succeeded. 
Adding changed files: 100% |#####################################| Time: 0:00:00 NOTE: Upgraded source extracted to /home/scottrif/poky/build/workspace/sources/nano NOTE: New recipe is /home/scottrif/poky/build/workspace/recipes/nano/nano_2.9.3.bb Continuing with this example, you can use devtool build to build the newly upgraded recipe: $ devtool build nano NOTE: Starting bitbake server... Loading cache: 100% |################################################################################################| Time: 0:00:01 Loaded 2040 entries from dependency cache. Parsing recipes: 100% |##############################################################################################| Time: 0:00:00 Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors. NOTE: Resolving any missing task queue dependencies . . . NOTE: Executing SetScene Tasks NOTE: Executing RunQueue Tasks NOTE: nano: compiling from external source tree /home/scottrif/poky/build/workspace/sources/nano NOTE: Tasks Summary: Attempted 520 tasks of which 304 didn't need to be rerun and all succeeded. Within the devtool upgrade workflow, you can deploy and test your rebuilt software. For this example, however, running devtool finish cleans up the workspace once the source in your workspace is clean. This usually means using Git to stage and submit commits for the changes generated by the upgrade process. Once the tree is clean, you can clean things up in this example with the following command from the ${BUILDDIR}/workspace/sources/nano directory: $ devtool finish nano meta-oe NOTE: Starting bitbake server... Loading cache: 100% |################################################################################################| Time: 0:00:00 Loaded 2040 entries from dependency cache. Parsing recipes: 100% |##############################################################################################| Time: 0:00:01 Parsing of 1432 .bb files complete (1431 cached, 1 parsed). 2041 targets, 56 skipped, 0 masked, 0 errors. NOTE: Adding new patch 0001-nano.bb-Stuff-I-changed-when-upgrading-nano.bb.patch NOTE: Updating recipe nano_2.9.3.bb NOTE: Removing file /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano/nano_2.7.4.bb NOTE: Moving recipe file to /home/scottrif/meta-openembedded/meta-oe/recipes-support/nano NOTE: Leaving source tree /home/scottrif/poky/build/workspace/sources/nano as-is; if you no longer need it then please delete it manually Using the devtool finish command cleans up the workspace and creates a patch file based on your commits. The tool puts all patch files back into the source directory in a sub-directory named nano in this case. 3.5.3 Manually Upgrading a Recipe¶ If for some reason you choose not to upgrade recipes using Using the Auto Upgrade Helper (AUH) or by Using devtool upgrade, you can manually edit the recipe files to upgrade the versions. Note Manually updating multiple recipes scales poorly and involves many steps. The recommendation to upgrade recipe versions is through AUH or devtool upgrade, both of which automate some steps and provide guidance for others needed for the manual process. To manually upgrade recipe versions, follow these general steps: Change the Version: Rename the recipe such that the version (i.e. the PV part of the recipe name) changes appropriately. If the version is not part of the recipe name, change the value as it is set for PV within the recipe itself. 
Update SRCREV if Needed: If the source code your recipe builds is fetched from Git or some other version control system, update SRCREV to point to the commit hash that matches the new version. Build the Software: Try to build the recipe using BitBake. Typical build failures include the following: License statements were updated for the new version. For this case, you need to review any changes to the license and update the values of LICENSE and LIC_FILES_CHKSUM as needed. Note License changes are often inconsequential. For example, the license text’s copyright year might have changed. Custom patches carried by the older version of the recipe might fail to apply to the new version. For these cases, you need to review the failures. Patches might not be necessary for the new version of the software if the upgraded version has fixed those issues. If a patch is necessary and failing, you need to rebase it into the new version. Optionally Attempt to Build for Several Architectures: Once you successfully build the new software for a given architecture, you could test the build for other architectures by changing the MACHINE variable and rebuilding the software. This optional step is especially important if the recipe is to be released publicly. Check the Upstream Change Log or Release Notes: Checking both these reveals if there are new features that could break backwards-compatibility. If so, you need to take steps to mitigate or eliminate that situation. Optionally Create a Bootable Image and Test: If you want, you can test the new software by booting it onto actual hardware. Create a Commit with the Change in the Layer Repository: After all builds work and any testing is successful, you can create commits for any changes in the layer holding your upgraded recipe. 3.6 Finding Temporary Source Code¶. Note The BP represents. - EXTENDPE: The epoch - (if PE is not specified, which is usually the case for most recipes, then EXTENDPE is blank). - - 3.7 Using Quilt in Your Workflow¶. Note With regard to preserving changes to source files, if you clean a recipe or have rm_work enabled, the devtool workflowtask as shown in the following example: $ bitbake -c compile -f package The -for --forceoption forces the specified task to execute. If you find problems with your code, you can just keep editing and re-testing iteratively until things work as expected. Note All the modifications you make to the temporary source code disappear once you run the do_cleanor do_cleanalltasks using BitBake (i.e. bitbake -c clean packageand bitbake -c cleanall package). Modifications will also disappear if you use the rm_workfeature as described in the “Conserving Disk Space During Builds” section. Generate the Patch: Once your changes work as expected, you need to use Quilt to generate the final patch that contains all your modifications. $ quilt refresh At this point, the my_changes.patchfile has all your edits made to the file1.c, file2.c, and file3.cfiles. += "" 3.8 Using a Development Shell¶ When debugging certain commands or even when just editing packages, devshell can be a useful tool. When you invoke devshell, all tasks up to and including do_patch are run for the specified target. Then, a new terminal is opened and you are placed in configure and make. The commands execute just as if the OpenEmbedded build system were executing them. Consequently, working this way can be helpful when debugging a build or preparing software to be used with the OpenEmbedded build system. ${S }, the source directory. 
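To open the shell for a particular recipe, invoke the devshell task on it directly; for example (the target name is just an example):

$ bitbake matchbox-desktop -c devshell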
In the new terminal, all the OpenEmbedded build-related environment variables are still defined so you can use commands such asvariable includes the cross-toolchain. The pkgconfigvariables find the correct .pcfiles. The configurecommand). To manually run a specific task using devshell, run the corresponding run.* script in the }/temp directory (e.g., run.do_configure.pid). If a task’s script does not exist, which would be the case if the task was skipped by way of the sstate cache, you can create the task by first running it outside of the devshell: ${WORKDIR $ bitbake -c task Note Execution of a task’s run.*script and BitBake’s execution of a task are identical. In other words, running the script re-runs the task just as it would be run using the bitbake -ccommand. Any run.*file that does not have a .pidextension, exit the shell or close the terminal window. Note It is worth remembering that when using devshellyou need to use the full compiler name such as arm-poky-linux-gnueabi-gccinstead of just using gcc. The same applies to other applications such as binutils, libtooland so forth. BitBake sets up environment variables such as CC to assist applications, such as maketo find the correct tools. It is also worth noting that devshellstill works over X11 forwarding and similar situations. 3.9 Using a Python Development Shell¶ Similar to working within a development shell as described in the previous section, you can also spawn and work within an interactive Python development shell. When debugging certain commands or even when just editing packages, pydevshell can be a useful tool. When you invoke the pydevshell task, all tasks up to and including do_patch are run for the specified target. Then a new terminal is opened. Additionally, key Python objects and code are available in the same way they are to BitBake tasks, in particular, the data store ‘d’. So, commands such as the following are useful when exploring the data store and running functions: pydevshell> d.getVar("STAGING_DIR") '/media/build1/poky/build/tmp/sysroots' pydevshell> d.getVar("STAGING_DIR", False) '${TMPDIR}/sysroots' pydevshell> d.setVar("FOO", "bar") pydevshell> d.getVar("FOO") 'bar' pydevshell> d.delVar("FOO") pydevshell> d.getVar("FOO") pydevshell on a target named matchbox-desktop: $ bitbake matchbox-desktop -c pydevshell This command spawns a terminal and places you in an interactive Python interpreter within the OpenEmbedded build environment. The OE_TERMINAL variable controls what type of shell is opened. When you are finished using pydevshell, you can exit the shell either by using Ctrl+d or closing the terminal window. 3.10 Building¶ This section describes various build procedures, such as the steps needed for a simple build, building a target for multiple configurations, generating an image for more than one machine, and so forth. 3.10.1 Building a Simple Image¶ In the development environment, you need to build an image whenever you change hardware support, add or change system libraries, or add or change services that have dependencies. There are several methods that allow you to build an image within the Yocto Project. This section presents the basic steps you need to build a simple image using BitBake from a build host running Linux. Note For information on how to build an image using Toaster, see the Toaster User Manual. 
For information on how to use devtoolto build images, see the “Using devtool in Your SDK Workflow” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual. For a quick example on how to build an image using the OpenEmbedded build system, see the Yocto Project Quick Build document. The build process creates an entire Linux distribution from source and places it in your Build Directory under tmp/deploy/images. For detailed information on the build process using BitBake, see the “Images” section in the Yocto Project Overview and Concepts Manual. The following figure and list overviews the build process: Set up Your Host Development System to Support Development Using the Yocto Project: See the “Setting Up to Use the Yocto Project” section for options on how to get a build host ready to use the Yocto Project. Initialize the Build Environment: Initialize the build environment by sourcing the build environment script (i.e. oe-init-build-env): $ source oe-init-build-env [build_dir] When you use the initialization script, the OpenEmbedded build system uses buildas the default Build Directory in your current work directory. You can use a build_dir argument with the script to specify a different build directory. Note A common practice is to use a different Build Directory for different targets; for example, ~/build/x86for a qemux86target, and ~/build/armfor a qemuarmtarget. In any event, it’s typically cleaner to locate the build directory somewhere outside of your source directory. Make Sure Your local.confFile is Correct: Ensure the conf/local.confconfigurationcommand: $ bitbake target Note For information on BitBake, see the BitBake User Manual. The target is the name of the recipe you want to build. Common targets are the images in meta/recipes-core/images, meta/recipes-sato/images, and so forth all found in the Source Directory. Alternatively, the target can be the name of a recipe for a specific piece of software such as BusyBox. For more details about the images the OpenEmbedded build system supports, see the “Images” chapter in the Yocto Project Reference Manual. As an example, the following command builds the core-image-minimalimage: $ bitbake core-image-minimal Once an image has been built, it often needs to be installed. The images and kernels built by the OpenEmbedded build system are placed in the Build Directory in tmp/deploy/images. For information on how to run pre-built images such as qemux86and qemuarm, see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual. For information about how to install these images, see the documentation for your particular board or machine. 3.10.2 Building Images for Multiple Targets Using Multiple Configurations¶ You can use a single bitbake command to build multiple images or packages for different targets where each image or package requires a different configuration (multiple configuration builds). The builds, in this scenario, are sometimes referred to as “multiconfigs”, and this section uses that term throughout. This section describes how to set up for multiple configuration builds and how to account for cross-build dependencies between the multiconfigs. 3.10.2.1 Setting Up and Running a Multiple Configuration Build¶ To accomplish a multiple configuration build, you must define each target’s configuration separately using a parallel configuration file in the Build Directory, and you must follow a required file hierarchy. 
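As a sketch of that hierarchy for the two multiconfigs used in the following example, the per-target configuration files live in a conf/multiconfig sub-directory of the build directory:

build/conf/multiconfig/x86.conf
build/conf/multiconfig/arm.conf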
Additionally, you must enable the multiple configuration builds in your local.conf file. Follow these steps to set up and execute multiple configuration builds: Create Separate Configuration Files: You need to create a single configuration file for each build target (each multiconfig). Minimally, each configuration file must define the machine and the temporary directory BitBake uses for the build. Suggested practice dictates that you do not overlap the temporary directories used during the builds. However, it is possible that you can share the temporary directory (TMPDIR). For example, consider a scenario with two different multiconfigs for the same MACHINE: “qemux86” built for two distributions such as “poky” and “poky-lsb”. In this case, you might want to use the same TMPDIR. Here is an example showing the minimal statements needed in a configuration file for a “qemux86” target whose temporary build directory is tmpmultix86: MACHINE = "qemux86" TMPDIR = "${TOPDIR}/tmpmultix86" The location for these multiconfig configuration files is specific. They must reside in the current build directory in a sub-directory of confnamed multiconfig. Following is an example that defines two configuration files for the “x86” and “arm” multiconfigs: The reason for this required file hierarchy is because the BBPATH variable is not constructed until the layers are parsed. Consequently, using the configuration file as a pre-configuration file is not possible unless it is located in the current working directory. Add the BitBake Multi-configuration Variable to the Local Configuration File: Use the BBMULTICONFIG variable in your conf/local.confconfiguration file to specify each multiconfig. Continuing with the example from the previous figure, the BBMULTICONFIG variable needs to enable two multiconfigs: “x86” and “arm” by specifying each configuration file: BBMULTICONFIG = "x86 arm" Note A “default” configuration already exists by definition. This configuration is named: “” (i.e. empty string) and is defined by the variables coming from your local.conffile. Consequently, the previous example actually adds two additional configurations to your build: “arm” and “x86” along with “”. Launch BitBake: Use the following BitBake command form to launch the multiple configuration build: $ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ] For the example in this section, the following command applies: $ bitbake mc:x86:core-image-minimal mc:arm:core-image-sato mc::core-image-base The previous BitBake command builds a core-image-minimalimage that is configured through the x86.confconfiguration file, a core-image-satoimage that is configured through the arm.confconfiguration file and a core-image-basethat is configured through your local.confconfiguration file. Note Support for multiple configuration builds in the Yocto Project 4.0.999 (Kirkstone) Release does not include Shared State (sstate) optimizations. Consequently, if a build uses the same object twice in, for example, two different TMPDIR directories, the build either loads from an existing sstate cache for that build at the start or builds the object fresh. 3.10.2.2 Enabling Multiple Configuration Build Dependencies¶ Sometimes dependencies can exist between targets (multiconfigs) in a multiple configuration build. For example, suppose that in order to build a core-image-sato image for an “x86” multiconfig, the root filesystem of an “arm” multiconfig must exist. 
This dependency is essentially that the do_image task in the core-image-sato recipe depends on the completion of the do_rootfs task of the core-image-minimal recipe. the example scenario from the first paragraph of this section. The following statement needs to be added to the recipe that builds the core-image-sato image: do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs" In this example, the from_multiconfig is “x86”. The to_multiconfig is “arm”. The task on which the do_image task in the recipe depends is the do_rootfs task from the core-image-minimal recipe associated with the “arm” multiconfig. Once you set up this dependency, you can build the “x86” multiconfig using a BitBake command as follows: $ bitbake mc:x86:core-image-sato This command executes all the tasks needed to create the core-image-sato image for the “x86” multiconfig. Because of the dependency, BitBake also executes through the do_rootfs task for the “arm” multiconfig build. Having a recipe depend on the root filesystem of another build might not seem that useful. Consider this change to the statement in the core-image-sato recipe: do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_image" In this case, BitBake must create the core-image-minimal image for the “arm” build since the “x86” build depends on it. Because “x86” and “arm” are enabled for multiple configuration builds and have separate configuration files, BitBake places the artifacts for each build in the respective temporary build directories (i.e. TMPDIR). 3.10.3 Building an Initial RAM Filesystem (initramfs) Image¶ An initial RAM filesystem (initramfs) image provides a temporary root filesystem used for early system initialization (e.g. loading of modules needed to locate and mount the “real” root filesystem). Note The initramfs image is the successor of initial RAM disk (initrd). It is a “copy in and out” (cpio) archive of the initial filesystem that gets loaded into memory during the Linux startup process. Because Linux uses the contents of the archive during initialization, the initramfs image needs to contain all of the device drivers and tools needed to mount the final root filesystem. Follow these steps to create an initramfs image: Create the initramfs Image Recipe: You can reference the core-image-minimal-initramfs.bbrecipe found in the meta/recipes-coredirectoryconfiguration file and set the INITRAMFS_IMAGE variable in the recipe that builds the kernel image. Note It is recommended that you bundle the initramfs image with the kernel image to avoid circular dependencies between the kernel recipe and the initramfs recipe should the initramfs image include kernel modules. Setting the INITRAMFS_IMAGE_BUNDLE flag causes the initramfs image to be unpacked into the ${B}/usr/directory. The unpacked initramfs image is then passed to the kernel’s Makefileusing the CONFIG_INITRAMFS_SOURCE variable, allowing the initramfs image to be built into the kernel normally. Note Bundling the initramfs with the kernel conflates the code in the initramfs with the GPLv2 licensed Linux kernel binary. Thus only GPLv2 compatible software may be part of a bundled initramfs. Note If you choose to not bundle the initramfs image with the kernel image, you are essentially using an Initial RAM Disk (initrd). Creating an initrd is handled primarily through the INITRD_IMAGE, INITRD_LIVE, and INITRD_IMAGE_LIVEvariables. For more information, see the image-live.bbclass file.. 
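A minimal sketch of the bundled configuration described above, placed in local.conf or a machine configuration file (the image name is the reference initramfs recipe mentioned earlier):

INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"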
3.10.3.1 Bundling an Initramfs Image From a Separate Multiconfig¶ There may be a case where we want to build an initramfs image which does not inherit the same distro policy as our main image, for example, we may want our main image to use TCLIBC="glibc", but to use TCLIBC="musl" in our initramfs image to keep a smaller footprint. However, by performing the steps mentioned above the initramfs image will inherit TCLIBC="glibc" without allowing us to override it. To achieve this, you need to perform some additional steps: Create a multiconfig for your initramfs image: You can perform the steps on “Building Images for Multiple Targets Using Multiple Configurations” to create a separate multiconfig. For the sake of simplicity let’s assume such multiconfig is called: initramfscfg.confand contains the variables: TMPDIR="${TOPDIR}/tmp-initramfscfg" TCLIBC="musl" Set additional initramfs variables on your main configuration: Additionally, on your main configuration ( local.conf) you need to set the variables: INITRAMFS_MULTICONFIG = "initramfscfg" INITRAMFS_DEPLOY_DIR_IMAGE = "${TOPDIR}/tmp-initramfscfg/deploy/images/${MACHINE}" The variables INITRAMFS_MULTICONFIG and INITRAMFS_DEPLOY_DIR_IMAGE are used to create a multiconfig dependency from the kernel to the INITRAMFS_IMAGE to be built coming from the initramfscfgmulticonfig, and to let the buildsystem know where the INITRAMFS_IMAGE will be located. Building a system with such configuration will build the kernel using the main configuration but the do_bundle_initramfstask will grab the selected INITRAMFS_IMAGE from INITRAMFS_DEPLOY_DIR_IMAGE instead, resulting in a musl based initramfs image bundled in the kernel but a glibc based main image. The same is applicable to avoid inheriting DISTRO_FEATURES on INITRAMFS_IMAGE or to build a different DISTRO for it such as poky-tiny. 3.10.4 Building a Tiny System¶. 3.10.4.1 Tiny System Overview¶ The following list presents the overall steps you need to consider and perform to create distributions with smaller root filesystems, achieve faster boot times, maintain your critical functionality, and avoid initial RAM disks: 3.10.4.2 Goals and Guiding Principles¶. 3.10.4.3 Understand What Contributes to Your Image Size¶. Note To use poky-tiny in your build, set the DISTRO variable in your local.conf filescript is part of the Linux Yocto kernel Git repositories (i.e. linux-yocto-3.14, linux-yocto-3.10, linux-yocto-3.8, and so forth) in the scripts/kconfigdirectory. For more information on configuration fragments, see the “Creating Configuration Fragments” section in the Yocto Project Linux Kernel Development Manual. bitbake -u taskexp -g bitbake_target: Using the BitBake command with these options brings up a Dependency Explorer from which you can view file dependencies. Understanding these dependencies allows you to make informed decisions when cutting out various pieces of the kernel and root filesystem. 3.10.4.4 Trim the Root Filesystem¶ taskexp . Note After each round of elimination, you need to rebuild your system and then use the tools to see the effects of your reductions. 3.10.4.5 Trim the Kernel¶ The kernel is built by including policies for hardware-independent aspects. What subsystems do you enable? For what architecture are you building? Which drivers do you build by default? Note You can modify the kernel source if you want to help with boot time.. 3.10.4.6 Remove Package Management Requirements¶. 3.10.4.7 Look for Other Ways to Minimize Size¶cfeaturescfeatures. 3.10.4.8 Iterate on the Process¶. 
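Each iteration is simply a rebuild with the adjusted configuration so you can measure the effect of a change; for example, when using the poky-tiny reference distribution mentioned earlier (the image name is just an example):

DISTRO = "poky-tiny"

$ bitbake core-image-minimal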
3.10.5 Building Images for More than One Machine¶set of packages could run on armv6and armv7processors in most cases). Similarly, i486binaries could work on i586 there are cases where injecting another level of package architecture beyond the three higher levels noted earlier can be useful. For example, consider how NXP (formerly Freescale) allows for the easy reuse of binary packages in their layer meta-freescale. In this example, the fsl-dynamic-packagearch class shares GPU packages for i.MX53 boards because all boards share the AMD GPU. The i.MX6-based boards can do the same because all boards share the Vivante GPU. This class inspects the BitBake datastore to identify if the package provides or depends on one of the sub-architecture values. If so, the class sets the PACKAGE_ARCH value based on the MACHINE_SUBARCHvalue. If the package does not provide or depend on one of the sub-architecture values but it matches a value in the machine-specific filter, it sets MACHINE_ARCH. This behavior reduces the number of packages built and saves build time by reusing binaries._CONSOLES, XSERVER, MACHINE_FEATURES, and so forth in code that is supposed to only be tune-specific or when the recipe depends (DEPENDS, RDEPENDS, RRECOMMENDS, RSUGGESTS, and so forth) on some other recipe that already has PACKAGE_ARCH defined as “${MACHINE_ARCH}”. Note Patches to fix any issues identified are most welcome as these issues occasionally do occur. For such cases, you can use some tools to help you sort out the situation: state-diff-machines.sh``*:* You can find this tool in the ``scriptsdirectoryigsover the matches to determine the stamps and delta where these two stamp trees diverge. 3.10.6 Building Software from an External Source¶ By default, the OpenEmbedded build system uses the Build Directory when building source code. The build process involves fetching the source files, unpacking them, and then patching them if necessary before the build takes place. There are situations" Note In order for these settings to take effect, you must globally or locally inherit the externalsrc class." 3.10.7 Replicating a Build Offline¶ It can be useful to take a “snapshot” of upstream sources used in a build and then use that “snapshot” later to replicate the build offline. To do so, you need to first prepare and populate your downloads directory your “snapshot” of files. Once your downloads directory is ready, you can use it at any time and from any machine to replicate your build. Follow these steps to populate your Downloads directory: Create a Clean Downloads Directory: Start with an empty downloads directory (DL_DIR). You start with an empty downloads directory by either removing the files in the existing directory or by setting DL_DIR to point to either an empty location or one that does not yet exist. Generate Tarballs of the Source Git Repositories: Edit your local.confconfiguration file as follows: DL_DIR = "/home/your-download-dir/" BB_GENERATE_MIRROR_TARBALLS = "1" During the fetch process in the next step, BitBake gathers the source files and creates tarballs in the directory pointed to by DL_DIR. See the BB_GENERATE_MIRROR_TARBALLS variable for more information. Populate Your Downloads Directory Without Building: Use BitBake to fetch your sources but inhibit the build: $ bitbake target --runonly=fetch The downloads directory (i.e. ${DL_DIR}) now has a “snapshot” of the source files in the form of tarballs, which can be used for the build. 
Optionally Remove Any Git or other SCM Subdirectories From the Downloads Directory: If you want, you can clean up your downloads directory by removing any Git or other Source Control Management (SCM) subdirectories such as ${DL_DIR}/git2/*. The tarballs already contain these subdirectories. Once your downloads directory has everything it needs regarding source files, you can create your “own-mirror” and build your target. Understand that you can use the files to build the target offline from any machine and at any time. Follow these steps to build your target using the files in the downloads directory: Using Local Files Only: Inside your local.conffile, add the SOURCE_MIRROR_URL variable, inherit the own-mirrors class, and use the BB_NO_NETWORK variable to your local.conf. SOURCE_MIRROR_URL ?= "" INHERIT += "own-mirrors" BB_NO_NETWORK = "1" The SOURCE_MIRROR_URL and own-mirrors class set up the system to use the downloads directory as your “own mirror”. Using the BB_NO_NETWORK variable makes sure that BitBake’s fetching process in step 3 stays local, which means files from your “own-mirror” are used. Start With a Clean Build: You can start with a clean build by removing the Build Your Target: Use BitBake to build your target: $ bitbake target The build completes using the known local “snapshot” of source files from your mirror. The resulting tarballs for your “snapshot” of source files are in the downloads directory. Note The offline build does not work if recipes attempt to find the latest version of software by setting SRCREV to SRCREV = "${AUTOREV}" When a recipe sets SRCREV to If you do have recipes that use AUTOREV, you can take steps to still use the recipes in an offline build. Do the following: Use a configuration generated by enabling build history. Use the buildhistory-collect-srcrevscommand to collect the stored SRCREV values from the build’s history. For more information on collecting these values, see the “Build History Package Information” section. Once you have the correct source revisions, you can modify those recipes to set SRCREV to specific versions of the software. 3.11 Speeding Up a Build¶ Build time can be an issue. By default, the build system uses simple controls to try and maximize build efficiency. In general, the default settings for all the following variables result in the most efficient build times when dealing with single socket systems (i.e. a single CPU). If you have multiple CPUs, you might try increasing the default values to gain more speed. See the descriptions in the glossary for each variable for more information: BB_NUMBER_THREADS: The maximum number of threads BitBake simultaneously executes. BB_NUMBER_PARSE_THREADS: The number of threads BitBake uses during parsing. PARALLEL_MAKE: Extra options passed to the makecommand during the do_compile task in order to specify parallel compilation on the local build host. PARALLEL_MAKEINST: Extra options passed to the makecommand during the do_install task in order to specify parallel installation on the local build host. As mentioned, these variables all scale to the number of processor cores available on the build system. For single socket systems, this auto-scaling ensures that the build system fundamentally takes advantage of potential parallel operations during the build based on the build machine’s capabilities. Following are additional factors that can affect build speed: File system type: The file system type that the build is being performed on can also influence performance. 
Using ext4is recommended as compared to ext2and ext3due to ext4improved features such as extents. Disabling the updating of access time using noatime: The noatimemount option prevents the build system from updating file and directory access times. Setting a longer commit: Using the “commit=” mount option increases the interval in seconds between disk cache writes. Changing this interval from the five second default to something longer increases the risk of data loss but decreases the need to write to the disk, thus increasing the build performance. Choosing the packaging backend: Of the available packaging backends, IPK is the fastest. Additionally, selecting a singular packaging backend also helps. Using tmpfsfor TMPDIR as a temporary file system: While this can help speed up the build, the benefits are limited due to the compiler using -pipe. The build system goes to some lengths to avoid sync()calls into the file system on the principle that if there was a significant failure, the Build Directory contents could easily be rebuilt. Inheriting the rm_work class: Inheriting this class has shown to speed up builds due to significantly lower amounts of data stored in the data cache as well as on disk. Inheriting this class also makes cleanup of TMPDIR faster, at the expense of being easily able to dive into the source code. File system maintainers have recommended that the fastest way to clean up large numbers of files is to reformat partitions rather than delete files due to the linear nature of partitions. This, of course, assumes you structure the disk partitions and file systems in a way that this is practical. Aside from the previous list, you should keep some trade offs in mind that can help you speed up the build: Remove items from DISTRO_FEATURES that you might not need. Exclude debug symbols and other debug information: If you do not need these symbols and other debug information, disabling the *-dbgpackage generation can speed up the build. You can disable this generation by setting the INHIBIT_PACKAGE_DEBUG_SPLIT variable to “1”. Disable static library generation for recipes derived from autoconfor libtool: Following is an example showing how to disable static libraries and still provide an override to handle exceptions: STATICLIBCONF = "--disable-static" STATICLIBCONF:sqlite3-native = "" EXTRA_OECONF += "${STATICLIBCONF}" Note Some recipes need static libraries in order to work correctly (e.g. pseudo-nativeneeds sqlite3-native). Overrides, as in the previous example, account for these kinds of exceptions. Some packages have packaging code that assumes the presence of the static libraries. If so, you might need to exclude them as well. 3.12 Working With Libraries¶ 3.12.1 Including Static Library Files¶. Note Some previously released versions of the Yocto Project defined the static library files through ${PN}-dev. Following is part of the BitBake configuration file, where you can see how the static library files are defined: PACKAGE_BEFORE_PN ?= "" PACKAGES = "${PN}-src $ ${prefix}/lib/udev \ ${base_libdir}/udev ${libdir}/udev \ $ \ ${libdir}/cmake ${datadir}/cmake"})" 3.12.2 Combining Multiple Versions of Library Files into One Image¶. There are several examples in the meta-skeleton layer found in the Source Directory: conf/multilib-example.conf configuration file. conf/multilib-example2.conf configuration file. recipes-multilib/images/core-image-multilib-example.bb recipe 3.12.2.1 Preparing to Use Multilib¶ User-specific requirements drive the Multilib feature. 
Consequently, there is no one “out-of-the-box” configuration that would. 3.12.2.2 Using Multilib¶:append = "lib32-glib-2.0"-glib-2.0-glib-2.0 3.12.2.3 Additional Implementation Details¶ There are generic implementation details as well as details that are specific to package management systems.mult. Here are the implementation details for the RPM Package Management System: A unique architecture is defined for the Multilib packages, along with creating a unique deploy folder under tmp/deploy/rpmin the Build Directory. For example, consider lib32in a qemux86-64image.system resolves to something similar to bash-4.1-r2.x86_64.rpm. Here are the implementation details for the IPK Package Management System: The ${MLPREFIX}is not stripped from ${PN}during IPK packaging. The naming for a normal RPM package and a Multilib IPK package in a qemux86-64system resolves to something like bash_4.1-r2.x86_64.ipk. 3.12.3 Installing Multiple Versions of the Same Library¶ There are be situations where you need to install and use multiple versions of the same library on the same system at the same time. This almost always happens" 3.13 Working with Pre-Built Libraries¶ 3.13.1 Introduction¶ Some library vendors do not release source code for their software but do release pre-built binaries. When shared libraries are built, they should be versioned (see this article for some background), but sometimes this is not done. To summarize, a versioned library must meet two conditions: The filename must have the version appended, for example: libfoo.so.1.2.3. The library must have the ELF tag SONAMEset to the major version of the library, for example: libfoo.so.1. You can check this by running readelf -d filename | grep SONAME. This section shows how to deal with both versioned and unversioned pre-built libraries. 3.13.2 Versioned Libraries¶ In this example we work with pre-built libraries for the FT4222H USB I/O chip. Libraries are built for several target architecture variants and packaged in an archive as follows: ├── build-arm-hisiv300 │ └── libft4222.so.1.4.4.44 ├── build-arm-v5-sf │ └── libft4222.so.1.4.4.44 ├── build-arm-v6-hf │ └── libft4222.so.1.4.4.44 ├── build-arm-v7-hf │ └── libft4222.so.1.4.4.44 ├── build-arm-v8 │ └── libft4222.so.1.4.4.44 ├── build-i386 │ └── libft4222.so.1.4.4.44 ├── build-i486 │ └── libft4222.so.1.4.4.44 ├── build-mips-eglibc-hf │ └── libft4222.so.1.4.4.44 ├── build-pentium │ └── libft4222.so.1.4.4.44 ├── build-x86_64 │ └── libft4222.so.1.4.4.44 ├── examples │ ├── get-version.c │ ├── i2cm.c │ ├── spim.c │ └── spis.c ├── ftd2xx.h ├── install4222.sh ├── libft4222.h ├── ReadMe.txt └── WinTypes.h To write a recipe to use such a library in your system: The vendor will probably have a proprietary licence, so set LICENSE_FLAGS in your recipe. The vendor provides a tarball containing libraries so set SRC_URI appropriately. Set COMPATIBLE_HOST so that the recipe cannot be used with an unsupported architecture. In the following example, we only support the 32 and 64 bit variants of the x86architecture. As the vendor provides versioned libraries, we can use oe_soinstallfrom utils.bbclass to install the shared library and create symbolic links. If the vendor does not do this, we need to follow the non-versioned library guidelines in the next section. As the vendor likely used LDFLAGS different from those in your Yocto Project build, disable the corresponding checks by adding ldflagsto INSANE_SKIP. The vendor will typically ship release builds without debugging symbols. 
Avoid errors by preventing the packaging task from stripping out the symbols and adding them to a separate debug package. This is done by setting the INHIBIT_flags shown below. The complete recipe would look like this: SUMMARY = "FTDI FT4222H Library" SECTION = "libs" LICENSE_FLAGS = "ftdi" LICENSE = "CLOSED" COMPATIBLE_HOST = "(i.86|x86_64).*-linux" # Sources available in a .tgz file in .zip archive # at # Found on # Since dealing with this particular type of archive is out of topic here, # we use a local link. SRC_URI = "{PV}.tgz" S = "${WORKDIR}" ARCH_DIR:x86-64 = "build-x86_64" ARCH_DIR:i586 = "build-i386" ARCH_DIR:i686 = "build-i386" INSANE_SKIP:${PN} = "ldflags" INHIBIT_PACKAGE_STRIP = "1" INHIBIT_SYSROOT_STRIP = "1" INHIBIT_PACKAGE_DEBUG_SPLIT = "1" do_install () { install -m 0755 -d ${D}${libdir} oe_soinstall ${S}/${ARCH_DIR}/libft4222.so.${PV} ${D}${libdir} install -d ${D}${includedir} install -m 0755 ${S}/*.h ${D}${includedir} } If the precompiled binaries are not statically linked and have dependencies on other libraries, then by adding those libraries to DEPENDS, the linking can be examined and the appropriate RDEPENDS automatically added. 3.13.3 Non-Versioned Libraries¶ 3.13.3.1 Some Background¶ Libraries in Linux systems are generally versioned so that it is possible to have multiple versions of the same library installed, which eases upgrades and support for older software. For example, suppose that in a versioned library, an actual library is called libfoo.so.1.2, a symbolic link named libfoo.so.1 points to libfoo.so.1.2, and a symbolic link named libfoo.so points to libfoo.so.1.2. Given these conditions, when you link a binary against a library, you typically provide the unversioned file name (i.e. -lfoo to the linker). However, the linker follows the symbolic link and actually links against the versioned filename. The unversioned symbolic link is only used at development time. Consequently, the library is packaged along with the headers in the development package ${PN}-dev along with the actual library and versioned symbolic links in ${PN}. Because versioned libraries are far more common than unversioned libraries, the default packaging rules assume versioned libraries. 3.13.3.2 Yocto Library Packaging Overview¶ It follows that packaging an unversioned library requires a bit of work in the recipe. By default, libfoo.so gets packaged into ${PN}-dev, which triggers a QA warning that a non-symlink library is in a -dev package, and binaries in the same recipe link to the library in ${PN}-dev, which triggers more QA warnings. To solve this problem, you need to package the unversioned library into ${PN} where it belongs. The following are the abridged default FILES variables in bitbake.conf: SOLIBS = ".so.*" SOLIBSDEV = ".so" FILES_${PN} = "... ${libdir}/lib*${SOLIBS} ..." FILES_SOLIBSDEV ?= "... ${libdir}/lib*${SOLIBSDEV} ..." FILES_${PN}-dev = "... ${FILES_SOLIBSDEV} ..." SOLIBS defines a pattern that matches real shared object libraries. SOLIBSDEV matches the development form (unversioned symlink). These two variables are then used in FILES:${PN} and FILES:${PN}-dev, which puts the real libraries into ${PN} and the unversioned symbolic link into ${PN}-dev. To package unversioned libraries, you need to modify the variables in the recipe as follows: SOLIBS = ".so" FILES_SOLIBSDEV = "" The modifications cause the .so file to be the real library and unset FILES_SOLIBSDEV so that no libraries get packaged into ${PN}-dev. 
The changes are required because unless PACKAGES is changed, ${PN}-dev collects files before ${PN}. ${PN}-dev must not collect any of the files you want in ${PN}. Finally, loadable modules, essentially unversioned libraries that are linked at runtime using dlopen() instead of at build time, should generally be installed in a private directory. However, if they are installed in ${libdir}, then the modules can be treated as unversioned libraries. 3.13.3.3 Example¶ The example below installs an unversioned x86-64 pre-built library named libfoo.so. The COMPATIBLE_HOST variable limits recipes to the x86-64 architecture while the INSANE_SKIP, INHIBIT_PACKAGE_STRIP and INHIBIT_SYSROOT_STRIP variables are all set as in the above versioned library example. The “magic” is setting the SOLIBS and FILES_SOLIBSDEV variables as explained above: SUMMARY = "libfoo sample recipe" SECTION = "libs" LICENSE = "CLOSED" SRC_URI = "" COMPATIBLE_HOST = "x86_64.*-linux" INSANE_SKIP:${PN} = "ldflags" INHIBIT_PACKAGE_STRIP = "1" INHIBIT_SYSROOT_STRIP = "1" SOLIBS = ".so" FILES_SOLIBSDEV = "" do_install () { install -d ${D}${libdir} install -m 0755 ${WORKDIR}/libfoo.so ${D}${libdir} } 3.14 Using x32 psABI¶ x32 processor-specific Application Binary Interface (x32 psABI) is a native 32-bit processor-specific ABI for Intel 64 (x86-64) architectures.. The Yocto Project supports the final specifications of x32 psABI as follows: You can create packages and images in x32 psABI format on x86_64 architecture targets. You can successfully build recipes with the x32 toolchain. You can create and boot core-image-minimaland core-image-satoimages. There is RPM Package Manager (RPM) support for x32 binaries. There is support for large images. To use the x32 psABI, you need to edit your conf/local.conf configuration file as follows: MACHINE = "qemux86-64" DEFAULTTUNE = "x86-64-x32" baselib = "${@d.getVar('BASE_LIB:tune-' + (d.getVar('DEFAULTTUNE') \ or 'INVALID')) or 'lib'}" Once you have set up your configuration file, use BitBake to build an image that supports the x32 psABI. Here is an example: $ bitbake core-image-sato 3.15 Enabling GObject Introspection Support¶ “Known Issues” section. 3.15.1 Enabling the Generation of Introspection Data¶. In either of these conditions, nothing will happen. Try to build the recipe. If you encounter build errors that look like something is unable to find .solibraries, check where these libraries are located in the source tree and add the following to the recipe: GIR_EXTRA_LIBS_PATH = "${B}/something/.libs" Note See recipes in the oe-corerepository that use that GIR_EXTRA_LIBS_PATH variable. Note Using a library that no longer builds against the latest Yocto Project release and prints introspection related errors is a good candidate for the previous procedure. 3.15.2 Disabling the Generation of Introspection Data¶. Note Future releases of the Yocto Project might have other features affected by this option. If you disable introspection data, you can still obtain it through other means such as copying the data from a suitable sysroot, or by generating it on the target hardware. The OpenEmbedded build system does not currently provide specific support for these techniques. 3.15.3 Testing that Introspection Works in an Image¶ see: 3.15.4 Known Issues¶ Here are know issues in GObject Introspection Support: qemu-ppc64immediately. 3.16 Optionally Using an External Toolchain¶.conffile through the BBLAYERS variable. 
Set the EXTERNAL_TOOLCHAINvariable in your local.conffile to the location in which you installed the toolchain. A good example of an external toolchain used with the Yocto Project is Mentor Graphics Sourcery G++ Toolchain. You can see information on how to use that particular layer in the README file at You can find further information by reading about the TCMODE variable in the Yocto Project Reference Manual’s variable glossary. 3.17 Creating Partitioned Images Using Wic¶ kickstart files as shown with the wic list images command in the “Generate an Image using an Existing Kickstart File” section. When you apply the command to a given set of build artifacts, the result is an image or set of images that can be directly written onto media and used on a particular system. Note For a kickstart file reference, see the “OpenEmbedded Kickstart ( plugin interface. See the “Using the Wic Plugin Interface” section for information on these plugins. This section provides some background information on Wic, describes what you need to have in place to run the tool, provides instruction on how to use the Wic utility, provides information on using the Wic plugins interface, and provides several examples that show how to use Wic. 3.17.1 Background¶ an existing functionality in OE-Core’s image-live class. The difference between Wic and those examples is that with Wic the functionality of those scripts is implemented by a general-purpose partitioning language, which is based on Redhat kickstart syntax. 3.17.2 Requirements¶ 3.17.3 Getting Help¶ You can get general help for the wic command by entering the wic command by itself or by entering the command with a help argument as follows: $ wic -h $ wic --help $ wic help Currently, Wic supports seven commands: cp, create, list, ls, rm, and write. You can get help for all these commands except “help” by using the following form: $ wic help command For example, the following command returns help for the write command: $ wic help write Wic supports help for three topics: overview, plugins, and kickstart. You can get help for any topic using the following form: $ wic help topic For example, the following returns overview help for Wic: $ wic help overview There is one additional level of help for Wic. You can get help on individual images through the list command. You can use the list command to return the available Wic images as follows: $ wic list images genericx86 Create an EFI disk image for genericx86* edgerouter Create SD card image for Edgerouter beaglebone-yocto Create SD card image for Beaglebone qemux86-directdisk Create a qemu machine 'pcbios' direct disk image systemd-bootdisk Create an EFI disk image with systemd-boot mkhybridiso Create a hybrid ISO image mkefidisk Create an EFI disk image sdimage-bootpart Create SD card image with a boot partition directdisk-multi-rootfs Create multi rootfs image using rootfs plugin directdisk Create a 'pcbios' direct disk image directdisk-bootloader-config Create a 'pcbios' direct disk image with custom bootloader config qemuriscv Create qcow2 image for RISC-V QEMU machines directdisk-gpt Create a 'pcbios' direct disk image efi-bootdisk Once you know the list of available Wic images, you can use help with the command to get help on a particular image. For example, the following command returns help on the “beaglebone-yocto” image: $ wic list beaglebone-yocto help Creates a partitioned SD card image for Beaglebone. Boot files are located in the first vfat partition. 3.17.4 Operational Modes¶. 
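As a rough illustration of the two modes described in the following subsections, the same kickstart file can be processed either by pointing Wic at a previously built image (cooked mode) or by passing every build artifact location explicitly (raw mode). The image name and paths below are placeholders, not output from a real build:

$ wic create mkefidisk -e core-image-minimal
$ wic create mkefidisk --rootfs-dir /path/to/rootfs \
      --bootimg-dir /path/to/bootimg-dir \
      --kernel-dir /path/to/kernel-dir \
      --native-sysroot /path/to/native-sysroot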
3.17.4.1 Raw Mode¶ name of directory to create image in -e IMAGE_NAME, --image-name IMAGE_NAME name of the image to use the artifacts from e.g. core- image-sato -r ROOTFS_DIR, --rootfs-dir ROOTFS_DIR path to the /rootfs dir to use as the .wks rootfs source -b BOOTIMG_DIR, --bootimg-dir BOOTIMG_DIR path to the dir containing the boot artifacts (e.g. /EFI or /syslinux dirs) to use as the .wks bootimg source -k KERNEL_DIR, --kernel-dir KERNEL_DIR path to the dir containing the kernel to use in the .wks bootimg -n NATIVE_SYSROOT, --native-sysroot NATIVE_SYSROOT path_DIR directory with <image>.env files that store bitbake variables -D, --debug output debug information Note You do not need root privileges to run Wic. In fact, you should not run as root when using the utility. 3.17.4.2 Cooked Mode¶ Running Wic in cooked mode leverages off artifacts in_NAME Where: wks_file: An OpenEmbedded kickstart file. You can provide your own custom file or use a file from a set of existing files provided with the Yocto Project release. required argument: -e IMAGE_NAME, --image-name IMAGE_NAME name of the image to use the artifacts from e.g. core- image-sato 3.17.5 Using an Existing Kickstart File¶ genericx86 Create an EFI disk image for genericx86* beaglebone-yocto Create SD card image for Beaglebone" 3.17.6 Using the Wic Plugin Interface¶ You can extend and specialize Wic functionality by using Wic plugins. This section explains the Wic plugin interface. Note Wic plugins consist of “source” and “imager” plugins. Imager plugins are beyond the scope of this section. Source plugins provide a mechanism to customize partition content during the Wic image generation process. You can use source plugins to map values that you specify using --source commands in kickstart files (i.e. *.wks) to a plugin implementation used to populate a given partition. Note If you use plugins that have build-time dependencies (e.g. native tools, bootloaders, and so forth) when building a Wic image, you need to specify those dependencies using the WKS_FILE_DEPENDS variable. Source plugins are subclasses defined in plugin files. As shipped, the Yocto Project provides several plugin files. You can see the source plugin files that ship with the Yocto Project here. Each of these plugin files contains source plugins that are designed to populate a specific Wic image partition. Source plugins are subclasses of the SourcePlugin class, which is defined in the poky/scripts/lib/wic/pluginbase.py file. For example, the BootimgEFIPlugin source plugin found in the bootimg-efi.py file is a subclass of the SourcePlugin class, which is found in the pluginbase.py file. You can also implement source plugins in a layer outside of the Source Repositories (external layer). To do so, be sure that your plugin files are located in a directory whose path is scripts/lib/wic/plugins/source/ within your external layer. When the plugin files are located there, the source plugins they contain are made available to Wic. When the Wic implementation needs to invoke a partition-specific implementation, it looks for the plugin with the same name as the --source parameter used in the kickstart file given to that partition. For example, if the partition is set up using the following command in a kickstart file: part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024 The methods defined as class members of the matching source plugin (i.e. bootimg-pcbios) in the bootimg-pcbios.py plugin file are used. 
To be more concrete, here is the corresponding plugin definition from the bootimg-pcbios.py file for the previous command along with an example method called by the Wic implementation when it needs to prepare a partition using an implementation-specific function: . . . class BootimgPcbiosPlugin(SourcePlugin): """ Create MBR boot partition and install syslinux on it. """ name = 'bootimg-pcbios' . . . @classmethod def do_prepare_partition(cls, part, source_params, creator, cr_workdir, oe_builddir, bootimg_dir, kernel_dir, rootfs_dir, native_sysroot): """ Called to do the actual content population for a partition i.e. it 'prepares' the partition to be incorporated into the image. In this case, prepare content for legacy bios boot partition. """ . . . If a subclass (plugin) itself does not implement a particular function, Wic locates and uses the default version in the superclass. It is for this reason that all source plugins are derived from the SourcePlugin class. The SourcePlugin class defined in the pluginbase.py file defines a set of methods that source plugins can implement or override. Any plugins (subclass of SourcePlugin) that do not implement a particular method inherit the implementation of the method from the SourcePlugin class. For more information, see the SourcePlugin class in the pluginbase.py file for details: The following list describes the methods implemented in the SourcePlugin class: do_prepare_partition(): Called to populate a partition with actual content. In other words, the method prepares the final partition image that is incorporated into the disk image. do_configure_partition(): Called before do_prepare_partition(. Note get_bitbake_var()allows you to access non-standard variables that you might want to use for this behavior. You can extend the source plugin mechanism. To add more hooks, create more source plugin methods within SourcePlugin and the corresponding derived subclasses. The code that calls the plugin methods uses the plugin.get_source_plugin_methods() function to find the method or methods needed by the call. Retrieval of those methods is accomplished by filling up a dict with keys that contain the method names of interest. On success, these will be filled in with the actual methods. See the Wic implementation for examples and details. 3.17.7 Wic Examples¶ This section provides several examples that show how to use the Wic utility. All the examples assume the list of requirements in the “Requirements” section have been met. The examples assume the previously generated image is core-image-minimal. 3.17.7.1 Generate an Image using an Existing Kickstart File¶ This example runs in Cooked Mode and uses the mkefidisk kickstart file: $ wic create mkefidisk -e core-image-minimal INFO: Building wic-tools... . . . INFO: The new image(s) can be found here: ./mkefidisk-201804191017-sda/openembedded-core/scripts/lib/wic/canned-wks. Note You should always verify the details provided in the output to make sure that the image was indeed created exactly as expected. Continuing with the example, you can now write the image from the Build Directory onto a USB stick, or whatever media for which you built your image, and boot from the media. You can write the image by using bmaptool or dd: $ oe-run-native bmaptool copy mkefidisk-201804191017-sda.direct /dev/sdX or $ sudo dd if=mkefidisk-201804191017-sda.direct of=/dev/sdX Note For more information on how to use the bmaptool to flash a device with an image, see the “Flashing Images Using bmaptool” section. 
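Before writing the image to media, you can also sanity-check its partition layout with the wic ls command, which is covered in more detail in the "Using Wic to Manipulate an Image" section; the file name below is the one from the example output above:

$ wic ls ./mkefidisk-201804191017-sda.direct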
3.17.7.2 Using a Modified Kickstart File¶/stephano/yocto/poky/scripts/lib/wic/canned-wks/directdisk-gpt.wks \ /home/stephano/yocto/poky/scripts/lib/wic/canned-wks 3.17.7.3 Using a Modified Kickstart File and Running in Raw Mode¶ This next example manually specifies each build artifact (runs in Raw Mode) and uses a modified kickstart file. The example also uses the -o option to cause Wic to create the output somewhere other than the default output directory, which is the current directory: $ wic create test.wks -o /home/stephano/testwic \ --rootfs-dir /home/stephano/yocto/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/rootfs \ --bootimg-dir /home/stephano/yocto/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share \ --kernel-dir /home/stephano/yocto/build/tmp/deploy/images/qemux86 \ --native-sysroot /home/stephano/yocto/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native INFO: Creating image(s)... INFO: The new image(s) can be found here: /home/stephano/testwic/test-201710091445: test.wks For this example, MACHINE did not have to be specified in the local.conf file since the artifact is manually specified. 3.17.7.4 Using Wic to Manipulate an Image¶. Note In order to use Wic to manipulate a Wic image as in this example, your development machine must have the mtools package installed. The following example examines the contents of the Wic image, deletes the existing kernel, and then inserts a new kernel: List the Partitions: Use the wic lscommandimage. Examine a Particular Partition: Use the wic lscommand again but in a different form to examine a particular partition. Note You can get command usage on any Wic command using the following form: $ wic help command For example, the following command shows you the various ways to use the wic ls command: $being the kernel. Note If you see the following error, you need to update or create a ~/command to remove the vmlinuzfile (kernel): $ wic rm tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz Add In the New Kernel: Use the wic cpcommand to add the updated kernel to the Wic image. Depending on how you built your kernel, it could be in different places. If you used devtooland an SDK to build your kernel, it resides in the tmp/workdirectory of the extensible SDK. If you used maketo build the kernel, the kernel will be in the workspace/sourcesarea. The following example assumes devtoolwas used to build the kernel: $ wiccommand or bmaptool to flash your wic image onto an SD card or USB stick and test your target. Note Using bmaptoolis generally 10 to 20 times faster than using dd. 3.18 Flashing Images Using bmaptool¶ A fast and easy way to flash an image to a bootable device is to use Bmaptool, which is integrated into the OpenEmbedded build system. Bmaptool is a generic tool that creates a file’s block map (bmap) and then uses that map to copy the file. As compared to traditional tools such as dd or cp, Bmaptool can copy (or flash) large files like raw system image files much faster. Note If you are using Ubuntu or Debian distributions, you can install the bmap-toolspackage using the following command and then use the tool without specifying PATHeven from the root account: $ sudo apt install bmap-tools If you are unable to install the bmap-toolspackage, you will need to build Bmaptool before using it. Use the following command: $ bitbake bmap-tools-native Following, is an example that shows how to flash a Wic image. 
Realize that while this example uses a Wic image, you can use Bmaptool to flash any type of image. Use these steps to flash an image using Bmaptool: Update your local.conf File: You need to have the following set in your local.conffile before building your image: IMAGE_FSTYPES += "wic wic.bmap" Get Your Image: Either have your image ready (pre-built with the IMAGE_FSTYPES setting previously mentioned) or take the step to build the image: $ bitbake image Flash the Device: Flash the device with the image by using Bmaptool depending on your particular setup. The following commands assume the image resides in the Build Directory’s deploy/images/area: If you have write access to the media, use this command form: $ oe-run-native bmap-tools-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX If you do not have write access to the media, set your permissions first and then use the same command form: $ sudo chmod 666 /dev/sdX $ oe-run-native bmap-tools-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX For help on the bmaptool command, use the following command: $ bmaptool --help 3.19 Making Images More Secure¶. Note Because the security requirements and risks are different for every type of device, this section cannot provide a complete reference on securing your custom OS. It is strongly recommended that you also consult other sources of information on embedded Linux system hardening and on security. 3.19.1 General Considerations¶ There are general considerations that help you create more secure images. You should consider the following suggestions to. 3.19.2 Security Flags¶ The Yocto Project has security flags that you can enable that help make your build output more secure. The security flags are in the meta/conf/distro/include/security_flags.inc file in your Source Directory (e.g. poky). Note Depending on the recipe, certain security flags are enabled and disabled by default. Use the following line in your local.conf file or in your custom distribution configuration file to enable the security compiler and linker flags for your build: require conf/distro/include/security_flags.inc 3.19.3 Considerations Specific to the OpenEmbedded Build System¶ You can take some steps that are specific to the OpenEmbedded build system to make your images more secure: Ensure “debug-tweaks” is not one of your selected IMAGE_FEATURES. When creating a new project, the default is to provide you with an initial local.conffile that enables this feature using the EXTRA_IMAGE_FEATURES variable with the line: EXTRA_IMAGE_FEATURES = "debug-tweaks" To disable that feature, simply comment out that line in your local.conffile,. Note When adding extra user accounts or setting a root password, be cautious about setting the same password on every device. If you do this, and the password you have set is exposed, then every device is now potentially compromised. If you need this access but want to ensure security, consider setting a different, random password for each device. Typically, you do this as a separate step after you deploy the image onto the device. Consider enabling a Mandatory Access Control (MAC) framework such as SMACK or SELinux and tuning it appropriately for your device’s usage. You can find more information in the meta-selinux layer. 3.19.4 Tools for Hardening Your Image¶ The Yocto Project provides tools for making your image more secure. You can find these tools in the meta-security layer of the Yocto Project Source Repositories. 
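Pulling these suggestions together, a hardened local.conf sketch might look like the following; it simply enables the security compiler and linker flags and drops the debug-tweaks feature discussed above:

require conf/distro/include/security_flags.inc
# EXTRA_IMAGE_FEATURES = "debug-tweaks"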
3.20 Creating Your Own Distribution¶configuration file makes it easier to reproduce the same build configuration when using multiple build machines. See the “Creating a General Layer Using the bitbake-layers Script” section for information on how to quickly set up a layer. Create the distribution configuration file: The distribution configuration file needs to be created in the conf/distrodirectory/includedirectory of your layer. A common example usage of include files would be to separate out the selection of desired version and revisions for individual recipes. Your configuration file needs to set the following required variables: These following variables are optional and you typically set them from the distribution configuration file: Tip If you want to base your distribution configuration file on the very basic configuration from OE-Core, you can use conf/distro/defaultsetup.confas a reference and just include variables that differ as compared to defaultsetup.conf. Alternatively, you can create a distribution configuration file from scratch using the defaultsetup.conffile or configuration files from another distribution such as Poky as a reference. Provide miscellaneous variables: Be sure to define any other variables for which you want to create a default or enforce as part of the distribution configuration. You can include nearly any variable from the local.conffile. The variables you use are not limited to the list in the previous bulleted item. Point to Your distribution configuration file: In your local.conffile “Following Best Practices When Creating Layers” sections. Add any image recipes that are specific to your distribution. Add a psplashappend file for a branded splash screen. For information on append files, see the “Appending Other Layers Metadata With Your Layer” section. Add any other append files to make custom changes that are specific to individual recipes. 3.21 Creating a Custom Template Configuration Directory¶ conf directory. By default, TEMPLATECONF is set as follows in the poky repository: TEMPLATECONF=${TEMPLATECONF:-meta-poky-poky/conf directory. The script that sets up the build environment (i.e. oe-init-build-env) uses meta-ide-support Changing the listed common targets is as easy as editing your version of conf-notes.txt in your custom template configuration directory and making sure you have TEMPLATECONF set to your directory. 3.22 Conserving Disk Space¶ 3.22.1 Conserving Disk Space During Builds¶ To help conserve disk space during builds, you can add the following statement to your project’s local.conf configuration file found in the Build Directory: INHERIT += "rm_work" Adding this statement deletes the work directory used for building a recipe once the recipe is built. For more information on “rm_work”, see the rm_work class in the Yocto Project Reference Manual. 3.23 Working with Packages¶ This section describes a few tasks that involve packages: Excluding Packages from an Image Incrementing a Package Version Handling Optional Module Packaging Using Runtime Package Management Generating and Using Signed Packages Setting up and running package test (ptest) Creating Node Package Manager (NPM) Packages Adding custom metadata to packages 3.23.1 Excluding Packages from an Image¶, not for Debian packages.. 3.23.2 Incrementing a Package Version¶. Note Technically, a third component, the “epoch” (i.e. there is a number of version components that support that linear progression. 
For information on how to ensure package revisioning remains linear, see the “Automatically Incrementing a Package Version Number” section. The following three sections provide related information on the PR Service, the manual method for “bumping” PR and/or PV, and on how to ensure binary package revisioning remains linear. 3.23.2.1 Working With a PR Service¶ As mentioned, attempting to maintain revision numbers in the Metadata is error prone, inaccurate, and causes problems for people submitting recipes. Conversely, the PR Service automatically generates increasing numbers, particularly the revision field, which removes the human element. Note For additional information on using a PR Service, you can see the PR Service wiki page. Note The OpenEmbedded build system does not maintain PR information as part of the shared state (sstate) packages. If you maintain an sstate feed, see the Yocto Project Overview and Concepts Manual. 3.23.2.2 Manually Bumping PR¶. 3.23.2.3 Automatically Incrementing a Package Version Number¶. 3.23.3 Handling Optional Module Packaging¶. 3.23.3.1 Making Sure the Packaging is Done¶ Here is an example from the lighttpd recipe:

python populate_packages:prepend () {
    lighttpd_libdir = d.expand('${libdir}')
    do_split_packages(d, lighttpd_libdir, '^mod_(.*).so$', 'lighttpd-module-%s', 'Lighttpd module for %s', extra_depends='')
}

The previous example specifies a number of things in the call to do_split_packages: a directory within the files installed by your recipe through do_install in which to search, a regular expression used to match module files in that directory, a pattern to use for the package names, a description for each package, and an empty extra_depends, which disables the default dependency on the main lighttpd package. Thus, if a file in ${libdir} called mod_alias.so is found, a package called lighttpd-module-alias is created for it and the DESCRIPTION is set to “Lighttpd module for alias”. Often, packaging modules is as simple as the previous example. However, there are more advanced options, such as a hook function that is called for each matched filename. 3.23.3.2 Satisfying Dependencies¶ The second part of handling optional module packaging is to ensure that any dependencies on optional modules from other recipes are satisfied by your recipe. You can be sure these dependencies are satisfied by using the PACKAGES_DYNAMIC variable. Here is an example that continues with the lighttpd recipe shown earlier: PACKAGES_DYNAMIC = "lighttpd-module-.*" The name specified in the regular expression can of course be anything. In this example, it is lighttpd-module. 3.23.4 Using Runtime Package Management¶ During a build, BitBake always transforms a recipe into one or more packages. For example, BitBake takes the bash recipe and produces a number of packages (e.g. bash, bash-bashbug, bash-completion, bash-completion-dbg, bash-completion-dev, bash-completion-extra, bash-dbg, and so forth). In fact, doing so is advantageous for a production environment as getting the packages away from the development system’s build directory prevents accidental overwrites. For example, a qemux86 device produces the following three package databases: noarch, i586, and qemux86. If you wanted your qemux86 device to be aware of all the packages available to it, you would need to point it to each of these databases individually. 3.23.4.1 Build Considerations¶ This section describes build considerations of which you need to be aware in order to provide support for runtime package management. When BitBake generates packages, it needs to know what format or formats to use. In your configuration, you use the PACKAGE_CLASSES variable to specify the format: Open the local.conf file inside your Build Directory (e.g. poky/build/conf/local.conf). Select the desired package format as follows: PACKAGE_CLASSES ?= "package_packageformat" where packageformat can be “ipk”, “rpm”, “deb”, or “tar”, which are the supported package formats. Note Because the Yocto Project supports four different package formats, you can set the variable with more than one argument.
However, the OpenEmbedded build system only uses the first argument when creating an image or Software Development Kit (SDK). If you would like your image to start off with a basic package database containing the packages in your current build as well as existing packages, it is always a good idea to re-generate the package index after the build by using the following command: $ bitbake package-index It might be tempting to build the package and the package index at the same time with a command such as the following: $ bitbake some-package package-index Do not do this as BitBake does not schedule the package index for after the completion of the package you are building. Consequently, you cannot be sure of the package index including information for the package you just built. Thus, be sure to run the package update step separately after building any packages. When the build completes, your packages reside in the ${TMPDIR}/deploy/packageformat directory. For example, if ${TMPDIR} is tmp and your selected package type is RPM, then your RPM packages are available in tmp/deploy/rpm. 3.23.4.2 Host or Server Machine Setup¶ Although other protocols are possible, a server using HTTP typically serves packages. If you want to use HTTP, then set up and configure a web server such as Apache 2, Lighttpd, or a Python web server on the machine serving the packages. To keep things simple, this section describes how to set up a Python web server to share package feeds from the developer’s machine. Although this server might not be the best for a production environment, the setup is simple and straightforward. Should you want to use a different server more suited for production (e.g. Apache 2 or Lighttpd), take the appropriate steps to do so. Otherwise, change into the directory containing your package feed and start the Python web server: $ python3 -m http.server 3.23.4.3 Target Setup¶ Setting up the target differs depending on the package management system. This section provides information for RPM, IPK, and DEB. 3.23.4.3.1 Using RPM¶ Once you have used these variables as part of the build and your image is running on the target, you need to perform the steps in this section if you want to use runtime package management. Note For information on the PACKAGE_FEED_* variables, see PACKAGE_FEED_ARCHS, PACKAGE_FEED_BASE_PATHS, and PACKAGE_FEED_URIS. Note For development purposes, you can point the web server to the build system’s deploy directory. However, for production use, it is better to copy the package directories to a location outside of the build area and use that location. Doing so avoids situations where the build system overwrites or changes the deploy directory. Note See the DNF documentation for additional information. 3.23.4.3.2 Using IPK¶. 3.23.4.3.3 Using DEB¶ Once the package repository is configured on the target, run: $ sudo apt update After this step, apt is able to find, install, and upgrade packages from the specified repository. 3.23.5 Generating and Using Signed Packages¶ In order to add security to RPM packages used during a build, you can take steps to securely sign them. Once a signature is verified, the OpenEmbedded build system can use the package in the build. If security fails for a signed package, the build system stops the build. This section describes how to sign RPM packages during a build and how to use signed package feeds (repositories) when doing a build. 3.23.5.1 Signing RPM Packages¶ Note Be sure to supply appropriate values for both key_name and passphrase. Aside from the RPM_GPG_NAME and RPM_GPG_PASSPHRASE variables in the previous example, two optional variables related to signing are available: GPG_BIN: Specifies a gpg binary/wrapper that is executed when the package is signed. GPG_PATH: Specifies the gpg home directory used when the package is signed.
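For reference, a minimal local.conf sketch that enables RPM package signing with these variables might look like the following; this assumes the sign_rpm class, and key_name and passphrase are placeholders for your own values:

INHERIT += "sign_rpm"
RPM_GPG_NAME = "key_name"
RPM_GPG_PASSPHRASE = "passphrase"

GPG_BIN and GPG_PATH can be added to the same file if you need a non-default gpg binary or home directory.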
3.23.5.2 Processing Package Feeds¶ The following optional variables related to signed package feeds are available: GPG_BIN: Specifies a gpg binary/wrapper that is executed when the package is signed. GPG_PATH: Specifies the gpg home directory used when the package is signed. PACKAGE_FEED_GPG_SIGNATURE_TYPE: Specifies the type of gpg signature. This variable applies only to RPM and IPK package feeds. Allowable values for PACKAGE_FEED_GPG_SIGNATURE_TYPE are “ASC”, which is the default and specifies ASCII armored, and “BIN”, which specifies binary. 3.23.6 Testing Packages With ptest¶. For a list of Yocto Project recipes that are already enabled with ptest, see the Ptest wiki page. 3.23.6.1 Adding ptest to Your Build¶. 3.23.6.2 Running ptest¶ The ptest-runner package installs a shell script that loops through all installed ptest test suites and runs them in sequence. Consequently, you might want to add this package to your image. 3.23.6.3 Getting Your Package Ready¶ A host make check builds and runs on the same computer, while cross-compiling requires that the package is built on the host but executed for the target architecture (though often, as in the case for ptest, the execution occurs on the host). The built version of Automake that ships with the Yocto Project includes a patch that separates building and execution. Consequently, packages that use the unaltered, patched version of make check automatically cross-compile. Regardless, you still must add a do_compile_ptest function to the recipe. Install the test suite: The ptest class automatically copies the file run-ptest to the target and then runs make install-ptest to run the tests. If this is not enough, you need to create a do_install_ptest function and make sure it gets called after the “make install-ptest” completes. 3.23.7 Creating Node Package Manager (NPM) Packages¶ NPM is a package manager for the JavaScript programming language. The Yocto Project supports the NPM fetcher. You can use this fetcher in combination with devtool to create recipes that produce NPM packages. There are two workflows that allow you to create NPM packages using devtool: the NPM registry modules method and the NPM project code method. Note While it is possible to create NPM recipes manually, using devtool is far simpler. Additionally, some requirements and caveats exist. 3.23.7.1 Requirements and Caveats¶ You need to be aware of the following before using devtool to create NPM packages: Both of the methods that you can use with devtool to create NPM packages need the nodejs-npm package, which is part of the OpenEmbedded environment. You need to get the package by cloning the repository out of GitHub. Be sure to add the path to your local copy to your bblayers.conf file. devtool cannot detect native libraries in module dependencies. Consequently, you must manually add packages to your recipe. While deploying NPM packages, devtool cannot. 3.23.7.2 Using the Registry Modules Method¶ This section presents an example that uses the cute-files module, which is a file browser web application. Note You must know the cute-files module version. Note A package is created for each sub-module. This policy is the only practical way to have the licenses for all of the dependencies represented in the license manifest of the image. Here are three key points in the previous example: SRC_URI uses the NPM scheme so that the NPM fetcher is used. recipetool collects all the license information. If a sub-module’s license is unavailable, the sub-module’s name appears in the comments. The inherit npm statement: Note Because of a known issue, you cannot simply run cute-files.
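For reference, a typical devtool session for the registry method looks roughly like the following sketch; the module version and the target address are illustrative only:

$ devtool add "npm://registry.npmjs.org;package=cute-files;version=1.0.2"
$ devtool edit-recipe cute-files
$ devtool build cute-files
$ devtool deploy-target cute-files root@192.168.7.2

The devtool add step is what generates the recipe whose SRC_URI, license comments, and inherit npm statement are discussed above.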
3.23.7.3 Using the NPM Projects Code Method¶= \ npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \ " In this example, the main module is taken from the Git repository and dependencies are taken from the NPM registry. Other than those differences, the recipe is basically the same between the two methods. You can build and deploy the package exactly as described in the previous section that uses the registry modules method. 3.23.8 Adding custom metadata to packages¶ The variable PACKAGE_ADD_METADATA can be used to add additional metadata to packages. This is reflected in the package control/spec file. To take the ipk format for example, the CONTROL file stored inside would contain the additional metadata as additional lines. The variable can be used in multiple ways, including using suffixes to set it for a specific package type and/or package. Note that the order of precedence is the same as this list: PACKAGE_ADD_METADATA_<PKGTYPE>:<PN> PACKAGE_ADD_METADATA_<PKGTYPE> PACKAGE_ADD_METADATA:<PN> - <PKGTYPE> is a parameter and expected to be a distinct name of specific package type: IPK for .ipk packages DEB for .deb packages RPM for .rpm packages <PN> is a parameter and expected to be a package name. The variable can contain multiple [one-line] metadata fields separated by the literal sequence ‘\n’. The separator can be redefined using the variable flag separator. Here is an example that adds two custom fields for ipk packages: PACKAGE_ADD_METADATA_IPK = "Vendor: CustomIpk\nGroup:Applications/Spreadsheets" 3.24 Efficiently Fetching Source Files During a Build¶ shows you how you can use mirrors to speed up fetching source files and how you can pre-fetch files all of which leads to more efficient use of resources and time. 3.24.1 Setting up Effective Mirrors¶ A good deal that goes into a Yocto Project build is simply downloading all of the source tarballs. Maybe you have been working with another build system. 3.24.2 Getting Source Files and Suppressing the Build¶ target --runall=fetch This variation of the BitBake command guarantees that you have all the sources for that BitBake target should you disconnect from the Internet and want to do the build later offline. 3.25 Selecting an Initialization Manager¶ By default, the Yocto Project uses SysVinit as the initialization manager. However, there is also support”. Note Each runlevel has a dependency on the previous runlevel. This dependency allows the services to work properly. init. 3.25.1 Using systemd Exclusively¶ Set. 3.25.2 Using systemd for the Main Image and Using SysVinit for the Rescue Image¶. 3.25.3 Using systemd-journald without a traditional syslog daemon¶ Counter-intuitively, systemd-journald is not a syslog runtime or provider, and the proper way to use systemd-journald as your sole logging mechanism is to effectively disable syslog entirely by setting these variables in your distribution configuration file: VIRTUAL-RUNTIME_syslog = "" VIRTUAL-RUNTIME_base-utils-syslog = "" Doing so will prevent rsyslog / busybox-syslog from being pulled in by default, leaving only journald. 3.26 Selecting a Device Manager¶ The Yocto Project provides multiple ways to manage the device manager ( /dev): Persistent and Pre-Populated /dev: For this case, the /devdirectory is persistent and the required device nodes are created during the build. Use devtmpfswith a Device Manager: For this case, the /devdirectory is provided by the kernel as an in-memory file system and is automatically populated by the kernel at runtime. 
Additional configuration of device nodes is done in user space by a device manager like udevor busybox-mdev. 3.26.1 Using Persistent and Pre-Populated /dev¶: 3.26.2 Using devtmpfs and a Device Manager¶" 3.27 Using an External SCM¶ ?= "${AUTOREV}" . . . These lines allow you to experiment with building a distribution that tracks the latest development source for numerous packages. Note The poky-bleeding distribution is not tested on a regular basis. Keep this in mind if you use it. 3.28 Creating a Read-Only Root Filesystem¶. Note Supporting a read-only root filesystem requires that the system and applications do not try to write to the root filesystem. You must configure all parts of the target system to write elsewhere, or to gracefully fail in the event of attempting to write to the root filesystem. 3.28.1 Creating the Root Filesystem¶ To create the read-only root filesystem, simply add the “read-only-rootfs” feature to your image, in: EXTRA_IMAGE_FEATURES = "read-only-rootfs" For more information on how to use these variables, see the “Customizing Images Using Custom IMAGE_FEATURES and EXTRA_IMAGE_FEATURES” section. For information on the variables, see IMAGE_FEATURES and EXTRA_IMAGE_FEATURES. 3.28.2 Post-Installation Scripts and Read-Only Root Filesystem¶ the first boot on the target device. With the “read-only-rootfs” feature enabled, the build system makes sure that all post-installation scripts succeed at file system creation time. $Dis, which run on the host system, to accomplish the same tasks, or by alternatively running the processes under QEMU, which has the qemu_run_binaryfunction. For more information, see the qemu class. 3.28.3 Areas With Write Access¶). 3.29 Maintaining Build Output Quality¶ helps 3.29.1 Enabling and Disabling Build History¶ Build history is disabled by default. To enable it, add the following. Note Enabling build history increases your build times slightly, particularly for images, and increases the amount of disk space used during the build. You can disable build history by removing the previous statements from your conf/local.conf file. 3.29.2 Understanding What the Build History Contains¶ Build history information is kept in }/buildhistory in the Build Directory as defined by the BUILDHISTORY_DIR variable. Here is an example abbreviated listing: ${TOPDIR At the top level, there is a metadata-revs file that lists the revisions of the repositories for the enabled layers when the build was produced. The rest of the data splits into separate packages, images and sdk directories, the contents of which are described as follows. 3.29.2.1 Build History Package Information¶- busybox-udhcpd busybox-udhcpc \ busybox-syslog busybox-mdev busybox-hwclock busybox-dbg \ busybox-staticdev busybox-dev busybox-doc busybox-locale busybox Finally, for those recipes fetched from a version control system (e.g., Git), there is a file that lists source revisions that are specified in the recipe and the actual revisions used during the build. Listed and actual revisions might differ when SRCREV is set to ${AUTOREV}. 
Here is an example assuming buildhistory/packages # all-poky-linux SRCREV:pn-ca-certificates = "07de54fdcc5806bde549e1edf60738c6bccf50e8" SRCREV:pn-update-rc.d = "8636cf478d426b568c1be11dbd9346f67e03adac" # core2-64-poky-linux SRCREV:pn-binutils = "87d4632d36323091e731eb07b8aa65f90293da66" SRCREV:pn-btrfs-tools = "8ad326b2f28c044cb6ed9016d7c3285e23b673c8" SRCREV_bzip2-tests:pn-bzip2 = "f9061c030a25de5b6829e1abf373057309c734c0" SRCREV:pn-e2fsprogs = "02540dedd3ddc52c6ae8aaa8a95ce75c3f8be1c0" SRCREV:pn-file = "504206e53a89fd6eed71aeaf878aa3512418eab1" SRCREV_glibc:pn-glibc = "24962427071fa532c3c48c918e9d64d719cc8a6c" SRCREV:pn-gnome-desktop-testing = "e346cd4ed2e2102c9b195b614f3c642d23f5f6e7" SRCREV:pn-init-system-helpers = "dbd9197569c0935029acd5c9b02b84c68fd937ee" SRCREV:pn-kmod = "b6ecfc916a17eab8f93be5b09f4e4f845aabd3d1" SRCREV:pn-libnsl2 = "82245c0c58add79a8e34ab0917358217a70e5100" SRCREV:pn-libseccomp = "57357d2741a3b3d3e8425889a6b79a130e0fa2f3" SRCREV:pn-libxcrypt = "50cf2b6dd4fdf04309445f2eec8de7051d953abf" SRCREV:pn-ncurses = "51d0fd9cc3edb975f04224f29f777f8f448e8ced" SRCREV:pn-procps = "19a508ea121c0c4ac6d0224575a036de745eaaf8" SRCREV:pn-psmisc = "5fab6b7ab385080f1db725d6803136ec1841a15f" SRCREV:pn-ptest-runner = "bcb82804daa8f725b6add259dcef2067e61a75aa" SRCREV:pn-shared-mime-info = "18e558fa1c8b90b86757ade09a4ba4d6a6cf8f70" SRCREV:pn-zstd = "e47e674cd09583ff0503f0f6defd6d23d8b718d3" # qemux86_64-poky-linux SRCREV_machine:pn-linux-yocto = "20301aeb1a64164b72bc72af58802b315e025c9c" SRCREV_meta:pn-linux-yocto = "2d38a472b21ae343707c8bd64ac68a9eaca066a0" # x86_64-linux SRCREV:pn-binutils-cross-x86_64 = "87d4632d36323091e731eb07b8aa65f90293da66" SRCREV_glibc:pn-cross-localedef-native = "24962427071fa532c3c48c918e9d64d719cc8a6c" SRCREV_localedef:pn-cross-localedef-native = "794da69788cbf9bf57b59a852f9f11307663fa87" SRCREV:pn-debianutils-native = "de14223e5bffe15e374a441302c528ffc1cbed57" SRCREV:pn-libmodulemd-native = "ee80309bc766d781a144e6879419b29f444d94eb" SRCREV:pn-virglrenderer-native = "363915595e05fb252e70d6514be2f0c0b5ca312b" SRCREV:pn-zstd-native = "e47e674cd09583ff0503f0f6defd6d23d8b718d3" Note Here are some notes on using the buildhistory-collect-srcrevs command: By default, only values where the SRCREV was not hardcoded (usually when AUTOREV is used) are reported. Use the -aoption to see all SRCREV values. The output statements might not have any effect if overrides are applied elsewhere in the build system configuration. Use the -foption to add the forcevariableoverride to each output line if you need to work around this restriction. The script does apply special handling when building for multiple machines. However, the script does place a comment before each set of values that specifies which triplet to which they belong as previously shown (e.g., i586-poky-linux). 3.29.2.2 Build History Image Information¶. *. Note Installed package information is able to be gathered and produced even if package management is disabled for the final image. 
Here is an example of image-info.txt: DISTRO = poky DISTRO_VERSION = 3.4+snapshot-a0245d7be08f3d24ea1875e9f8872aa6bbff93be USER_CLASSES = buildstats IMAGE_CLASSES = qemuboot qemuboot license_image IMAGE_FEATURES = debug-tweaks IMAGE_LINGUAS = IMAGE_INSTALL = packagegroup-core-boot speex speexdsp BAD_RECOMMENDATIONS = NO_RECOMMENDATIONS = PACKAGE_EXCLUDE = ROOTFS_POSTPROCESS_COMMAND = write_package_manifest; license_create_manifest; cve_check_write_rootfs_manifest; ssh_allow_empty_password; ssh_allow_root_login; postinst_enable_logging; rootfs_update_timestamp; write_image_test_data; empty_var_volatile; sort_passwd; rootfs_reproducible; IMAGE_POSTPROCESS_COMMAND = buildhistory_get_imageinfo ; IMAGESIZE = 9265. 3.29.2.3 Using Build History to Gather Image Information Only¶ As you can see, build history produces image information, including dependency graphs, so you can see why something was pulled into the image. If you are just interested in this information and not interested in collecting. 3.29.2.4 Build History SDK Information¶roottasks have a total size). The sstate-task-sizes.txtfile exists only when an extensible SDK is created. sstate-package-sizes.txt:A text file containing name-value pairs with information for the shared-state packages and sizes in the SDK. The sstate-package-sizes.txtfile exists only when an extensible SDK is created. sdk-files:A folder that contains copies of the files mentioned in BUILDHISTORY_SDK_FILESif the files are present in the output. Additionally, the default value of BUILDHISTORY_SDK_FILESdirectory. The following information appears under each of the hostand targetdirectories for the portions of the SDK that run on the host and on the target, respectively: Note The following files for the most part are empty when producing an extensible SDK because this type of SDK is not constructed from packages as is the standard SDK.-gl. 3.29.2.5 Examining Build History Information¶). There is a command-line tool called buildhistory-diff, though, that queries the Git repository and prints just the differences that might be significant in human-readable form. Here is an example: $ poky/poky/scripts/buildhistory-diff . HEAD^ Changes to images/qemux86_64/glibc/core-image-minimal (files-in-image.txt): /etc/anotherpkg.conf was added /sbin/anotherpkg was added * (installed-package-names.txt): * anotherpkg was added Changes to images/qemux86_64/gl" Note The buildhistory-diff tool requires the GitPython package. Be sure to install it using Pip3 as follows: $ pip3 install GitPython --user Alternatively, you can install python3-git using the appropriate distribution package manager (e.g. apt, dnf, or zipper). To see changes to the build history using a web interface, follow the instruction in the README file here. Here is a sample screenshot of the interface: 3.30 Performing Automated Runtime Testing¶. For information on the test and QA infrastructure available within the Yocto Project, see the “Testing and Quality Assurance” section in the Yocto Project Reference Manual. 3.30.1 Enabling Tests¶ Depending on whether you are planning to run tests using QEMU or on the hardware, you have to take different steps to enable the tests. See the following subsections for information on how to enable both types of tests. 
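In either case the configuration itself stays small; here is a minimal local.conf sketch for the QEMU case (the image name is only an example, and the prerequisites are covered in the next subsection):

INHERIT += "testimage"
# Optionally run the test suite automatically after every successful image build:
TESTIMAGE_AUTO = "1"

With this in place, the suite can also be run manually against an already-built image:

$ bitbake core-image-full-cmdline -c testimage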
3.30.1.1 Enabling Runtime Tests on QEMU¶ In order to run tests, you need to do the following: Set up to avoid interaction with sudo for networking: To accomplish this, you must do one of the following: Add NOPASSWDfor your user in /etc/sudoerseither for all commands or just for runqemu-ifup. You must provide the full path as that can change if you are using multiple clones of the source repository. Note On some distributions, you also need to comment out “Defaults requiretty” in /etc/sudoers. Manually configure a tap interface for your system. Run as root the script in scripts/runqemu-gen-tapdevs, which should generate a list of tap devices. This is the option typically chosen for Autobuilder-type environments. Note Be sure to use an absolute path when calling this script with sudo. The package recipe qemu-helper-nativeis required to run this script. Build the package using the following command: $ bitbake qemu-helper-native Set the DISPLAY variable: You need to set this variable so that you have an X server available (e.g. start vncserverfor a headless machine). Be sure your host’s firewall accepts incoming connections from 192.168.7.0/24: Some of the tests (in particular DNF tests) start an HTTP server on a random high number port, which is used to serve files to the target. The DNF module serves ${WORKDIR}/oe-rootfs-reposoand iproute2 openSUSE: sysstatand iproute2 Fedora: sysstatand iproute CentOS: sysstatand iproute Once you start running the tests, the following happens: A copy of the root filesystem is written to ${WORKDIR}/testimage. The image is booted under QEMU using the standard runqemuscript. A default timeout of 500 seconds occurs to allow for the boot process to reach the login prompt. You can change the timeout period by setting TEST_QEMUBOOT_TIMEOUT in the local.conffile.in the task log at ${WORKDIR}/temp/log.do_testimage. 3.30.1.2 Enabling Runtime Tests on Hardware¶ The OpenEmbedded build system can run tests on real hardware, and for certain devices it can also deploy the image to be tested onto the device beforehand. For automated deployment, a “controller image” is installed onto the hardware once as part of setup. Then, each time tests are to be run, the following occurs: The controlleru”. For running tests on hardware, the following options are available: Target” if your hardware is an EFI-based machine with systemd-bootas bootloader and core-image-testmaster(or something similar) is installed. Also, your hardware under test must be in a DHCP-enabled network that gives it the same IP address for each reboot. If you choose “SystemdbootTarget”, there are additional requirements and considerations. See the “Selecting SystemdbootTarget” section, which follows, for more information. “BeagleBoneTarget”: Choose “BeagleBoneTarget” if you are deploying images and running tests on the BeagleBone “Black” or original “White” hardware. For information on how to use these tests, see the comments at the top of the BeagleBoneTarget meta-yocto-bsp/lib/oeqa/controllers/beaglebonetarget.pyfile. “EdgeRouterTarget”: Choose “EdgeRouterTarget” if you are deploying images and running tests on the Ubiquiti Networks EdgeRouter Lite. For information on how to use these tests, see the comments at the top of the EdgeRouterTarget meta-yocto-bsp/lib/oeqa/controllers/edgeroutertarget.pyfile. “GrubTarget”: Choose “GrubTarget” if you are deploying images and running tests on any generic PC that boots using GRUB. 
For information on how to use these tests, see the comments at the top of the GrubTarget meta-yocto-bsp/lib/oeqa/controllers/grubtarget.pyfile. /. 3.30.1.3 Selecting SystemdbootTarget¶ If you did not set TEST_TARGET to “SystemdbootTarget”, then you do not need any information in this section. You can skip down to the “Running Tests” section. If you did set TEST_TARGET to “SystemdbootTarget”, you also need to perform a one-time setup of your controller image by doing the following: Set EFI_PROVIDER: Be sure that EFI_PROVIDER is as follows: EFI_PROVIDER = "systemd-boot" Build the controller image: Build the core-image-testmasterimage. The core-image-testmasterrecipe is provided as an example for a “controller” image and you can customize the image recipe as you would any other recipe. Here are the image recipe requirements: Inherits core-imageso filesystem partition. This image uses another installer that creates a specific partition layout. Not all Board Support Packages (BSPs) can use an installer. For such cases, you need to manually create the following partition layout on the target: First partition mounted under /boot, labeled “boot”. The main root filesystem partition where this image gets installed, which is mounted under Another partition labeled “testrootfs” where test images get deployed. Install image: Install the image that you just built on the target system. The final thing you need to do when setting TEST_TARGET to “SystemdbootTarget” is to set up the test image: Set up your local.conf file: Make sure you have the following statements in your local.conffile: IMAGE_FSTYPES += "tar.gz" INHERIT += "testimage" TEST_TARGET = "SystemdbootTarget" TEST_TARGET_IP = "192.168.2.3" Build your test image: Use BitBake to build the image: $ bitbake core-image-sato 3.30.1.4 Power Control¶ For most hardware targets other than “simpleremote”,.conffile:. Note You need to customize TEST_POWERCONTROL_CMD and TEST_POWERCONTROL_EXTRA_ARGS for" 3.30.1.5 Serial Console Connection¶_SERIAL" 3.30.2 Running Tests¶ You can start the tests automatically or manually: Automatically running tests: To run the tests automatically after the OpenEmbedded build system successfully creates an image, first set the TESTIMAGE_AUTO variable to “1” in your local.conffile in the Build Directory: TESTIMAGE_AUTO = "1" Next, build your image. If the image successfully builds, the tests run: bitbake core-image-sato Manually running tests: To manually run the tests, first globally inherit the testimage class by editing your local.conffile:. Note Be sure that module names do not collide with module names used in the default set of test modules in. Note Each module can have multiple classes with multiple test methods. And, Python unittest rules apply. Here are some things to keep in mind when running tests: The default tests for the image are defined as: DEFAULT_TEST_SUITES:pn-image = "ping ssh df connman syslog xorg scp vnc date rpm dnf. 3.30.3 Exporting Tests¶: 3.30.4 Writing New Tests¶ the following: Filenames need to map directly to test (module) names. Do not use module names that collide with existing core tests. Minimally, an empty __init__.pyfile must be present in the runtime directory. To create a new test, start by copying an existing module (e.g. syslog.py or gcc.py are good ones to use). Test modules can use code from meta/lib/oeqa/utils, which are helper classes. Note Structure shell commands such that you rely on them and they return a single code for success. 
Be aware that sometimes you will need to parse the output. See the df.py and date.py modules for examples. You will notice that all test classes inherit oeRuntimeTest, which is found in meta/lib/oetest.py. This base class offers some helper attributes, which are described in the following sections: 3.30.4.1 Class Methods¶ Class methods are as follows: hasPackage(pkg): Returns “True” if pkgis in the installed package list of the image, which is based on the manifest file that is generated during the do_rootfstask. hasFeature(feature): Returns “True” if the feature is in IMAGE_FEATURES or DISTRO_FEATURES. 3.30.4.2 Class Attributes¶ Class attributes are as follows: pscmd: Equals “ps -ef” if procpsis installed in the image. Otherwise, pscmdequals “ps” (busybox). tc: The called testu, SimpleRemote, and SystemdbootTarget). Tests usually use the following: ip: The target’s IP address. server_ip: The host’s IP address, which is usually used by the DNF. 3.30.4.3 Instance Attributes¶ There is a single instance attribute,). 3.30.5 Installing Packages in the DUT Without the Package Manager¶. Note This method uses scp to } ] } 3.31 Debugging Tools and Techniques¶ given a variety of situations. Note A useful feature for debugging is the error reporting tool. Configuring the Yocto Project to use this tool causes the OpenEmbedded build system to produce error reporting commands as part of the console output. You can enter the commands after the build completes to log error information into a common database, that can help you figure out what might be going wrong. For information on how to enable and use this feature, see the “Using the Error Reporting Tool” section. The following list shows the debugging topics in the remainder of this section: “Viewing Logs from Failed Tasks” describes how to find and view logs from tasks that failed during the build process. “Viewing Variable Values” describes how to use the BitBake -eoption to examine variable values after a recipe has been parsed. “Viewing Package Information with oe-pkgdata-util” describes how to use the oe-pkgdata-utilutility to query PKGDATA_DIR and display package-related information for built packages. “Viewing Dependencies Between Recipes and Tasks” describes how to use the BitBake -goption to display recipe dependency information used during the build. “Viewing Task Variable Dependencies” describes how to use the bitbake-dumpsigcommanddebug output option to reveal more about what BitBake is doing during the build. “Building with No Dependencies” describes how to use the BitBake -boption. 3.31.1 Viewing Logs from Failed Tasks¶ You can find the log for a task in the file }. ${WORKDIR log.do_taskname and run.do_taskname are actually symbolic links to log.do_taskname log.run_taskname .pid and .pid, where pid is the PID the task had when it ran. The symlinks always point to the files corresponding to the most recent run. 3.31.2 Viewing Variable Values¶ Note overridden.and :prepend, then the final assembled function body appears in the output. 3.31.3 Viewing Package Information with oe-pkgdata-util¶. Note You can use the standard * and ? globbing wildcards as part of package names and paths. oe-pkgdata-util list-pkgs [pattern]: Lists all packages that have been built, optionally limiting the match to packages that match pattern. oe-pkgdata-util list-pkg-files package ...: Lists the files and directories contained in the given packages. 
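As a quick illustration of the two subcommands just described (the glob and the package names are only examples):

$ oe-pkgdata-util list-pkgs 'busybox*'
$ oe-pkgdata-util list-pkg-files busybox busybox-udhcpd

The first command prints every built package whose name matches the pattern; the second prints the files and directories installed by each named package.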
Note A different way to view the contents of a package is to look at the }/packages-splitdirectory of the recipe that generates the package. This directory is created by the do_package task and has one subdirectory for each package the recipe generates, which contains the files stored in that package. If you want to inspect the ${WORKDIR}/packages-splitdirectory, make sure that rm_work is not enabled when you build the recipe. oe-pkgdata-util find-path path ...: Lists the names of the packages that contain the given paths. For example, the following tells us that /usr/share/man/man1/make.1is contained in the make-docpackage: $ --help $ oe-pkgdata-util subcommand --help 3.31.4 Viewing Dependencies Between Recipes and Tasks¶). Note DOT files use a plain text format. The graphs generated using the bitbake -gcommand are often so large as to be difficult to read without special pruning (e.g. with Bitbake’s -Ioption) and processing. Despite the form and size of the graphs, the corresponding .dotfiles can still be possible to read and provide useful information. As an example, the task-depends.dotfile contains lines such as the following: "libxslt.do_configure" -> "libxml2.do_populate_sysroot" The above example line reveals that the do_configure task in libxsltdepends on the do_populate_sysroot task in libxml2, which is a normal DEPENDS dependency between the two recipes. For an example of how .dotfiles can be processed, see the scripts/contrib/graph-toolPython. 3.31.5 Viewing Task Variable Dependencies¶files contain a pickled Python database of all the metadata that went into creating the input checksum for the task. As an example, for the do_fetch task of the dbrecipe, the sigdatafileinfofile is written into SSTATE_DIR along with the cached task output. The siginfofiles contain exactly the same information as sigdatafiles. Run bitbake-dumpsigon the sigdataor siginfofile.'] Note Functions (e.g. base_do_fetch) also count as variable dependencies. These functions in turn depend on the variables they reference. The output of bitbake-dumpsigalso includes the value each variable had, a list of dependencies for each variable, and BB_BASEHASH_IGNORE_VARS Note Two common values for SIGNATURE_HANDLER are . 3.31.8 Running Specific Tasks¶. Note The reason -f is never required when running the do_devshell task. Note This option is upper-cased and is separate from the -c option,. Note BitBake explicitly keeps track of which tasks have been tainted in this fashion, and will print warnings such as the following for builds involving such tasks: WARNING: /home/ulf/poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb.do_compile is tainted from a forced run The. 3.31.9 General BitBake Problems¶. 3.31.10 Building with No Dependencies¶ To build a specific recipe ( .bb file), you can use the following command form: $ bitbake -b somepath/somerecipe.bb This command form does not check for dependencies. Consequently, you should use it only when you know existing dependencies have been met. Note You can also specify fragments of the filename. In this case, BitBake checks for a unique match. 3.31.11 Recipe Logging Mechanisms¶ The Yocto Project provides several logging functions for producing debugging output and reporting errors and warnings. For Python functions, the following logging functions are available. “Usage and syntax” option in the BitBake User Manual for more information. bb.warn(msg): Writes “WARNING: msg” to the log while also logging to stdout. 
bb.error(msg): Writes “ERROR: msg” to the log while also logging to standard out (stdout). Note Calling this function does not cause the task to fail. bb.fatal(msg): This logging function is similar to bb.error(msg)but also causes the calling task to fail. Note. 3.31.11.1 Logging With Python¶. See the “do_listtasks” section for additional information:") } 3.31.11.2 Logging With Bash¶" } 3.31.12 Debugging Parallel Make Races¶, there are some simple tips and tricks that can help you debug and fix them. This section presents a real-world example of an error encountered on the Yocto Project autobuilder and the process used to fix it. Note If you cannot properly fix a make race condition, you can work around it by clearing either the PARALLEL_MAKE or PARALLEL_MAKEINST variables. 3.31.12.1 The Failure¶ For this example, assume that you are building an image that depends on the “neard” package. And, during the build, BitBake runs into problems and creates the following output. Note This example log file has longer lines artificially broken to make the listing easier to read. If you examine the output or the log file, you see the failure during make: | DEBUG: SITE files ['endian-little', 'bit-32', 'ix86-common', 'common-linux', 'common-glibc', 'i586-linux', 'common'] | DEBUG: Executing shell function do_compile | NOTE: make -j 16 | make --no-print-directory all-am | /types.h include/near/types.h | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/log.h include/near/log.h | ln -s /home/pokybuild/yocto-autobuilder/tag.h include/near/tag.h | /bin/mkdir -p include/near | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/adapter.h include/near/adapter.h | /bin/mkdir -p include/near | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/ndef.h include/near/ndef.h | ln -s /home/pokybuild/yocto-autobuilder/device.h include/near/device.h | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/nfc_copy.h include/near/nfc_copy.h | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/snep.h include/near/snep.h | ln -s /home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/work/i586-poky-linux/neard/ 0.14-r0/neard-0.14/include/version.h include/near/version.h | ln -s /home/pokybuild/yocto-autobuilder --sysroot=/home/pokybuild/yocto-autobuilder/nightly-x86/ build/build/tmp/sysroots/qemux86 -DHAVE_CONFIG_H -I. -I./include -I./src -I./gdbus -I/home/pokybuild/ yocto-autobuilder/nightly-x86/build/build/tmp/sysroots/qemux86/usr/include/glib-2.0 -I/home/pokybuild/yocto-autobuilder/nightly-x86/build/build/tmp/sysroots/qemux86/usr/ lib/glib-2.0/include -I/home/pokybuild/yocto-autobuilder/nightly-x86/build/build/ tmp/sysroots/qemux86/usr/include/dbus-1.0 -I/home/pokybuild/yocto-autobuilder/ nightly-x86/build/build/tmp/sysroots/qemux86/usr/lib/dbus-1.0/include -I/home/pokybuild/yocto-autobuilder/ 3.31.12.2 Reproducing the Error¶, there is a missing dependency for the neard Makefile target. 
Here is some abbreviated, sample output with the missing dependency clearly visible at the end: i586-poky-linux-gcc -m32 -march=i586 --sys $ 3.31.12.3 Creating a Patch for the Fix¶ Quilt in Your is created, 3.31.12.4 Testing the Build¶ “Submitting a Change to the Yocto Project” section for more information. 3.31.13 Debugging With the GNU Project Debugger (GDB) Remotely¶ Note For best results, install debug ( , there are two methods you can use: running a debuginfod server and using gdbserver. 3.31.13.1 Using the debuginfod server method¶ debuginfod from elfutils is a way to distribute debuginfo files. Running a debuginfod server makes debug symbols readily available, which means you don’t need to download debugging information and the binaries of the process being debugged. You can just fetch debug symbols from the server. To run a debuginfod server, you need to do the following: Ensure that debuginfodis present in DISTRO_FEATURES (it already is in OpenEmbedded-coredefaults and pokyreference distribution). If not, set in your distro config file or in local.conf: DISTRO_FEATURES:append = " debuginfod" This distro feature enables the server and client library in elfutils, and enables debuginfodsupport in clients (at the moment, gdband binutils). Run the following commands to launch the debuginfodserver on the host: $ oe-debuginfod To use debuginfodon the target, you need to know the ip:port where debuginfodis listening on the host (port defaults to 8002), and export that into the shell environment, for example in qemu: root@qemux86-64:~# export DEBUGINFOD_URLS=" Then debug info fetching should simply work when running the target gdb, readelfor objdump, for example: root@qemux86-64:~# gdb /bin/cat ... Reading symbols from /bin/cat... Downloading separate debug info for /bin/cat... Reading symbols from /home/root/.cache/debuginfod_client/923dc4780cfbc545850c616bffa884b6b5eaf322/debuginfo... It’s also possible to use debuginfod-findto just query the server: root@qemux86-64:~# debuginfod-find debuginfo /bin/ls /home/root/.cache/debuginfod_client/356edc585f7f82d46f94fcb87a86a3fe2d2e60bd/debuginfo 3.31.13.2 Using the gdbserver method¶ gdbserver, which following steps show you how to debug using the GNU project debugger. Configure your build system to construct the companion debug filesystem: In your local.conffile,packages..conffile or in an image recipe: IMAGE_INSTALL:append = " gdbserver" The change makes sure the gdbserverpackagedbyou can use for debugging during development. While this is the quickest approach, the two previous methods in this step are better when considering long-term maintenance strategies. Note If you run PATHenvironment variable. If you are using the build system, Gdb is located in build-dir /tmp/sysroots/host /usr/bin/architecture -gdb Boot the target: For information on how to run QEMU, see the QEMU Documentation. Note Be sure to verify that your host can access the target via TCP.). Note The Gdbcontents and corresponding /usr/src/debugfiles) from the work directory. Here is an example: $ bitbake bash $ bitbake -c devshell bash $ cd .. $ scp packages-split/bash/bin/bash target:/bin/bash $ cp -a packages-split/bash-dbg/\* path/debugfs 3.31.14 Debugging with the GNU Project Debugger (GDB) on the Target¶" Note To improve the debug information accuracy, you can reduce the level of optimization used by the compiler. 
For example, when adding the following line to your local.conf file, you will reduce optimization from FULL_OPTIMIZATION of “-O2” to DEBUG_OPTIMIZATION of “-O -fno-omit-frame-pointer”: DEBUG_BUILD = "1" Consider that this will reduce the application’s performance and is recommended only for debugging purposes. 3.31.15 Other Debugging Tips¶boot splashscreen, add psplash=falseto the kernel command line. Doing so prevents psplashfrom loading and thus allows you to see the console.directories,. Note The manuals might not be the right place to document variables that are purely internal and have a limited scope (e.g. internal variables used to implement a single .bbclassfile). 3.32 Making Changes to the Yocto Project¶ Because the Yocto Project is an open-source, community-based project, you can effect changes to the project. This section presents procedures that show you how to submit a defect against the project and how to submit a change. 3.32.1 Submitting a Defect Against the Yocto Project¶-intellayer, you would choose “Build System, Metadata & Runtime”, “BSPs”, and “bsps-meta-intel”, respectively. Choose the “Version” of the Yocto Project for which you found the bug (e.g. 4.0.999).. 3.32.2 Submitting a Change to the Yocto Project¶, there is a mailing listor scriptsdirectories should be sent to this mailing list. BitBake: For changes to BitBake (i.e. anything under the bitbakedirectory), send your patch to the bitbake-devel mailing list. “meta-*” trees: These trees contain Metadata. Use the poky mailing list. Documentation: For changes to the Yocto Project documentation, use the docs mailing list. For changes to other layers hosted in the Yocto Project source repositories (i.e. yoctoproject.org) and tools use the Yocto Project general mailing list. Note Sometimes a layer’s documentation specifies to use a particular mailing list. If so, use that “Git Workflows and the Yocto Project” section in the Yocto Project Overview and Concepts Manual for additional concepts on working in the Yocto Project development environment. Maintainers commonly use -next branches to test submissions prior to merging patches. Thus, you can get an idea of the status of a patch based on whether the patch has been merged into one of these branches. The commonly used testing branches for OpenEmbedded-Core are as follows: openembedded-core “master-next” branch: This branch is part of the openembedded-core repository and contains proposed changes to the core metadata. poky “master-next” branch: This branch is part of the poky repository and combines proposed changes to bitbake, the core metadata and the poky distro. Similarly, stable branches maintained by the project may have corresponding -next branches which collect proposed changes. For example, kirkstone-next and honister-next branches in both the “openembdedded-core” and “poky” repositories. Other layers may have similar testing branches but there is no formal requirement or standard for these so please check the documentation for the layers you are contributing to. The following sections provide procedures for submitting a change. 3.32.2.1 Preparing Changes for Submission¶ Make Your Changes Locally: Make your changes in your local Git repository. You should make small, controlled, isolated changes. Keeping changes small and isolated aids review, makes merging/rebasing easier and keeps the change history clean should anyone need to refer to it in future. Stage Your Changes: Stage your changes by using the git addcommand on each file you changed. 
Commit Your Changes: Commit the change by using the git commitcommand. Make sure your commit information follows standards by following these accepted conventions: Be sure to include a “Signed-off-by:” line in the same style as required by the Linux kernel. This can be done by using the git commit -scommand.. also be helpful if you mention how you tested the change. Provide as much detail as you can in the body of the commit message. Note You do not need to provide a more detailed explanation of a change if the change is minor to the point of the single line summary providing all the information.. Be sure to use the actual bug-tracking ID from Bugzilla for bug-id: Fixes [YOCTO #bug-id] detailed description of change 3.32.2.2 Using Email to Submit a Patch¶ Depending on the components changed, you need to submit the email to a specific mailing list. For some guidance on which mailing list to use, see the list at the beginning of this section. For a description of all the available mailing lists, see the “Mailing Lists” section in the Yocto Project Reference Manual. Here is the general procedure on how to submit a patch through email without using the scripts once the steps in Preparing Changes for Submission have been followed: Format the Commit: Format the commit into an email message. To format commits, use the git format-patchcommand.file for the commit. If you provide several commits as part of the command, the git format-patchcommand produces a series of numbered files in the current directory – one for each commit. If you have more than one patch, you should also use the --coveroption with the command, which generates a cover letter as the first “patch” in the series. You can then edit the cover letter to provide a description for the series of patches. For information on the git format-patchcommand, see GIT_FORMAT_PATCH(1)displayed using the man git-format-patchcommand. Note If you are or will be a frequent contributor to the Yocto Project or to OpenEmbedded, you might consider requesting a contrib area and the necessary associated rights. Send the patches via email: Send the patches to the recipients and relevant mailing lists by using the git send-emailcommand. Note In order to use git send-email, you must have the proper Git packages installed on your host. For Ubuntu, Debian, and Fedora the package is git-email. The git send-emailcommand sends email by using a local or remote Mail Transport Agent (MTA) such as msmtp, sendmail, or through a direct smtpconfiguration in your Git ~/.gitconfigfile.command is the preferred method for sending your patches using emailcommand, see GIT-SEND-EMAIL(1)displayed using the man git-send-emailcommand. The Yocto Project uses a Patchwork instance to track the status of patches submitted to the various mailing lists and to support automated patch testing. Each submitted patch is checked for common mistakes and deviations from the expected patch format and submitters are notified by patchtest if such mistakes are found. This process helps to reduce the burden of patch review on maintainers. Note This system is imperfect and changes can sometimes get lost in the flow. Asking about the status of a patch or change is reasonable if the change has been idle for a while with no feedback. 3.32.2.3 Using Scripts to Push a Change Upstream and Request a Pull¶ from openembedded-core to create and send a patch series with a link to the branch for review. 
Follow this procedure to push a change to an upstream “contrib” Git repository once the steps in Preparing Changes for Submission have been followed: Note You can find general Git information on how to push a change upstream in the Git Communityrepository and you are working in a local branch named your_name /README. The following command pushes your local commits to the meta-intel-contribupstreamfile, that you have pushed a change by making a pull request. The Yocto Project provides two scripts that conveniently let you generate and send pull requests to the Yocto Project. These scripts are create-pull-requestand send-pull-request. You can find these scripts in the scriptsdirectory within the Source Directory files in a folder named pull-PID in the current directory. One of the patch files is a cover letter. Before running the send-pull-requestscript,@lists.yoctoproject.org You need to follow the prompts as the script is interactive. Note For help on using these scripts, simply provide the -hargument as follows: $ poky/scripts/create-pull-request -h $ poky/scripts/send-pull-request -h 3.32.2.4 Responding to Patch Review¶ You may get feedback on your submitted patches from other community members or from the automated patchtest service. If issues are identified in your patch then it is usually necessary to address these before the patch will be accepted into the project. In this case you should amend the patch according to the feedback and submit an updated version to the relevant mailing list, copying in the reviewers who provided feedback to the previous version of the patch. The patch should be amended using git commit --amend or perhaps git rebase for more expert git users. You should also modify the [PATCH] tag in the email subject line when sending the revised patch to mark the new iteration as [PATCH v2], [PATCH v3], etc as appropriate. This can be done by passing the -v argument to git format-patch with a version number. Lastly please ensure that you also test your revised changes. In particular please don’t just edit the patch file written out by git format-patch and resend it. 3.32.2.5 Submitting Changes to Stable Release Branches¶ The process for proposing changes to a Yocto Project stable branch differs from the steps described above. Changes to a stable branch must address identified bugs or CVEs and should be made carefully in order to avoid the risk of introducing new bugs or breaking backwards compatibility. Typically bug fixes must already be accepted into the master branch before they can be backported to a stable branch unless the bug in question does not affect the master branch or the fix on the master branch is unsuitable for backporting. The list of stable branches along with the status and maintainer for each branch can be obtained from the Releases wiki page. Note Changes will not typically be accepted for branches which are marked as End-Of-Life (EOL). With this in mind, the steps to submit a change for a stable branch are as follows: Identify the bug or CVE to be fixed: This information should be collected so that it can be included in your submission. See Checking for Vulnerabilities for details about CVE tracking. Check if the fix is already present in the master branch: This will result in the most straightforward path into the stable branch for the fix. 
If the fix is present in the master branch - Submit a backport request by email: You should send an email to the relevant stable branch maintainer and the mailing list with details of the bug or CVE to be fixed, the commit hash on the master branch that fixes the issue and the stable branches which you would like this fix to be backported to. If the fix is not present in the master branch - Submit the fix to the master branch first: This will ensure that the fix passes through the project’s usual patch review and test processes before being accepted. It will also ensure that bugs are not left unresolved in the master branch itself. Once the fix is accepted in the master branch a backport request can be submitted as above. If the fix is unsuitable for the master branch - Submit a patch directly for the stable branch: This method should be considered as a last resort. It is typically necessary when the master branch is using a newer version of the software which includes an upstream fix for the issue or when the issue has been fixed on the master branch in a way that introduces backwards incompatible changes. In this case follow the steps in Preparing Changes for Submission and Using Email to Submit a Patch but modify the subject header of your patch email to include the name of the stable branch which you are targetting. This can be done using the --subject-prefixargument to git format-patch, for example to submit a patch to the dunfell branch use git format-patch --subject-prefix='honister][PATCH' .... 3.33 Working With Licenses¶ disabled. 3.33.1 Tracking License Changes¶. 3.33.1.1 Specifying the LIC_FILES_CHKSUM Variable¶ \ ..." Note”. 3.33.1.2 Explanation of Syntax¶. Note If you specify an empty or invalid “md5” parameter, BitBake returns an md5 mis-match error and displays the correct “md5” parameter value during the build. The correct parameter is also captured in the build log. If the whole file contains only license text, you do not need to use the “beginline” and “endline” parameters. 3.33.2 Enabling Commercially Licensed Recipes¶ By default, the OpenEmbedded build system disables components that have commercial or other special licensing requirements. Such requirements are defined on a recipe-by-recipe basis through the LICENSE_FLAGS variable definition in the affected recipe. For instance, the_ACCEPTED variable, which is a variable typically defined in your local.conf file. For example, to enable the poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly package, you could add either the string “commercial_gst-plugins-ugly” or the more general string “commercial” to LICENSE_FLAGS_ACCEPTED. See the “License Flag Matching” section for a full explanation of how LICENSE_FLAGS matching works. Here is the example: LICENSE_FLAGS_ACCEPTED = _ACCEPTED = "commercial_gst-plugins-ugly license_emgd_1.10" As a convenience, you do not need to specify the complete license string value will also match both of the packages previously mentioned as well as any other packages that have licenses starting with “commercial” or “license”. LICENSE_FLAGS_ACCEPTED = "commercial license" 3.33.2.1 License Flag Matching¶ License flag matching allows you to control what recipes the OpenEmbedded build system includes in the build. Fundamentally, the build system attempts to match LICENSE_FLAGS strings found in recipes against strings found in LICENSE_FLAGS_ACCEPTED. entries of LICENSE_FLAGS_ACCEPTED, the expanded string _${PN} is appended to the flag. 
This expansion makes each LICENSE_FLAGS value recipe-specific. After expansion, the string is then matched against the entries. Thus, specifying LICENSE_FLAGS = "commercial" in recipe “foo”, for example, results in the string "commercial_foo". And, to create a match, that string must appear among the entries of LICENSE_FLAGS_ACCEPTED. Judicious use of the LICENSE_FLAGS strings and the contents of the LICENSE_FLAGS_ACCEPTED variable allows you a lot of flexibility for including or excluding recipes based on licensing. For example, you can broaden the matching capabilities by using license flags string subsets in LICENSE_FLAGS_ACCEPTED. Note When using a string subset, be sure to use the part of the expanded string that precedes the appended underscore character (e.g. usethispart_1.3, usethispart_1.4, and so forth). For example, simply specifying the string “commercial” in the LICENSE_FLAGS_ACCEPTED variable list and allow only specific recipes into the image, or you can use a string subset that causes a broader range of matches to allow a range of recipes into the image. This scheme works even if the LICENSE_FLAGS string already has _${PN} appended. For example, the build system turns the license flag “commercial_1.2_foo” into “commercial_1.2_foo_foo” and would match both the general “commercial” and the specific “commercial_1.2_foo” strings found in the LICENSE_FLAGS_ACCEPTED variable, as expected. Here are some other scenarios: You can specify a versioned string in the recipe such as “commercial_foo_1.2” in a “foo” recipe. The build system expands this string to “commercial_foo_1.2_foo”. Combine this license flag with a LICENSE_FLAGS_ACCEPTED variable that has the string “commercial” and you match the flag along with any other flag that starts with the string “commercial”. Under the same circumstances, you can add “commercial_foo” in the LICENSE_FLAGS_ACCEPTED variable and the build system not only matches “commercial_foo_1.2” but also matches any license flag with the string “commercial_foo”, regardless of the version. You can be very specific and use both the package and version parts in the LICENSE_FLAGS_ACCEPTED list (e.g. “commercial_foo_1.2”) to specifically match a versioned recipe. 3.33.3 Maintaining Open Source License Compliance During Your Product’s Lifecycle¶ there are three main areas of concern: Source code must be provided. License text for the software must be provided. Compilation scripts and modifications to the source code must be provided. spdx files can. Note The Yocto Project generates a license manifest during image creation that is located in ${DEPLOY_DIR}/licenses/image_name -datestamp to assist with any audits. 3.33.3.1 Providing the Source Code¶ -ge 1 ]; then echo Archiving $p mkdir -p $src_release_dir/$p/source cp $d/* $src_release_dir/$p/source 2> /dev/null mkdir -p $src_release_dir/$p/license cp tmp/deploy/licenses/$p/* $src_release_dir/$p/license 2> /dev/null fi done. 3.33.3.2 Providing License Text¶" LICENSE_CREATE_PACKAGE = "1" Adding these statements to the configuration file ensures that the licenses collected during package generation are included on your image. Note. 
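Collected in one place, the local.conf statements this section relies on look roughly like this; a sketch that assumes the standard COPY_LIC_MANIFEST and COPY_LIC_DIRS variables for copying the license manifest and per-package license files into the image:

COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"
LICENSE_CREATE_PACKAGE = "1"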
3.33.3.3 Providing Compilation Scripts and Source Code Modifications¶ At this point, we have addressed all we need to dunfell branch of the poky repo $ git clone -b dunfell-poky/conf/bblayers.conf.sample to ensure that when the end user utilizes the released build system to build an image, the development organization’s layers are included in the bblayers.conf file automatically: # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf # changes incompatibly POKY_BBLAYERS_CONF_VERSION = "2" BBPATH = "${TOPDIR}" BBFILES ?= "" BBLAYERS ?= " \ ##OEROOT##/meta \ ##OEROOT##/meta-poky \ ##OEROOT##/meta-yocto-bsp \ ##OEROOT##/meta-mylayer \ " Creating and providing an archive of the Metadata layers (recipes, configuration files, and so forth) enables you to meet your requirements to include the scripts to control compilation as well as any modifications to the original source. 3.33.3.4 Providing spdx files¶ The spdx module has been integrated to a layer named meta-spdxscanner. meta-spdxscanner provides several kinds of scanner. If you want to enable this function, you have to follow the following steps: Add meta-spdxscanner layer into bblayers.conf. Refer to the README in meta-spdxscanner to setup the environment (e.g, setup a fossology server) needed for the scanner. Meta-spdxscanner provides several methods within the bbclass to create spdx files. Please choose one that you want to use and enable the spdx task. You have to add some config options in local.conffile in your Build Directory. Here is an example showing how to generate spdx files during bitbake using the fossology-python.bbclass: # Select fossology-python.bbclass. INHERIT += "fossology-python" # For fossology-python.bbclass, TOKEN is necessary, so, after setup a # Fossology server, you have to create a token. TOKEN = "eyJ0eXAiO..." # The fossology server is necessary for fossology-python.bbclass. FOSSOLOGY_SERVER = " # If you want to upload the source code to a special folder: FOLDER_NAME = "xxxx" //Optional # If you don't want to put spdx files in tmp/deploy/spdx, you can enable: SPDX_DEPLOY_DIR = "${DEPLOY_DIR}" //Optional For more usage information refer to the meta-spdxscanner repository. 3.33.3.5 Compliance Limitations with Executables Built from Static Libraries¶ When package A is added to an image via the RDEPENDS or RRECOMMENDS mechanisms as well as explicitly included in the image recipe with IMAGE_INSTALL, and depends on a static linked library recipe B ( DEPENDS += "B"), package B will neither appear in the generated license manifest nor in the generated source tarballs. This occurs as the license and archiver classes assume that only packages included via RDEPENDS or RRECOMMENDS end up in the image. As a result, potential obligations regarding license compliance for package B may not be met. The Yocto Project doesn’t enable static libraries by default, in part because of this issue. Before a solution to this limitation is found, you need to keep in mind that if your root filesystem is built from static libraries, you will need to manually ensure that your deliveries are compliant with the licenses of these libraries. 3.33.4 Copying Non Standard Licenses¶ Some packages, such as the linux-firmware package, have many licenses that are not in any way common. You can avoid adding a lot of these types of common license files, which are only applicable to a specific package, by using the NO_GENERIC_LICENSE variable. 
Using this variable also avoids QA errors when you use a non-common, non-CLOSED license in a recipe. Here is an example that uses the LICENSE.Abilis.txt file as the license from the fetched source: NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt" 3.34 Checking for Vulnerabilities¶ 3.34.1 Vulnerabilities in images¶ The Yocto Project has an infrastructure to track and address unfixed known security vulnerabilities, as tracked by the public Common Vulnerabilities and Exposures (CVE) database. To know which packages are vulnerable to known security vulnerabilities, add the following setting to your configuration: INHERIT += "cve-check" This way, at build time, BitBake will warn you about known CVEs as in the example below: WARNING: flex-2.6.4-r0 do_cve_check: Found unpatched CVE (CVE-2019-6293), for more information check /poky/build/tmp/work/core2-64-poky-linux/flex/2.6.4-r0/temp/cve.log WARNING: libarchive-3.5.1-r0 do_cve_check: Found unpatched CVE (CVE-2021-36976), for more information check /poky/build/tmp/work/core2-64-poky-linux/libarchive/3.5.1-r0/temp/cve.log It is also possible to check the CVE status of individual packages as follows: bitbake -c cve_check flex libarchive Note that OpenEmbedded-Core keeps a list of known unfixed CVE issues which can be ignored. You can pass this list to the check as follows: bitbake -c cve_check libarchive -R conf/distro/include/cve-extra-exclusions.inc 3.34.2 Enabling vulnerabily tracking in recipes¶ The CVE_PRODUCT variable defines the name used to match the recipe name against the name in the upstream NIST CVE database. 3.34.3 Editing recipes to fix vulnerabilities¶ To fix a given known vulnerability, you need to add a patch file to your recipe. Here’s an example from the ffmpeg recipe: SRC_URI = " \ \ \ \ \ \ \ \ The cve-check class defines two ways of supplying a patch for a given CVE. The first way is to use a patch filename that matches the below pattern: cve_file_name_match = re.compile(".*([Cc][Vv][Ee]\-\d{4}\-\d+)") As shown in the example above, multiple CVE IDs can appear in a patch filename, but the cve-check class will only consider the last CVE ID in the filename as patched. The second way to recognize a patched CVE ID is when a line matching the below pattern is found in any patch file provided by the recipe: cve_match = re.compile("CVE:( CVE\-\d{4}\-\d+)+") This allows a single patch file to address multiple CVE IDs at the same time. Of course, another way to fix vulnerabilities is to upgrade to a version of the package which is not impacted, typically a more recent one. The NIST database knows which versions are vulnerable and which ones are not. Last but not least, you can choose to ignore vulnerabilities through the CVE_CHECK_SKIP_RECIPE and CVE_CHECK_IGNORE variables. 3.34.4 Implementation details¶ Here’s what the cve-check class does to find unpatched CVE IDs. First the code goes through each patch file provided by a recipe. If a valid CVE ID is found in the name of the file, the corresponding CVE is considered as patched. Don’t forget that if multiple CVE IDs are found in the filename, only the last one is considered. Then, the code looks for CVE: CVE-ID lines in the patch file. The found CVE IDs are also considered as patched. Then, the code looks up all the CVE IDs in the NIST database for all the products defined in CVE_PRODUCT. Then, for each found CVE: - If the package name (PN) is part of CVE_CHECK_SKIP_RECIPE, it is considered as patched. 
- If the CVE ID is part of CVE_CHECK_IGNORE, it is considered as patched too. - If the CVE ID is part of the patched CVE for the recipe, it is already considered as patched. - Otherwise, the code checks whether the recipe version (PV) is within the range of versions impacted by the CVE. If so, the CVE is considered as unpatched. The CVE database is stored in DL_DIR and can be inspected using sqlite3 command as follows: sqlite3 downloads/CVE_CHECK/nvdcve_1.1.db .dump | grep CVE-2021-37462 3.35 Using the Error Reporting Tool¶. There is a live instance of the error reporting server at When you want to get help with build failures, you can submit all of the information on the failure easily and then point to the URL in your bug report or send an email to the mailing list. Note If you send error reports to this server, the reports become publicly visible. 3.35.1 Enabling and Using the Tool¶ By default, the error reporting tool is disabled. You can enable it by inheriting the report-error class by adding the following statement to the end of your local.conf file in your Build Directory. INHERIT += "report-error" By default, the error reporting feature stores information in }/error-report. However, you can specify a directory to use by adding the following to your local.conf file: ${LOG_DIR that corresponds to your entry in the database. For example, here is a typical link: Following the link takes you to a web interface where you can browse, query the errors, and view statistics. 3.35.2 Disabling the Tool¶ To disable the error reporting feature, simply remove or comment out the following statement from the end of your local.conf file in your Build Directory. INHERIT += "report-error" 3.35.3 Setting Up Your Own Error Reporting Server¶ If you want to set up your own error reporting server, you can obtain the code from the Git repository at Instructions on how to set it up are in the README document. 3.36 Using Wayland and Weston¶ Wayland is a computer display server protocol that its release. You can find Embedded Media and Graphics Driver (Intel EMGD) that overrides Mesa DRI. Note Due to lack of EGL support, Weston 1.0.3 will not run directly on the emulated QEMU hardware. However, this version of Weston will run under X emulation without issues. This section describes what you need to do to implement Wayland and use the Weston compositor when building an image for a supporting target. 3.36.1 Enabling Wayland in an Image¶ To enable Wayland, you need to enable it to be built and enable it to be included (installed) in the image. 3.36.1.1 Building Wayland¶ To cause Mesa to build the wayland-egl platform and Weston to build Wayland with Kernel Mode Setting (KMS) support, include the “wayland” flag in the DISTRO_FEATURES statement in your local.conf file: DISTRO_FEATURES:append = " wayland" Note If X11 has been enabled elsewhere, Weston will build Wayland with X11 support 3.36.1.2 Installing Wayland and Weston¶ To install the Wayland feature into an image, you must include the following CORE_IMAGE_EXTRA_INSTALL statement in your local.conf file: CORE_IMAGE_EXTRA_INSTALL += "wayland weston" 3.36.2 Running Weston¶
https://docs.yoctoproject.org/dev-manual/common-tasks.html
What’s new in Celery 3.0 (Chiastic Slide)¶. Highlights¶ Overview A new and improved API, that-flows. Event-loop_FORCE_EXECV setting is enabled by default if the event-loop isn’t used. New celery umbrella command¶ All Celery’s command-line programs are now available from a single celery umbrella command. You can see a list of sub-commands and options by running: $ celery help Commands include: celery worker(previously celeryd). celery beat(previously celerybeat). celery amqp(previously camqadm). The old programs are still available ( celeryd, celerybeat, etc), but you’re discouraged from using them. Now depends on billiard¶. Issue #625 Issue #627 Issue #640 django-celery #122 < django-celery #124 < celery.app.task no longer a package¶. Last version to support Python 2.5¶ aren’t compatible with Celery versions prior to 2.5. You can disable UTC and revert back to old local time by setting the CELERY_ENABLE_UTC setting. Redis: Ack emulation improvements¶ Reducing the possibility of data loss. Acks are now implemented by storing a copy of the message when the message is consumed. The copy isn’t removed until the consumer acknowledges or rejects it. This means that unacknowledged messages will be redelivered either when the connection is closed, or when the visibility timeout is exceeded. - Visibility timeout This is a timeout for acks, so that if the consumer doesn’t ack the message within this time limit, the message is redelivered to another consumer. The timeout is set to one hour by default, but can be changed by configuring a transport option:BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 18000} # 5 hours Note Messages that haven’ll take a long time for messages to be redelivered in the event of a power failure, but if so happens you could temporarily set the visibility timeout lower to flush out messages when you start up the systems again. News¶ Chaining Tasks¶and link_errorkeywordinstances. AsyncResult.iterdeps Recursively iterates over the tasks dependencies, yielding (parent, node) tuples. Raises IncompleteStream if any of the dependencies hasn’t returned yet. AsyncResult.graph A DependencyGraphof the tasks dependencies. With this you can also convert to dot format: with open('graph.dot') as fh: result.graph.to_dot(fh) then produce an image of the graph: $ dot -Tpng graph.dot -o graph.png A new special subtask called chain) Redis: Priority support¶ isn’t as reliable as priorities on the server side, which is why the feature is nicknamed “quasi-priorities”; Using routing is still the suggested way of ensuring quality of service, as client implemented priorities fall short in a number of ways, for example. Redis: Now cycles queues so that consuming is fair¶ together, since it was very difficult to migrate the TaskSetclass to become a subtask. A new shortcut has been added to tasks: >>> task.s(arg1, arg2, kw=1) as a shortcut to: >>> task.subtask((arg1, arg2), {'kw': 1}) Tasks can be chained by using the >>> (add.s(2, 2), pow.s(2)).apply_async() Subtasks can be “evaluated” using the >>> ~add.s(2, 2) 4 >>> ~(add.s(2, 2) | pow.s(2)) is the same as: >>> chain(add.s(2, 2), pow.s(2)).apply_async().get() A new subtask_type key has been added to the subtask dictionary.) 
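Putting the new pieces together, here is a small self-contained sketch of the 3.0-style API (the broker/backend URLs and the task bodies are illustrative, and a worker must be running for get() to return):

from celery import Celery, chain

celery = Celery('tasks',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

@celery.task
def add(x, y):
    return x + y

@celery.task
def pow(x, n):
    return x ** n

# add.s(2, 2) creates a subtask (signature); chaining passes each result
# on to the next task in the chain.
result = chain(add.s(2, 2), pow.s(2)).apply_async()

# The | operator is the shortcut form of the same chain:
result = (add.s(2, 2) | pow.s(2)).apply_async()
print(result.get())  # add(2, 2) -> 4, then pow(4, 2) -> 16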
New remote control commands¶ These commands were previously experimental, but they’ve proven stable and is now documented as part of the officialenabled.control’s can now be immutable, which means that the arguments won’t be modified when calling callbacks: >>> chain(add.s(2, 2), clear_static_electricity.si()) means it’ll not receive the argument of the parent task, and .si() is a shortcut to: >>> clear_static_electricity.subtask(immutable=True) Logging Improvements¶’s used throughout. All loggers inherit from a common logger called “celery”. Before task.get_loggerwould setup a new logger for every task, and even set the log level. This is no longer the case. Instead all task loggers now inherit from a common “celery.task” logger that’s set up when programs call setup_logging_subsystem. Instead of using LoggerAdapter to augment the formatter with the task_id and task_name field, the task base logger now use a special formatter adding these values at run-time from the currently executing task. In fact, task.get_logger Task registry no longer global¶ Abstract tasks are now lazily bound¶>> Lazy task decorators¶ The @task decorator is now lazy when used with custom apps. That is, if accept_magic_kwargs is enabled (her by called “compat mode”), the task decorator executes inline like before, however for custom apps the @task decorator now returns a special PromiseProxy object that module named celery’, and get the celery attribute from that module. For example, if you have a project named proj where the celery app is located in from’t be signaled (Issue #595). Contributed by Brendon Crawford. Redis event monitor queues are now automatically deleted (Issue #436). App instance factory methods have been converted to be cached descriptors that creates a new subclass on access. For example, this meanscommand, which is an entry-point for all other commands. The main for this command can be run by calling celery.start(). Annotations now supports decorators if the key starts with ‘@’. For example: def debug_args(fun): @wraps(fun) def _inner(*args, **kwargs): print('ARGS: %r' % (args,)) return _inner CELERY_ANNOTATIONS = { 'tasks.add': {'@__call__': debug_args}, } Also tasks are now always bound by class so that annotated methods end up being bound. Bug-report – for example with_FORCE_EXECVis now enabled by default. If the old behavior is wanted the setting can be set to False, or the new –no-execv option to celery worker. Deprecated module celery.confhas been removed. The CELERY_TIMEZONEnow always require the pytz library to be installed (except if the timezone is set to UTC). The Tokyo Tyrant backend has been removed and is no longer supported. Now uses maybe_declare()to cache queue declarations. There’s no longer a global default for the CELERYBEAT_MAX_LOOP_INTERVALsetting, it is instead set by individual schedulers. Worker: now truncates very long message bodies in error reports. No longer deep-copies/Beat no longer logs the start-upon the remote control command queue twice. Probably didn’t cause any problems, but was unnecessary. Internals¶ app.broker_connectionis now app.connection Both names still work. Compatibility modules are now generated dynamically upon use. These modules are celery.messaging, celery.log, celery.decoratorsand celery.registry. celery.utilsrefactored into multiple modules: Now using kombu.utils.encodinginstead. Experimental¶ celery.contrib.methods: Task decorator for methods¶. Unscheduled Removals¶ Deprecation Time-line Changes¶ See the Celery Deprecation Time-line. 
Inspect commands don’t modify anything, while idempotent control commands that make changes are available on the control objects.

Fixes¶
- Retry SQLAlchemy backend operations on DatabaseError/OperationalError (Issue #634).
- Tasks that called retry weren’t acknowledged if acks_late was enabled. Fix contributed by David Markey.
- The message priority argument wasn’t properly propagated to Kombu (Issue #708). Fix contributed by Eran Rundstein.
https://docs.celeryq.dev/en/latest/history/whatsnew-3.0.html
2022-05-16T12:49:40
CC-MAIN-2022-21
1652662510117.12
[]
docs.celeryq.dev
GraphQL API spam protection and CAPTCHA support

If the model can be modified via the GraphQL API, you must also add support to all of the relevant GraphQL mutations which may modify spammable or spam-related attributes. This definitely includes the Create and Update mutations, but may also include others, such as those related to changing a model’s confidential/public flag.

Add support to the GraphQL mutations

The main steps are:
- Use include Mutations::SpamProtection in your mutation.
- Create a spam_params instance based on the request. Obtain the request from the context via context[:request] when creating the SpamParams instance.
- Pass spam_params to the relevant Service class constructor.
- After you create or update the Spammable model instance, call #check_spam_action_response! and pass it the model instance. This call:
  - Performs the necessary spam checks on the model.
  - If spam is detected:
    - Raises a GraphQL::ExecutionError exception.
    - Includes the relevant information added as error fields to the response via the extensions: parameter. For more details on these fields, refer to the section in the GraphQL API documentation on Resolve mutations detected as spam.

If you use the standard ApolloLink or Axios interceptor CAPTCHA support described above, you can ignore the field details, because they are handled automatically. They become relevant if you attempt to use the GraphQL API directly to process a failed check for potential spam, and resubmit the request with a solved CAPTCHA response.

For example:

module Mutations
  module Widgets
    class Create < BaseMutation
      include Mutations::SpamProtection

      def resolve(args)
        spam_params = ::Spam::SpamParams.new_from_request(request: context[:request])

        service_response = ::Widgets::CreateService.new(
          project: project,
          current_user: current_user,
          params: args,
          spam_params: spam_params
        ).execute

        widget = service_response.payload[:widget]

        check_spam_action_response!(widget)

        # If possible spam was detected, an exception would have been thrown by
        # `#check_spam_action_response!`, so the normal resolve return logic can follow below.
      end
    end
  end
end

Refer to the Exploratory Testing section for instructions on how to test CAPTCHA behavior in the GraphQL API.
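As a sketch only, a client calling such a mutation might send a document like the following; the widgetCreate field and its input arguments are hypothetical and depend on your schema, and any spam-related details arrive in the error extensions as described above:

mutation {
  widgetCreate(input: { projectPath: "group/project", title: "My widget" }) {
    widget {
      id
      title
    }
    errors
  }
}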
https://docs.gitlab.com/ee/development/spam_protection_and_captcha/graphql_api.html
2022-05-16T12:52:35
CC-MAIN-2022-21
1652662510117.12
[]
docs.gitlab.com
Use multiple units of measure when synchronizing items and resources to Dynamics 365 Sales Important This content is archived and is not being updated. For the latest documentation, go to What's new and planned for Dynamics 365 Business Central. For the latest release plans, go to Dynamics 365 and Microsoft Power Platform release plans. Business value Businesses often track inventory for items in one unit of measure, such as pieces, but due to different market needs they may sell the items in several different units of measure, such as boxes or containers. Integration between Business Central and Microsoft Dynamics 365 Sales now allows these services to exchange information about items in multiple units of measure. Feature details When you enable the Feature Update: Multiple Units of Measure Synchronization with Dynamics 365 Sales feature in Feature Management, Business Central will create unit groups for items and resources through the data update process. You can now view two new integration table mappings for Item Units of Measure (ITEM-UOM) and Resource Units Of Measure (RESOURCE-UOM). Item or resource unit groups automatically generate sets of units of measure for items and resources that match the units in Dynamics 365 Sales. On the Item Unit Group List or Resource Unit Group List pages, you can view coupled unit groups by choosing the Unit Group action, you can synchronize unit group data by using the Synchronize action, and couple or delete unit groups by choosing Coupling, Set up coupling or Delete coupling. From the Item Units of Measure and Resource Units of Measure pages, for each units of measure in a unit group, you can now view the coupled unit by choosing the Unit action, use the Synchronize action to synchronize unit data, or couple or delete a coupling by choosing Coupling, Set up coupling or Delete coupling. Note You need to enable Feature Update: Multiple Units of Measure Synchronization with Dynamics 365 Sales in Feature Management Synchronization Rules (docs)
https://docs.microsoft.com/en-us/dynamics365-release-plan/2021wave2/smb/dynamics365-business-central/use-multiple-units-measure-when-synchronizing-items-resources-dynamics-365-sales
2022-05-16T13:48:31
CC-MAIN-2022-21
1652662510117.12
[]
docs.microsoft.com
SQLAlchemy 2.0 Documentation SQLAlchemy 2.0 Documentation in development Home | Download this Documentation Dialects - PostgreSQL - MySQL and MariaDB - SQLite - Oracle¶ - - Microsoft SQL Server Project Versions - Previous: SQLite - Next: Microsoft SQL Server - Up: Home - On this page: - Oracle - Oracle¶ Support for the Oracle database. The following table summarizes current support levels for database release versions. DBAPI Support¶ The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information. Auto Increment Behavior¶ SQLAlchemy Table objects which include integer primary keys are usually assumed to have “autoincrementing” behavior, meaning they can generate their own primary key values upon INSERT. For use within Oracle, two options are available, which are the use of IDENTITY columns (Oracle 12 and above only) or the association of a SEQUENCE with the column. Specifying GENERATED AS IDENTITY (Oracle 12 and above)¶ Starting from version 12 Oracle can make use of identity columns using the Identity to specify the autoincrementing behavior: t = Table('mytable', metadata, Column('id', Integer, Identity(start=3), primary_key=True), Column(...), ... ) The CREATE TABLE for the above Table object would be: CREATE TABLE mytable ( id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 3), ..., PRIMARY KEY (id) ) The Identity object support many options to control the “autoincrementing” behavior of the column, like the starting value, the incrementing value, etc. In addition to the standard options, Oracle supports setting Identity.always to None to use the default generated mode, rendering GENERATED AS IDENTITY in the DDL. It also supports setting Identity.on_null to True to specify ON NULL in conjunction with a ‘BY DEFAULT’ identity column. Using a SEQUENCE (all Oracle versions)¶ Older version of Oracle had no “autoincrement” feature, SQLAlchemy relies upon sequences to produce these values. With the older Oracle versions,_with=engine: t = Table('mytable', metadata, Column('id', Integer, Sequence('id_seq'), primary_key=True), autoload_with=engine ) Transaction Isolation Level / Autocommit¶ The Oracle database supports “READ COMMITTED” and “SERIALIZABLE” modes of isolation. The AUTOCOMMIT isolation level is also supported by the cx_Oracle dialect. To set using per-connection execution options: connection = engine.connect() connection = connection.execution_options( isolation_level="AUTOCOMMIT" ) For READ COMMITTED and SERIALIZABLE, the Oracle dialect sets the level at the session level using ALTER SESSION, which is reverted back to its default setting when the connection is returned to the connection pool. Valid values for isolation_level include: READ COMMITTED AUTOCOMMIT SERIALIZABLE Note The implementation for the Connection.get_isolation_level() method as implemented by the Oracle dialect necessarily forces the start of a transaction using the Oracle LOCAL_TRANSACTION_ID function; otherwise no level is normally readable. Additionally, the Connection.get_isolation_level() method will raise an exception if the v$transaction view is not available due to permissions or other reasons, which is a common occurrence in Oracle installations. The cx_Oracle dialect attempts to call the Connection.get_isolation_level() method when the dialect makes its first connection to the database in order to acquire the “default”isolation level. 
This default level is necessary so that the level can be reset on a connection after it has been temporarily modified using Connection.execution_options() method. In the common event that the Connection.get_isolation_level() method raises an exception due to v$transaction not being readable as well as any other database-related failure, the level is assumed to be “READ COMMITTED”. No warning is emitted for this initial first-connect condition as it is expected to be a common restriction on Oracle databases. New in version 1.3.16: added support for AUTOCOMMIT to the cx_oracle dialect as well as the notion of a default isolation level New in version 1.3.21: Added support for SERIALIZABLE as well as live reading of the isolation level. Changed in version 1.3.22: In the event that the default isolation level cannot be read due to permissions on the v$transaction view as is common in Oracle installations, the default isolation level is hardcoded to “READ COMMITTED” which was the behavior prior to 1.3.21.. Max Identifier Lengths¶ Oracle has changed the default max identifier length as of Oracle Server version 12.2. Prior to this version, the length was 30, and for 12.2 and greater it is now 128. This change impacts SQLAlchemy in the area of generated SQL label names as well as the generation of constraint names, particularly in the case where the constraint naming convention feature described at Configuring Constraint Naming Conventions is being used. To assist with this change and others, Oracle includes the concept of a “compatibility” version, which is a version number that is independent of the actual server version in order to assist with migration of Oracle databases, and may be configured within the Oracle server itself. This compatibility version is retrieved using the query SELECT value FROM v$parameter WHERE name = 'compatible';. The SQLAlchemy Oracle dialect, when tasked with determining the default max identifier length, will attempt to use this query upon first connect in order to determine the effective compatibility version of the server, which determines what the maximum allowed identifier length is for the server. If the table is not available, the server version information is used instead. As of SQLAlchemy 1.4, the default max identifier length for the Oracle dialect is 128 characters. Upon first connect, the compatibility version is detected and if it is less than Oracle version 12.2, the max identifier length is changed to be 30 characters. In all cases, setting the create_engine.max_identifier_length parameter will bypass this change and the value given will be used as is: engine = create_engine( "oracle+cx_oracle://scott:tiger@oracle122", max_identifier_length=30) The maximum identifier length comes into play both when generating anonymized SQL labels in SELECT statements, but more crucially when generating constraint names from a naming convention. It is this area that has created the need for SQLAlchemy to change this default conservatively. 
For example, the following naming convention produces two very different constraint names based on the identifier length: from sqlalchemy import Column from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import MetaData from sqlalchemy import Table from sqlalchemy.dialects import oracle from sqlalchemy.schema import CreateIndex m = MetaData(naming_convention={"ix": "ix_%(column_0N_name)s"}) t = Table( "t", m, Column("some_column_name_1", Integer), Column("some_column_name_2", Integer), Column("some_column_name_3", Integer), ) ix = Index( None, t.c.some_column_name_1, t.c.some_column_name_2, t.c.some_column_name_3, ) oracle_dialect = oracle.dialect(max_identifier_length=30) print(CreateIndex(ix).compile(dialect=oracle_dialect)) With an identifier length of 30, the above CREATE INDEX looks like: CREATE INDEX ix_some_column_name_1s_70cd ON t (some_column_name_1, some_column_name_2, some_column_name_3) However with length=128, it becomes: CREATE INDEX ix_some_column_name_1some_column_name_2some_column_name_3 ON t (some_column_name_1, some_column_name_2, some_column_name_3) Applications which have run versions of SQLAlchemy prior to 1.4 on an Oracle server version 12.2 or greater are therefore subject to the scenario of a database migration that wishes to “DROP CONSTRAINT” on a name that was previously generated with the shorter length. This migration will fail when the identifier length is changed without the name of the index or constraint first being adjusted. Such applications are strongly advised to make use of create_engine.max_identifier_length in order to maintain control of the generation of truncated names, and to fully review and test all database migrations in a staging environment when changing this value to ensure that the impact of this change has been mitigated. Changed in version 1.4: the default max_identifier_length for Oracle is 128 characters, which is adjusted down to 30 upon first connect if an older version of Oracle server (compatibility version < 12.2) is detected. LIMIT/OFFSET Support¶ Oracle has no direct support for LIMIT and OFFSET until version 12c. To achieve this behavior across all widely used versions of Oracle starting with the 8 series, SQLAlchemy currently makes use of ROWNUM to achieve LIMIT/OFFSET; the exact methodology is taken from . There is currently a single option to affect its behavior: the “FIRST_ROWS()” optimization keyword is not used by default. To enable the usage of this optimization directive, specify optimize_limits=Trueto create_engine(). Changed in version 1.4: The Oracle dialect renders limit/offset integer values using a “post compile” scheme which renders the integer directly before passing the statement to the cursor for execution. The use_binds_for_limits flag no longer has an effect. Support for changing the row number strategy, which would include one that makes use of the row_number() window function as well as one that makes use of the Oracle 12c “FETCH FIRST N ROW / OFFSET N ROWS” keywords may be added in a future release.+cx Warning The status of Oracle 8 compatibility is not known for SQLAlchemy 2.0.+cx DATE which is a subclass of DateTime. This type has no special behavior, and is only present as a “marker” for this type; additionally, when a database column is reflected and the type is reported as DATE, the time-supporting DATE type is used. Changed in version 0.9.4: Added DATE to subclass DateTime. 
This is a change, as previous versions would reflect a DATE column without the time-supporting behavior. Types such as NCHAR, NUMBER, NVARCHAR, NVARCHAR2, RAW, TIMESTAMP, VARCHAR and VARCHAR2 can be imported from sqlalchemy.dialects.oracle. Types which are specific to Oracle, or have Oracle-specific construction arguments, are as follows:

- class sqlalchemy.dialects.oracle.BFILE¶
  Class signature: class sqlalchemy.dialects.oracle.BFILE (sqlalchemy.types.LargeBinary)
  - method sqlalchemy.dialects.oracle.BFILE.__init__(length: Optional[int] = None)¶
    Inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary. Construct a LargeBinary type.
  - method sqlalchemy.dialects.oracle.BFILE.static __new__(cls, *args, **kwds)¶
    Inherited from the typing.Generic.__new__ method of Generic.

- class sqlalchemy.dialects.oracle.DATE¶
  Provide the oracle DATE type. This type has no special Python behavior, except that it subclasses DateTime; this is to suit the fact that the Oracle DATE type supports a time value. New in version 0.9.4.
  Class signature: class sqlalchemy.dialects.oracle.DATE (sqlalchemy.dialects.oracle.base._OracleDateLiteralRender, sqlalchemy.types.DateTime)
  - method sqlalchemy.dialects.oracle.DATE.__init__(timezone: bool = False)¶
    Inherited from the sqlalchemy.types.DateTime.__init__ method of DateTime.
  - method sqlalchemy.dialects.oracle.DATE.static __new__(cls, *args, **kwds)¶
    Inherited from the typing.Generic.__new__ method of Generic.

- class sqlalchemy.dialects.oracle.FLOAT¶
  Oracle FLOAT. This is the same as FLOAT except that an Oracle-specific FLOAT.binary_precision parameter is accepted, and the Float.precision parameter is not accepted. Oracle FLOAT types indicate precision in terms of “binary precision”, which defaults to 126. For a REAL type, the value is 63. This parameter does not cleanly map to a specific number of decimal places but is roughly equivalent to the desired number of decimal places divided by 0.3103. New in version 2.0.
  Class signature: class sqlalchemy.dialects.oracle.FLOAT (sqlalchemy.types.FLOAT)
  - method sqlalchemy.dialects.oracle.FLOAT.__init__(binary_precision=None, asdecimal=False, decimal_return_scale=None)¶
    Construct a FLOAT.
    Parameters:
    - binary_precision – Oracle binary precision value to be rendered in DDL. This may be approximated to the number of decimal characters using the formula “decimal precision = 0.30103 * binary precision”. The default value used by Oracle for FLOAT / DOUBLE PRECISION is 126.
    - asdecimal – See Float.asdecimal
    - decimal_return_scale – See Float.decimal_return_scale
  - method sqlalchemy.dialects.oracle.FLOAT.static __new__(cls, *args, **kwds)¶
    Inherited from the typing.Generic.__new__ method of Generic.

- class sqlalchemy.dialects.oracle.INTERVAL¶
  Class signature: class sqlalchemy.dialects.oracle.INTERVAL (sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval)
  - method sqlalchemy.dialects.oracle.INTERVAL.__init__(day_precision=None, second_precision=None)¶
    Construct an INTERVAL. Note that only DAY TO SECOND intervals are currently supported. This is due to a lack of support for YEAR TO MONTH intervals within available DBAPIs.
- method sqlalchemy.dialects.oracle.INTERVAL.static __new__(cls, *args, **kwds)¶ inherited from the typing.Generic.__new__method of Generic - class sqlalchemy.dialects.oracle.NCLOB¶ Class signature class sqlalchemy.dialects.oracle.NCLOB( sqlalchemy.types.Text) - method sqlalchemy.dialects.oracle.NCLOB._.NCLOB.static __new__(cls, *args, **kwds)¶ inherited from the typing.Generic.__new__method of Generic - class sqlalchemy.dialects.oracle.NUMBER¶ Class signature class sqlalchemy.dialects.oracle.NUMBER( sqlalchemy.types.Numeric, sqlalchemy.types.Integer) - method sqlalchemy.dialects.oracle.NUMBER.__init__(precision=None, scale=None, asdecimal=None)¶ Construct a Numeric. - Parameters precision¶ – the numeric precision for use in DDL CREATE TABLE. scale¶ – the numeric scale for use in DDL CREATE TABLE. asdecimal¶ – default True. Return whether or not values should be sent as Python Decimal objects, or as floats. Different DBAPIs send one or the other based on datatypes - the Numeric type will ensure that return values are one or the other across DBAPIs consistently. decimal_return_scale¶ – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Types which do include an explicit “.scale” value, such as the base Numericas well as the MySQL float types, will use the value of “.scale” as the default for decimal_return_scale, if not otherwise specified. When using the Numerictype, care should be taken to ensure that the asdecimal setting is appropriatewill at least remove the extra conversion overhead. - method sqlalchemy.dialects.oracle.NUMBER.static __new__(cls, *args, **kwds)¶ inherited from the typing.Generic.__new__method of Generic - class sqlalchemy.dialects.oracle.LONG¶ Class signature class sqlalchemy.dialects.oracle.LONG( sqlalchemy.types.Text) - method sqlalchemy.dialects.oracle.LONG._.LONG.static __new__(cls, *args, **kwds)¶ inherited from the typing.Generic.__new__method of Generic - class sqlalchemy.dialects.oracle.RAW¶ Class signature class sqlalchemy.dialects.oracle.RAW( sqlalchemy.types._Binary) - method sqlalchemy.dialects.oracle.RAW.__init__(length: Optional[int] = None)¶ inherited from the sqlalchemy.types._Binary.__init__method of sqlalchemy.types._Binary - method sqlalchemy.dialects.oracle.RAW.static __new__(cls, *args, **kwds)¶ inherited from the typing.Generic.__new__method of Generic cx_Oracle¶ Support for the Oracle database via the cx-Oracle driver. DBAPI¶ Documentation and download information (if applicable) for cx-Oracle is available at: Connecting¶ Connect String: oracle+cx_oracle://user:pass@hostname:port[/dbname][?service_name=<service>[&key=value&key=value...]] DSN vs. Hostname connections¶ cx_Oracle provides several methods of indicating the target database. The dialect translates from a series of different URL forms. Hostname Connections with Easy Connect Syntax¶ Given a hostname, port and service name of the target Oracle Database, for example from Oracle’s Easy Connect syntax, then connect in SQLAlchemy using the service_name query string parameter: engine = create_engine("oracle+cx_oracle://scott:tiger@hostname:port/?service_name=myservice&encoding=UTF-8&nencoding=UTF-8") The full Easy Connect syntax is not supported. 
Instead, use a tnsnames.ora file and connect using a DSN. Connections with tnsnames.ora or Oracle Cloud¶ Alternatively, if no port, database name, or service_name is provided, the dialect will use an Oracle DSN “connection string”. This takes the “hostname” portion of the URL as the data source name. For example, if the tnsnames.ora file contains a Net Service Name of myalias as below: myalias = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = mymachine.example.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orclpdb1) ) ) The cx_Oracle dialect connects to this database service when myalias is the hostname portion of the URL, without specifying a port, database name or service_name: engine = create_engine("oracle+cx_oracle://scott:tiger@myalias/?encoding=UTF-8&nencoding=UTF-8") Users of Oracle Cloud should use this syntax and also configure the cloud wallet as shown in cx_Oracle documentation Connecting to Autononmous Databases. SID Connections¶ To use Oracle’s obsolete SID connection syntax, the SID can be passed in a “database name” portion of the URL as below: engine = create_engine("oracle+cx_oracle://scott:tiger@hostname:1521/dbname?encoding=UTF-8&nencoding=UTF-8") Above, the DSN passed to cx_Oracle is created by cx_Oracle.makedsn() as follows: >>> import cx_Oracle >>> cx_Oracle.makedsn("hostname", 1521, sid="dbname") '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521))(CONNECT_DATA=(SID=dbname)))' Passing cx_Oracle connect arguments¶ Additional connection arguments can usually be passed via the URL query string; particular symbols like cx_Oracle.SYSDBA are intercepted and converted to the correct symbol: e = create_engine( "oracle+cx_oracle://user:pass@dsn?encoding=UTF-8&nencoding=UTF-8&mode=SYSDBA&events=true") Changed in version 1.3: the cx_oracle dialect now accepts all argument names within the URL string itself, to be passed to the cx_Oracle DBAPI. As was the case earlier but not correctly documented, the create_engine.connect_args parameter also accepts all cx_Oracle DBAPI connect arguments. To pass arguments directly to .connect() without using the query string, use the create_engine.connect_args dictionary. Any cx_Oracle parameter value and/or constant may be passed, such as: import cx_Oracle e = create_engine( "oracle+cx_oracle://user:pass@dsn", connect_args={ "encoding": "UTF-8", "nencoding": "UTF-8", "mode": cx_Oracle.SYSDBA, "events": True } ) Note that the default value for encoding and nencoding was changed to “UTF-8” in cx_Oracle 8.0 so these parameters can be omitted when using that version, or later. Options consumed by the SQLAlchemy cx_Oracle dialect outside of the driver¶ There are also options that are consumed by the SQLAlchemy cx_oracle dialect itself. These options are always passed directly to create_engine() , such as: e = create_engine( "oracle+cx_oracle://user:pass@dsn", coerce_to_decimal=False) The parameters accepted by the cx_oracle dialect are as follows:_decimal- see Precision Numerics for detail. encoding_errors- see Encoding Errors for detail. Using cx_Oracle SessionPool¶ The cx_Oracle library provides its own connection pool implementation that may be used in place of SQLAlchemy’s pooling functionality. 
This can be achieved by using the create_engine.creator parameter to provide a function that returns a new connection, along with setting create_engine.pool_class to NullPool to disable SQLAlchemy’s pooling:" ) engine = create_engine("oracle+cx_oracle://", creator=pool.acquire, poolclass=NullPool) The above engine may then be used normally where cx_Oracle’s pool handles connection pooling: with engine.connect() as conn: print(conn.scalar("select 1 FROM dual")) As well as providing a scalable solution for multi-user applications, the cx_Oracle session pool supports some Oracle features such as DRCP and Application Continuity. Using Oracle Database Resident Connection Pooling (DRCP)¶ When using Oracle’s DRCP, the best practice is to pass a connection class and “purity” when acquiring a connection from the SessionPool. Refer to the cx_Oracle DRCP documentation. This can be achieved by wrapping pool.acquire():" ) def creator(): return pool.acquire(cclass="MYCLASS", purity=cx_Oracle.ATTR_PURITY_SELF) engine = create_engine("oracle+cx_oracle://", creator=creator, poolclass=NullPool) The above engine may then be used normally where cx_Oracle handles session pooling and Oracle Database additionally uses DRCP: with engine.connect() as conn: print(conn.scalar("select 1 FROM dual")) Unicode¶ As is the case for all DBAPIs under Python 3, all strings are inherently Unicode strings. In all cases however, the driver requires an explicit encoding configuration. Ensuring the Correct Client Encoding¶ The long accepted standard for establishing client encoding for nearly all Oracle related software is via the NLS_LANG environment variable. cx_Oracle like most other Oracle drivers will use this environment variable as the source of its encoding configuration. The format of this variable is idiosyncratic; a typical value would be AMERICAN_AMERICA.AL32UTF8. The cx_Oracle driver also supports a programmatic alternative which is to pass the encoding and nencoding parameters directly to its .connect() function. These can be present in the URL as follows: engine = create_engine("oracle+cx_oracle://scott:tiger@orclpdb/?encoding=UTF-8&nencoding=UTF-8") For the meaning of the encoding and nencoding parameters, please consult Characters Sets and National Language Support (NLS). See also Characters Sets and National Language Support (NLS) - in the cx_Oracle documentation. Unicode-specific Column datatypes¶ The Core expression language handles unicode data by use of the Unicode and UnicodeText datatypes. These types correspond to the VARCHAR2 and CLOB Oracle datatypes by default. When using these datatypes with Unicode data, it is expected that the Oracle database is configured with a Unicode-aware character set, as well as that the NLS_LANG environment variable is set appropriately, so that the VARCHAR2 and CLOB datatypes can accommodate the data. In the case that the Oracle database is not configured with a Unicode character set, the two options are to use the NCHAR and NCLOB datatypes explicitly, or to pass the flag use_nchar_for_unicode=True to create_engine(), which will cause the SQLAlchemy dialect to use NCHAR/NCLOB for the Unicode / UnicodeText datatypes instead of VARCHAR/CLOB. Changed in version 1.3: The Unicode and UnicodeText datatypes now correspond to the VARCHAR2 and CLOB Oracle datatypes unless the use_nchar_for_unicode=True is passed to the dialect when create_engine() is called. 
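For example, a minimal engine configuration combining the URL-level encoding parameters with the use_nchar_for_unicode flag described above might look like this (the credentials and DSN are placeholders):

    from sqlalchemy import create_engine

    # Placeholder credentials/DSN; use_nchar_for_unicode=True switches the
    # Unicode / UnicodeText datatypes to NCHAR / NCLOB as described above.
    engine = create_engine(
        "oracle+cx_oracle://scott:tiger@myalias/?encoding=UTF-8&nencoding=UTF-8",
        use_nchar_for_unicode=True,
    )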
Encoding Errors¶
For the unusual case that data in the Oracle database is present with a broken encoding, the dialect accepts a parameter encoding_errors which will be passed to Unicode decoding functions in order to affect how decoding errors are handled. The value is ultimately consumed by the Python decode function, and is passed both via cx_Oracle’s encodingErrors parameter consumed by Cursor.var(), as well as SQLAlchemy’s own decoding function, as the cx_Oracle dialect makes use of both under different circumstances. New in version 1.3.11.

Fine grained control over cx_Oracle data binding performance with setinputsizes¶
The cx_Oracle DBAPI has a deep and fundamental reliance upon the usage of the DBAPI setinputsizes() call. The purpose of this call is to establish the datatypes that are bound to a SQL statement for Python values being passed as parameters. While virtually no other DBAPI assigns any use to the setinputsizes() call, the cx_Oracle DBAPI relies upon it heavily in its interactions with the Oracle client interface, and in some scenarios it is not possible for SQLAlchemy to know exactly how data should be bound, as some settings can cause profoundly different performance characteristics, while altering the type coercion behavior at the same time.

Users of the cx_Oracle dialect are strongly encouraged to read through cx_Oracle’s list of built-in datatype symbols in the cx_Oracle documentation. Note that in some cases, significant performance degradation can occur when using these types vs. not, in particular when specifying cx_Oracle.CLOB.

On the SQLAlchemy side, the DialectEvents.do_setinputsizes() event can be used both for runtime visibility (e.g. logging) of the setinputsizes step as well as to fully control how setinputsizes() is used on a per-statement basis. New in version 1.2.9: Added DialectEvents.do_setinputsizes().

Example 1 - logging all setinputsizes calls¶
The following example illustrates how to log the intermediary values from a SQLAlchemy perspective before they are converted to the raw setinputsizes() parameter dictionary. The keys of the dictionary are BindParameter objects which have a .key and a .type attribute:

    import logging

    from sqlalchemy import create_engine, event

    log = logging.getLogger(__name__)

    engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")

    @event.listens_for(engine, "do_setinputsizes")
    def _log_setinputsizes(inputsizes, cursor, statement, parameters, context):
        for bindparam, dbapitype in inputsizes.items():
            log.info(
                "Bound parameter name: %s SQLAlchemy type: %r DBAPI object: %s",
                bindparam.key, bindparam.type, dbapitype)

Example 2 - remove all bindings to CLOB¶
The CLOB datatype in cx_Oracle incurs a significant performance overhead, however is set by default for the Text type within the SQLAlchemy 1.2 series. This setting can be modified as follows:

    from sqlalchemy import create_engine, event
    from cx_Oracle import CLOB

    engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")

    @event.listens_for(engine, "do_setinputsizes")
    def _remove_clob(inputsizes, cursor, statement, parameters, context):
        for bindparam, dbapitype in list(inputsizes.items()):
            if dbapitype is CLOB:
                del inputsizes[bindparam]

RETURNING Support¶
The cx_Oracle dialect implements RETURNING using OUT parameters. The dialect supports RETURNING fully.
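As a brief sketch of using RETURNING with Core (the table and column names are illustrative, and the engine is assumed to be configured as in the earlier examples):

    from sqlalchemy import Column, Identity, Integer, MetaData, String, Table, insert

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, Identity(start=1), primary_key=True),
        Column("name", String(50)),
    )

    # RETURNING the generated primary key from an INSERT.
    stmt = insert(users).values(name="spongebob").returning(users.c.id)

    with engine.begin() as conn:
        new_id = conn.execute(stmt).scalar_one()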
https://docs.sqlalchemy.org/en/20/dialects/oracle.html
2022-05-16T11:48:31
CC-MAIN-2022-21
1652662510117.12
[]
docs.sqlalchemy.org
Customising Form display¶ The React Components provided by newforms can help you get started quickly and offer a degreee of customisation, but you can completely customise the way a form is presented by rendering it yourself. To assist with rendering, we introduce another concept which ties together Widgets, Fields and Forms: BoundField¶ - BoundField A BoundField() is a helper for rendering HTML content for – and related to – a single Field. It ties together the Field itself, the fields’s configured Widget, the name the field is given by the Form, and the raw user input data and validation errors held by a Form. BoundFields provide properties and functions for using these together to render the different components required to display a field – its label, form inputs and validation error messages – as well as exposing the constituent parts of each of these should you wish to fully customise every aspect of form display. Forms provide a number of methods for creating BoundFields. These are: - form.boundFieldsObj() – returns an object whose properties are the form’s field names, with BoundFields as values. - form.boundFields() – returns a list of BoundFields in their form-defined order. - form.boundField(fieldName) – returns the BoundField for the named field. Every object which can generate ReactElement objects in newforms has a default render() method – for BoundFields, the default render() for a non-hidden field calls asWidget(), which renders the Widget the field is configured with. A selection of the properties and methods of a BoundField which are useful for custom field rendering are listed below. For complete details, see the BoundField API. Useful BoundField properties¶ - bf.name - The name of the field in the form. - bf.htmlName The name the field will be represented by when rendered. If each Form and FormSet being used to render user inputs has a unique prefix, this is guaranteed to be a unique name. As such, it’s a good candidate if you need a unique key prop for a React component related to each field. - bf.label - The label text for the field, e.g. 'Email address'. - bf.helpText - Any help text that has been associated with the field. - bf.field The Field() instance from the form, that this BoundField() wraps. You can use it to access field properties directly. Newforms also adds a custom property to the Field API – you can pass this argument when creating a field to store any additional, custom metadata you want to associate with the field for later use. Useful BoundField methods¶ - bf.errors() - Gets an object which holds any validation error messages for the field and has a default rendering to a <ul class="errorlist">. - bf.errorMessage() - Gets the first validation error message for the field as a String, or undefined if there are none, making it convenient for conditional display of error messages. - bf.idForLabel() - Generates the id that will be used for this field. You may want to use this in lieu of labelTag() if you are constructing the label manually. - bf.labelTag() - Generates a <label> containing the field’s label text, with the appropriate htmlFor property. - bf.helpTextTag() By default, generates a <span className="helpText"> containing the field’s help text if it has some configured, but this can be configured with arguments. New in version 0.10. - bf.status() Gets the current validation status of the field as a string, one of: - 'pending' – has a pending async validation. - 'error' – has validation errors. 
- 'valid' – has neither of the above and data present in form.cleanedData.
- 'default' – none of the above (likely hasn’t been interacted with or validated yet).
New in version 0.10.
- bf.value()
- Gets the value which will be displayed in the field’s user input.

boundFields() example¶
Using these, let’s customise rendering of our ContactForm. Rendering things in React is just a case of creating ReactElement objects, so the full power of JavaScript and custom React components are available to you. For example, let’s customise rendering to add a CSS class to our form field rows and to put the checkbox for the ccMyself field inside its <label>:

function renderField(bf) {
  var className = 'form-field'
  if (bf.field instanceof forms.BooleanField) {
    return <div className={className}>
      <label>{bf.render()} {bf.label}</label>
      {bf.helpTextTag()}
      {bf.errors().render()}
    </div>
  } else {
    return <div className={className}>
      {bf.labelTag()} {bf.render()}
      {bf.helpTextTag()}
      {bf.errors().render()}
    </div>
  }
}

We still don’t need to do much work in our component’s render() method:

render: function() {
  return <form action="/contact" method="POST">
    {this.state.form.boundFields().map(renderField)}
    <div>
      <input type="submit" value="Submit"/>{' '}
      <input type="button" value="Cancel" onClick={this.onCancel}/>
    </div>
  </form>
}

Its initial rendered output is now:

<form action="/contact" method="POST">
  <div class="form-field"><label for="id_subject">Subject:</label> <input maxlength="100" type="text" name="subject" id="id_subject"></div>
  <div class="form-field"><label for="id_message">Message:</label> <input type="text" name="message" id="id_message"></div>
  <div class="form-field"><label for="id_sender">Sender:</label> <input type="email" name="sender" id="id_sender"></div>
  <div class="form-field"><label for="id_ccMyself"><input type="checkbox" name="ccMyself" id="id_ccMyself"> Cc myself</label></div>
  <div><input type="submit" value="Submit"> <input type="button" value="Cancel"></div>
</form>

boundFieldsObj() example¶
The following Form and FormSet will be used to take input for a number of items to be cooked:

var ItemForm = forms.Form.extend({
  name: forms.CharField(),
  time: forms.IntegerField(),
  tend: forms.ChoiceField({required: false, choices: ['', 'Flip', 'Rotate']})
})

var ItemFormSet = forms.formsetFactory(ItemForm, {extra: 3})

The list of item forms will be presented as a <table> for alignment and compactness. We could use boundFields() as above and loop over each form’s fields, creating a <td> for each one, but what if we wanted to display a unit label alongside the “time” field and dynamically display some extra content alongside the “tend” field? If every field needs to be rendered slightly differently, or needs to be placed individually into an existing layout, boundFieldsObj() provides a convenient way to access the form’s BoundFields by field name:

<tbody>
  {itemFormset.forms().map(function(itemForm, index) {
    var fields = itemForm.boundFieldsObj()
    return <tr>
      <td>{fields.name.render()}</td>
      <td>{fields.time.render()} mins</td>
      <td>
        {fields.tend.render()}
        {fields.tend.value() && ' halfway'}
      </td>
    </tr>
  })}
</tbody>
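Relatedly, the bf.status() helper described earlier lends itself to styling rows by validation state; a small sketch (the CSS class names are arbitrary and not part of newforms):

function renderField(bf) {
  // Vary the row's CSS class by validation state ('pending', 'error', 'valid' or 'default').
  var className = 'form-field form-field--' + bf.status()
  return <div className={className} key={bf.htmlName}>
    {bf.labelTag()} {bf.render()}
    {bf.helpTextTag()}
    {bf.errors().render()}
  </div>
}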
https://newforms.readthedocs.io/en/v0.10.0/custom_display.html
2022-05-16T11:23:44
CC-MAIN-2022-21
1652662510117.12
[]
newforms.readthedocs.io
Using Rest PKI on Ruby Rest PKI can be used on Ruby. To get started, see the Ruby on Rails samples project. Client library To use Rest PKI on Ruby applications, use our Ruby Gem rest_pki by declaring on your Gemfile: gem 'rest_pki', '~> 1.0.0' After that, do a bundle install to download the gem and its dependencies (if you don't have Bundler installed, get it here) Alternatively, you can install the gem directly on command-line: gem install rest_pki The gem is open-source, hosted on GitHub. Feel free to fork it if you need to make any customizations.
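As a rough sketch of getting a client object after installation – the class name and constructor arguments below are assumptions based on the gem's README, so check the GitHub project for the authoritative API:

require 'rest_pki'

# Endpoint and token are placeholders; obtain an API access token from your Rest PKI account.
client = RestPki::RestPkiClient.new('https://pki.rest/', 'your-api-access-token')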
http://docs.lacunasoftware.com/en-us/articles/rest-pki/ruby/index.html
2019-06-16T05:36:15
CC-MAIN-2019-26
1560627997731.69
[]
docs.lacunasoftware.com
This widget displays the component versions and compliance status of managed products or endpoints on your network. Use this widget to track managed products or endpoints with outdated components. The default view displays the latest versions of components managed by Control Manager and the compliance status of managed products. The Pattern and Engine sections list components in order of the highest rate of non-compliance first. You can click the Rate column to change the sort order. Click any of the components in the Pattern or Engine columns to view a pie chart that displays the number of managed products or endpoints using each component version. Click the counts in the Outdated/All columns to view information about the component versions on outdated managed products, all managed products, outdated endpoints, or all endpoints. Click the settings icon ( > ) to configure the following options: The settings icon ( ) does not display for widgets on the Summary tab. To modify the product scope of the widget, click the double arrow button ( ) in the Scope field and select the products that contribute data. To edit the components that display in the widget, select or clear components from the Pattern or Engine fields. To display compliance information for managed products, endpoints, or both, specify the Source. To specify whether to view data from all components reported by managed products or to view data from only components managed by Control Manager, select the View.
http://docs.trendmicro.com/en-us/enterprise/control-manager-70/getting-started/dashboard/summary-tab/product-component-st.aspx
2019-06-16T05:02:46
CC-MAIN-2019-26
1560627997731.69
[]
docs.trendmicro.com
Custom schema definition On this page: Overview Private eazyBI can be integrated with customer specific databases or applications. Private eazyBI supports MySQL, PostgreSQL, MS SQL Server and Oracle database as data source. To be able to use eazyBI you need to have source data organized in star schema dimension and fact tables. Then on top of your star schema you will need to define eazyBI multidimensional schema (dimensions and measures) using mondrian-olap library. It requires some basic Ruby programming language knowledge as it is using Ruby syntax to define schema. After that you will be able to start to use eazyBI with your data and build reports, charts and dashboards. Mondrian OLAP schema eazyBI uses Mondrian OLAP Java library as a multi-dimensional query engine. This engine is embedded in mondrian-olap JRuby library and eazyBI uses mondrian-olap library for multi-dimensional schema definition and MDX query generation. Read Mondrian OLAP schema documentation to learn about schema design principles and examples. Mondrian schema is defined as XML file but eazyBI uses mondrian-olap Ruby syntax to generate XML schema file. See mondrian-olap schema definition unit tests to see available mondrian-olap methods and what resulting Mondrian XML schema they generate. eazybi.toml configuration After defining custom schema with mondrian-olap you need to configure which eazyBI accounts should use this custom schema. See example in eazybi.toml.sample for this configuration: # Settings for accounts with custom schemas [accounts."FoodMart custom"] # Specify driver if different than main database connection # driver = "postgresql" database = "foodmart" # Specify username and password if different than for main database connection # username = "foodmart" # password = "foodmart" # Specify relative path to custom schema definition file schema_rb = "schemas/foodmart_schema.rb" # OLAP schema cache timeout in minutes # cache_timeout = 10 In accounts section use either account name or account ID (that you can see it in URLs after /accounts/) to identify account. Then specify database connection using driver, database, username and password settings. Then specify relative path (from eazybi_private/config directory) to custom mondrian-olap schema definition file. Mondrian OLAP will cache query results in memory. If your data in database dimension and fact tables are changing then you want to ensure that Mondrian OLAP cache is cleared. There are two options to achive that: - Specify cache_timeoutsetting and then after this timeout results cache will be cleared and Mondrian schema will be reloaded. - Change last modified date of mondrian-olap schema definition file (e.g. on Unix based system for provided example do touch schemas/foodmart_schema.rb). On each request eazyBI will check last modified date of mondrian-olap schema definition file and if it changed then it will force clearing of Mondrian cache and reloading of mondrian-olap schema definition. Recommendations for custom schema design Do not use dots (.) in dimension names – dots are reserved for separating dimension and additional hierarchy names (for example [Time] is Time dimension primary hierarchy and [Time.Weekly] is Time dimension Weekly hierarchy. Enable drill through for custom schema You can enable drill through action in report results cell actions (when you click on results table cell or chart point or area) – it will show detailed fact table rows from which corresponding cell value is calculated. 
When defining custom schema you need to provide cube annotations which includes drill through configuration options. See config/schemas/foodmart_schema.rb (MySQL backup for the foodmart database that is used in examples can be downloaded here) example configuration: cube 'Sales' do # ... annotations enable_drill_through: true, drill_through_return: [ "[Time].[Year]", "[Time].[Month]", "[Time].[Day]", "[Customers].[Country]", "[Customers].[State Province]", "[Customers].[City]", "[Customers].[Name]", "[Products].[Product Family]", "[Products].[Brand Name]", "[Products].[Product Name]" ].join(","), drill_through_default_measures: [ "[Measures].[Unit Sales]", "[Measures].[Store Sales]", "[Measures].[Store Cost]" ].join(",") enable_drill_through: truewill enable drill through for all cube reports - If optional drill_through_returnis provided (either as array or comma separated values string) then specified fields will be displayed in returned drill through results. Either dimension levels or measures can be included in this list, result sorting will be done by first fields. If return field list is not specified then by default all fields from cube definition will be shown in their schema definition order. - If optional drill_through_default_measureslist is specified then it will be used when drill through is done from cell containing calculated measure. When drill through is done from normal (non-calculated) measure cell then only this measure will be shown in drill through results. Enable drill through by The Drill through by functionality is available starting from the Private eazyBI version 4.2.0. If you want to define different sets of return fields for drill through cell actions then it is recommended to define the drill_through_by annotation instead of the previously described common drill_through_return annotation. At first, define a drill_through_by cube annotation in the custom schema definition. You can specify multiple drill_through_by configurations. See example: cube 'Sales' do # ... annotations drill_through_by: { 'sales' => { display_name: 'sales', drill_through_return: [ "[Time].[Year]", "[Time].[Month]", "[Time].[Day]", "[Customers].[Country]", "[Customers].[State Province]", "[Customers].[City]", "[Customers].[Name]", "[Products].[Product Family]", "[Products].[Brand Name]", "[Products].[Product Name]" ], group_by: true } }.to_json - Define the list of returned fields in drill_through_return– it can include dimension levels, measures, as well as dimension member properties "Property(dimension_level,'property_name')"or dimension member names (instead of keys) "Name(dimension_level)" - If the optional display_nameis specified, then the drill through cell action will be displayed as Drill through display_name instead of Drill through cell. - If the optional group_byis specified, then returned results will be grouped by all fields (except measures) specified in drill_through_return. Then add the drill_through_by annotation for specific measures for which you would like to use these return fields. cube 'Sales' do # ... measure 'Store Sales', column: 'store_sales', aggregator: 'sum', annotations: { drill_through_by: "sales" } Enable drill through dimension levels Instead of drill through to detailed fact table rows you can add drill through to selected dimension levels. Results will be the same as when selecting Drill across dimension level action in a table report but in this case results will open in a new popup. 
Drill through dimension level is useful with large detailed dimensions (like customers or issues or transactions). In a custom schema definition add a drill_through_dimension_levels cube annotation with one or several dimension levels. By default in the user interface level name will be shown in a popup. If necessary you can override name in the popup with a drill_through_dimension_levels_display_names annotation. See example: cube 'Sales' do # one level annotations drill_through_dimension_levels: "[Customers].[Name]" # or several levels annotations drill_through_dimension_levels: "[Customers].[Name],[Products].[Product Name]" # or specify also display names in the popup annotations drill_through_dimension_levels: "[Customers].[Name],[Products].[Product Name]", drill_through_dimension_levels_display_names: "Customer,Product" Default non empty options for dimensions If you have a large dimension then you can specify that when the dimension is moved to rows or columns then by default - either Nonempty option will selected in the Rows section ( default_nonempty_crossjoinoption) - or Hide empty rows or columns option will be selected ( default_hide_emptyoption). These options for large dimensions can help to avoid creation of too large result sets (which can result in timeout errors) in ad-hoc reports. You can add these options as annotations for a dimension in a custom schema: dimension 'Customers', foreign_key: 'customer_id' do # ... annotations default_nonempty_crossjoin: true, default_hide_empty: true Enable "Go to source" action for level members You can define for dimension level source data page URL that will be added as Go to source link in dimension member actions popup (which is shown when you click on dimension member in report results). In custom schema definition you need to add source_url annotation and URL_ID property for this dimension. See example: dimension 'Customers', foreign_key: 'customer_id' do hierarchy has_all: true, all_member_name: 'All Customers', primary_key: 'customer_id' do table 'customer' level 'Name', column: 'fullname', unique_members: true do property 'URL_ID', column: 'customer_id' annotations source_url: '{{id}}' end end end This definition for each dimension member will generate Go to source link with URL{{id}} where {{id}} will be replaced with URL_ID property value (from column customer_id). In addition if you need to substitute parent and child object IDs in generated URL you can use URL_ID and URL_SUB_ID properties, see example: dimension 'Customers', foreign_key: 'customer_id' do hierarchy has_all: true, all_member_name: 'All Customers', primary_key: 'customer_id' do table 'customer' level 'Name', column: 'fullname', unique_members: true do property 'URL_ID', column: 'account_id' property 'URL_SUB_ID', column: 'customer_id' annotations source_url: '{{id}}/customers/{{sub_id}}' end end end Dimension group annotation If you have many dimensions in the cube and you want to hide some dimensions by default in the Analyze tab report designer, then you can add a group annotation for these dimension: dimension 'Customers', foreign_key: 'customer_id' do # ... annotations group: "Other" All dimensions with a group annotation will be sorted alphabetically in this group and can be shown or hidden when clicking the corresponding link.
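Putting these pieces together, a complete (if minimal) schema_rb file uses the same mondrian-olap Ruby syntax as the snippets above; the fact and dimension table and column names below are placeholders for your own star schema:

# schemas/my_schema.rb – minimal sketch with placeholder table/column names
cube 'Sales' do
  table 'sales_facts'

  dimension 'Customers', foreign_key: 'customer_id' do
    hierarchy has_all: true, all_member_name: 'All Customers', primary_key: 'customer_id' do
      table 'customer'
      level 'Name', column: 'fullname', unique_members: true
    end
  end

  measure 'Store Sales', column: 'store_sales', aggregator: 'sum'
end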
https://docs.eazybi.com/eazybiprivate/set-up-and-administer/customization/custom-schema-definition
2019-06-16T05:50:00
CC-MAIN-2019-26
1560627997731.69
[]
docs.eazybi.com
Call Setup Latency Test Cases In the following test cases, maximum capacity was achieved within the constraints of specific thresholds. However, the system was also tested beyond the recommended capacity to determine the extent of performance degradation. The test case in Figure: Port Density versus Call Setup Latency uses the VoiceXML_App1 profile (see VoiceXML Application Profiles) to show how the CSL increases as the PD increases. The rate at which the CSL increases is relatively constant until the system reaches a bottleneck—for example, when the system load is beyond peak capacity. Caller Perceived Latency Test Case The graph in Figure: Port Density versus DTMF shows the DTMF response-to-audio-prompt latency at various port densities (relative to the peak capacity indicated in Table: GVP VOIP VXML/CCXML Capacity Testing). Notice that the TTS prompts produce ~300 ms more latency than the audio file prompts. This is due to the beginning silence played by the TTS engine. When there is speech input, additional latency is usually caused by the ASR engine. In Figure: Port Density versus Speech, the latency result from 1000 words of grammar using the Nuance OSR3 MRCP version 1 (MRCPv1) engine. The result can vary, depending on the type of MRCP engine used, the type of speech grammar used, and the load on the speech engine. The performance results in Figure: Port Density versus Speech were obtained from isolated ASR engines supporting the same number of recognition sessions at all Media Control Platform port densities; the MRCP engines did not cause a bottleneck. Therefore, depending on the load on the Media Control Platform, it can add as much as ~100 ms of latency. Feedback Comment on this article:
https://docs.genesys.com/Documentation/GVP/85/GVP85HSG/CSLTestCases
2019-06-16T05:20:07
CC-MAIN-2019-26
1560627997731.69
[array(['/images/2/22/PD_Versus_CSL.png', None], dtype=object) array(['/images/9/9c/PD_vs._DTMF.png', None], dtype=object) array(['/images/f/f9/PD_vs_Speech.png', None], dtype=object)]
docs.genesys.com
OdbcDataAdapter.RowUpdated Event

Definition
Occurs during an update operation after a command is executed against the data source.

public: event System::Data::Odbc::OdbcRowUpdatedEventHandler ^ RowUpdated;
public event System.Data.Odbc.OdbcRowUpdatedEventHandler RowUpdated;
member this.RowUpdated : System.Data.Odbc.OdbcRowUpdatedEventHandler
Public Custom Event RowUpdated As OdbcRowUpdatedEventHandler
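A minimal sketch of attaching a handler (the DSN, query, and table names are placeholders; RowUpdated fires once per changed row during OdbcDataAdapter.Update):

using System;
using System.Data;
using System.Data.Odbc;

class RowUpdatedExample
{
    static void Main()
    {
        var connection = new OdbcConnection("DSN=SampleDsn");   // placeholder DSN
        var adapter = new OdbcDataAdapter("SELECT * FROM Customers", connection);

        // Inspect each row after its INSERT/UPDATE/DELETE command has executed.
        adapter.RowUpdated += (sender, e) =>
        {
            Console.WriteLine($"{e.StatementType}: status {e.Status}, {e.RecordsAffected} record(s) affected");
        };

        // Calling adapter.Update(dataSet, "Customers") would raise RowUpdated for each changed row.
    }
}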
https://docs.microsoft.com/en-us/dotnet/api/system.data.odbc.odbcdataadapter.rowupdated?view=netframework-4.7.1
2019-06-16T04:50:54
CC-MAIN-2019-26
1560627997731.69
[]
docs.microsoft.com
Form.OnResizeBegin(EventArgs) Method

Raises the ResizeBegin event. The ResizeBegin event will only be raised if the form's CanRaiseEvents property is set to true. Raising an event invokes the event handler through a delegate. For more information, see Handling and Raising Events. The OnResizeBegin method also allows derived classes to handle the event without attaching a delegate. This is the preferred technique for handling the event in a derived class.

Notes to Inheritors
When overriding OnResizeBegin(EventArgs) in a derived class, be sure to call the base class's OnResizeBegin(EventArgs) method so that registered delegates receive the event.

See also
- ResizeBegin
- OnResizeEnd(EventArgs)
- OnMove(EventArgs)
- OnMaximumSizeChanged(EventArgs)
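A small sketch of the derived-class pattern described in the inheritor notes (the layout suspension here is only an illustrative use):

using System;
using System.Windows.Forms;

public class MyForm : Form
{
    protected override void OnResizeBegin(EventArgs e)
    {
        // Pause expensive layout work while the user drags the resize/move operation.
        SuspendLayout();

        // Call the base implementation so registered ResizeBegin delegates receive the event.
        base.OnResizeBegin(e);
    }

    protected override void OnResizeEnd(EventArgs e)
    {
        ResumeLayout();
        base.OnResizeEnd(e);
    }
}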
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.form.onresizebegin?view=netframework-4.8
2019-06-16T04:48:04
CC-MAIN-2019-26
1560627997731.69
[]
docs.microsoft.com
Recently Viewed Topics Create an Access Group Required User Role: Administrator You can create an access group to group assets based on rules, using information such as an AWS Account ID, FQDN, IP address, and other identifying attributes. You can then assign permissions for users or groups of users to view the assets in the access group. To create an access. Next to Access Groups, click the button. The access group creation plane appears. In the text box, type a name for the access group. Note: The name must be unique within your organization. Add asset rules: Note: You can add up to 1,000 asset rules per access group. Next to Asset Rules, click the button. The Add Asset Rules plane appears. In the Category drop-down box, select an attribute to filter assets. Tip: For IPv4 Address, you can use CIDR notation (e.g., 192.168.0.0/24), a range (e.g., 192.168.0.1-192.168.0.255), or a comma separated list (e.g., 192.168.0.0, 192.168.0.1). In the Operator drop-down box, select an operator. Possible operators include: • is equal to: Tenable.io matches the rule to assets based on an exact match of the specified term. Note: Tenable.io interprets the operator as 'equals' for IPv4 rules that specify a single IP address, but interprets the operator as 'contains' for IPv4 rules that specify an IP range or CIDR range. • contains: Tenable.io matches the rule to assets based on a partial match of the specified term. • starts with: Tenable.io matches the rule to assets that start with the specified term. • ends with: Tenable.io matches the rule to assets that end with the specified term. In the text box, type a valid value for the selected category. Tip: You can enter multiple values separated by commas. - (Optional) To add another asset rule, click Add. To save your asset rules, click Save. The access group creation plane appears. Add users and user groups: Next to Users & Groups, click the button. The Add Users & Groups plane appears. For each user or user group you want to add, in the input box, type the name and select it from the drop-down menu. Click Add. The access group creation plane appears. Tip: To remove a user or group, click the button next to the name. In the access group creation plane, click Create. The access group is created. Note: When you create an access group, Tenable.io may take some time to assign assets to the access group, depending on the system load, the number of matching assets, and the number of vulnerabilities.
https://docs.tenable.com/cloud/Content/Settings/CreateAccessGroup.htm
2019-06-16T05:07:50
CC-MAIN-2019-26
1560627997731.69
[]
docs.tenable.com
Vulnerabilities Page
The Vulnerabilities page provides a list of the vulnerabilities detected by the attached Nessus Network Monitors. You can view the vulnerabilities by asset or by plugin:
- The By Asset tab shows the number of assets with vulnerabilities and their severity. Here, you can also see their system type, sensor type, manufacturer, and other details.
- The By Plugin tab shows the assets that are affected by the specific plugin and the severity of the vulnerability. Here, you can also see the family and vulnerability count.
The Vulnerabilities page features the following functionality:
- Use the Filter box to filter the Vulnerabilities page. To view a list of filterable plugin attributes, click the down arrow for any filter text field. Select Any or All to view matching results. The search box contains example hints when empty, but if you type an incorrect filter value, the box displays a red border.
- The Actions drop-down allows you to export or delete results, apply a label to, or remove a label from one or more vulnerabilities.
Note: After deleting results, you must restart Industrial Security to see the most up-to-date information.
The Criticality column rates asset criticality from 1-5. 1 is the least critical, while 5 is the most critical.
https://docs.tenable.com/industrialsecurity/1_1/Content/VulnerabilitiesSection.htm
2019-06-16T05:26:45
CC-MAIN-2019-26
1560627997731.69
[]
docs.tenable.com
Understand the goal rating process
Once goals have been set in Coach, both managers and employees rate them; ratings are not available to the other person until both have submitted their feedback. The reason why comments are immediately visible and ratings are not is because the goal rating process should be unbiased, yet informed. Managers are busy (aren't we all?), so with multiple direct reports and an infinite amount of tasks to attend to every day, Coach helps managers make an informed rating and give well-informed coaching feedback before 1:1 conversations take place.
Try to be as thorough as possible in your ratings in case of disparities and misalignment between manager and employee. Also, providing sufficient details helps facilitate 1:1 meetings. If you've taken a few minutes with your employee to align on what level of performance is At, Above, or Below expectations, remember that performance is just one small part of the process. The real value lies in your comments, so make sure to write details to support your rating in order to deliver or receive valuable coaching feedback.
List To-Do's
Coach lets you create To-Do items at any time in the goal's lifecycle. When managers rate their employees' goals, they can see which to-do's have been completed that period and those that are still outstanding, so they're able to stay up to date on progress.
You can also use Coach as purely a coaching and development tool. That's why rating is optional and you can skip a week (or an eternity) if that works for you and your employees. Add coaching feedback and to-do's to a goal whenever you'd like, but you can leave the actual goal rating blank and simply move to the next one. Rating skips do not affect alignment, but be sure to agree on these skips prior to the rating week.
Nudge
If you're done with rating but are still waiting for the other person, you can send them a nudge.
Prepare for your 1:1 meetings
Now that all of the data is in Coach, you can use it to prepare for and guide your 1:1 conversations.
https://docs.tinypulse.com/hc/en-us/articles/115005041514-Rate-performance
2019-06-16T04:45:36
CC-MAIN-2019-26
1560627997731.69
[array(['/hc/article_attachments/360021997514/Screen_Shot_2019-01-14_at_11.07.07.png', 'Screen_Shot_2019-01-14_at_11.07.07.png'], dtype=object) array(['https://downloads.intercomcdn.com/i/o/33556687/6816c60ef46333532952bab2/file-jpeneUIghi.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/33556691/f966d398b6d0daec8994ab10/file-rnETsvH0V3.png', None], dtype=object) ]
docs.tinypulse.com
Window node
Index: Properties, Chaining Methods (Alert, Bottom, Combine, Count, Deadman, Default, Derivative, Difference, Distinct, Elapsed, Eval, First, Flatten, GroupBy, HoltWinters, HoltWintersWithFit, HttpOut, InfluxDBOut, Join, ...)
Properties
Align
node.align()
Every
How often the current window is emitted into the pipeline.
node.every(value time.Duration)
Period
The period, or length in time, of the window.
node.period(value time.Duration)
Chaining Methods
Combine
Combine this node with itself. The data are combined on timestamp.
node|combine(expressions ...ast.LambdaNode)
Returns: CombineNode
Count
Count the number of points.
node|count(...)
InfluxDBOut
Create an influxdb output node that will store the incoming data into InfluxDB.
node|influxDBOut()
Returns: InfluxDBOutNode
Join
Join this node with other nodes. The data are joined on timestamp.
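The reference above describes Kapacitor's window node in terms of two settings: period (how much trailing data the window holds) and every (how often that window is emitted downstream). As a rough, language-agnostic illustration of those semantics only, not TICKscript and not Kapacitor's actual implementation, a windowing loop might look like this in Python:

    from collections import deque

    def window(points, period, every):
        """Emit, every `every` seconds, the points that fall inside the
        trailing `period` seconds. `points` is an iterable of (t, value)
        pairs sorted by timestamp t (in seconds)."""
        buf = deque()
        next_emit = None
        for t, value in points:
            buf.append((t, value))
            # Drop points that have fallen out of the window period.
            while buf and buf[0][0] <= t - period:
                buf.popleft()
            if next_emit is None:
                next_emit = t + every
            while t >= next_emit:
                yield list(buf)      # one emitted window batch
                next_emit += every

    # Example: 1-second samples, a 10-second window emitted every 5 seconds.
    samples = [(t, t * t) for t in range(30)]
    for batch in window(samples, period=10, every=5):
        print(len(batch), "points ending at t =", batch[-1][0])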
https://docs.influxdata.com/kapacitor/v1.0/nodes/window_node/
2018-08-14T10:35:07
CC-MAIN-2018-34
1534221209021.21
[]
docs.influxdata.com
3. OpenMV IDE Overview¶
OpenMV IDE combines a text editor, a frame buffer viewer, and a Serial Terminal for debug output from your OpenMV Cam. Anyway, the IDE is more or less straightforward to use. A lot of effort was put in to stream-line things in the IDE. However, there are a few things you should know about in the IDE.
3.2. Text Editing¶
OpenMV IDE has a professional text editor powered by the QtCreator backend. You've got infinite undo and redo across all open files, white-space visualization (important for MicroPython), font-size controls, and best of all QtCreator's find and replace. The find and replace features regular expression matching with the ability to capture text during the find and use that captured text for the replace. Additionally, it can also preserve case while replacing. Finally, the find and replace feature works not only on the current file, but also on all files in a folder or all open files in OpenMV IDE.
In addition to the nice text editing environment, OpenMV IDE also provides auto-completion support and hover tool-tips on keywords. So, after typing . for example in Python, OpenMV IDE will detect that you're about to write a function or method name and it will show you an auto-completion text box. Once you've written the function/method name it will also walk you through the arguments. Finally, if you hover your mouse cursor over any keyword, OpenMV IDE will display the documentation for that keyword in a tooltip. These two features make editing scripts a joy!
3.3. Connecting to your OpenMV Cam¶
Connecting to your OpenMV Cam was covered in detail in the previous Hardware Setup tutorial page. Once you've gotten past all the startup issues that involve programming firmware and so on, you just have to click the connect button in the bottom left-hand corner of OpenMV IDE to connect to your OpenMV Cam.
OpenMV IDE is smart about connecting and automatically filters out all serial ports that aren't an OpenMV Cam. If there's only one OpenMV Cam attached to your computer it will find it and connect immediately. If you have two OpenMV Cams attached, it will ask you which serial port to connect to. Note that OpenMV IDE will remember your pick, so the next time you connect, the serial port for the OpenMV Cam you want to connect to will already be selected.
After connecting to your OpenMV Cam's serial port, OpenMV IDE will then try to determine the USB flash drive on your computer associated with your OpenMV Cam. OpenMV IDE will do some smart filtering on USB flash drives to automatically connect to the right one if possible. However, it may not be able to determine the correct one automatically and will ask you to help it if it can't. Like with serial ports above, OpenMV IDE will remember your selection so the next time you connect it will automatically highlight your previous choice.
Note
OpenMV IDE needs to associate the USB flash drive on your OpenMV Cam with your OpenMV Cam's virtual serial port so that it can parse compile error messages and open the file with the error easily. It also needs the drive information for the Save open script to OpenMV Cam command. That all said, OpenMV IDE can still function without knowing your OpenMV Cam's USB flash drive and will just disable features that depend on this knowledge.
Finally, after connecting to your OpenMV Cam the connect button will be replaced with the disconnect button. Click the disconnect button to disconnect from your OpenMV Cam. Note that disconnecting will stop whatever script is currently executing on your OpenMV Cam.
You can also just unplug your OpenMV Cam from your computer without disconnecting and OpenMV IDE will detect that and automatically disconnect from your OpenMV Cam. If your OpenMV Cam crashes, OpenMV IDE will detect this too and disconnect from your OpenMV Cam.
Note
It is possible on Windows/Mac/Linux for the serial port driver to crash in a terrible way if your OpenMV Cam becomes unresponsive. On Windows you'll notice this as an unkillable OpenMV IDE process without any window in the Task Manager. If this happens the only recourse is to restart your computer, as the unkillable process will prevent any new instance of OpenMV IDE from connecting to any OpenMV Cam. This issue exists as a won't-fix bug in Windows that's been around since Windows XP. On Mac/Linux this same issue can occur but is much harder to run into. Basically, what happens on Windows at least is that when the USB virtual serial port driver crashes from an OpenMV Cam becoming unresponsive it never lets OpenMV IDE's serial thread return from a kernel function call, which makes OpenMV IDE's serial thread unkillable. Anyway, we've done a lot of work to make sure this doesn't happen but please note that it still can.
3.4. Running scripts¶
Once you've finished editing your code and are ready to run a script, just click the green run button in the bottom left-hand corner of OpenMV IDE. The script will then be sent to your OpenMV Cam to be compiled into Python byte-code and executed by your OpenMV Cam. If there were any errors in your script, your OpenMV Cam will send back the compile error in the Serial Terminal, which OpenMV IDE automatically parses looking for errors. When OpenMV IDE detects an error it will automatically open the file in error and highlight the line in error along with displaying a nice error message box. This feature saves you tons of time fixing errors. Anyway, if you want to stop the script just click on the stop button, which will replace the run button while a script is running. Note that scripts can automatically stop by themselves either due to finishing or a compile error. In either case the run button will re-appear.
Note
OpenMV IDE automatically scans imports in your script when you click the run button and copies any externally required scripts that are missing to your OpenMV Cam. OpenMV IDE will also automatically update any out-of-date external modules on your OpenMV Cam when you click the run button. OpenMV IDE looks for external modules in your personal "OpenMV" Documents Folder first and then the example folder. OpenMV IDE is able to parse both single file modules and directory modules.
3.5. Frame Buffer Viewer¶
What makes OpenMV IDE special is the integrated frame buffer viewer. This lets you easily see what your OpenMV Cam is looking at while working on your code. The frame buffer viewer displays whatever was in your OpenMV Cam's frame buffer previously when sensor.snapshot() is called. Anyway, we'll talk more about that later. For now, here's what you need to know about the frame buffer viewer:
- The Record button on the top right-hand corner of OpenMV IDE records whatever is in the frame buffer. Use it to quickly make videos of what your OpenMV Cam sees. Note that recording works by recording whatever is in OpenMV IDE's frame buffer at 30 FPS. However, the frame buffer may be updating faster or slower than this depending on the application. Anyway, after the recording is complete OpenMV IDE will use FFMPEG to transcode the recording to whatever file format you want for sharing.
- The Zoom button on the top right-hand corner of OpenMV IDE controls the zoom-to-fit feature for the frame buffer viewer. Enable or disable the feature as you please.
- The Disable button on the top right-hand corner of OpenMV IDE controls whether or not your OpenMV Cam will send images to OpenMV IDE. Basically, your OpenMV Cam has to JPEG compress images constantly to stream them to OpenMV IDE. This reduces performance. So, if you want to see how fast your script will run in the wild without your OpenMV Cam being connected to your computer, just click the Disable button. While the frame buffer is disabled you won't be able to see what your OpenMV Cam is looking at anymore, but you'll still see debug output from your OpenMV Cam in the Serial Terminal.
Finally, you can right click on whatever image you see in the frame buffer viewer to save that image to disk. Additionally, if you select an area in the frame buffer by clicking and dragging you can save just that area to disk instead. Note that you should stop the script before trying to save the frame buffer to disk. Otherwise, you may not get the exact frame you want. To de-select an area in the frame buffer just click anywhere without dragging to remove the selection. However, it's possible to create one-pixel selections when de-selecting, so try to click on the blank area in the frame buffer.
3.6. Histogram Display¶
The integrated histogram display in OpenMV IDE exists primarily to fill the empty space under the frame buffer viewer and provide some eye candy for you. However, it is also actually useful for getting feedback about the lighting quality in the room, determining color tracking settings, and in general just giving you an idea about the quality of the image your OpenMV Cam is looking at.
You can select between four different color spaces in the histogram: RGB, Grayscale, LAB, and YUV. Only Grayscale and LAB are useful for programmatically controlling your OpenMV Cam. RGB is nice for looking at as eye candy and YUV is there because we use it for JPEG compression and thought we might as well add it. Anyway, by default the histogram shows information about the whole image. But, if you select an area of the frame buffer by clicking and dragging on it, then the histogram will only show the distribution of colors in that area. This feature makes the histogram display super useful for determining the correct Grayscale and LAB color channel settings you need to use in your scripts for image.find_blobs() and image.binary() (a short example script using such thresholds appears at the end of this overview). Finally, the image resolution and ROI (x, y, w, h) of the bounding box you select on the image are displayed above the histogram graphs.
3.7. Serial Terminal¶
To show the Serial Terminal, click on the Serial Terminal button located on the bottom of OpenMV IDE. The Serial Terminal is built into the main window to be easier to use. It just splits with your text editing window. Anyway, all debug text from your OpenMV Cam created by print() appears in the Serial Terminal. Note that the serial terminal more or less will infinitely buffer text. It will keep the last one million lines of text in RAM. So, you can use it to buffer a large amount of debug output. Additionally, if you press ctrl+f in Windows/Linux or the equivalent shortcut on Mac you'll be able to search through the debug output. Finally, the Serial Terminal is smart enough not to auto-scroll on you if you want to look at previous debug output, making it really nice to use. Auto-scrolling will turn on again if you scroll to the bottom of the text output.
3.8. Status Bar¶
On the status bar OpenMV IDE will display your OpenMV Cam's Firmware Version, Serial Port, Drive, and FPS. The Firmware Version label is actually a button you can click to update your OpenMV Cam if your OpenMV Cam's firmware is out of date. The Serial Port label just displays your OpenMV Cam's serial port and nothing else. The Drive label is another button which you can click on to change the drive linked to your OpenMV Cam. Finally, the FPS label displays the FPS OpenMV IDE is getting from your OpenMV Cam.
Note
The FPS displayed by OpenMV IDE may not match the FPS from your OpenMV Cam. The FPS label on OpenMV IDE is the FPS OpenMV IDE is getting from your OpenMV Cam. But, your OpenMV Cam can actually run faster than OpenMV IDE sometimes, and OpenMV IDE is only sampling some of the frames from your OpenMV Cam and not all. Anyway, OpenMV IDE's FPS will never be faster than your OpenMV Cam's FPS, but it may be slower.
3.9. Tools¶
You'll find useful tools for your OpenMV Cam under the Tools Menu in OpenMV IDE. In particular, the Save open script to your OpenMV Cam and Reset OpenMV Cam tools are useful for using your OpenMV Cam when developing an application.
- The Configure OpenMV Cam Settings file tool allows you to modify an .ini file stored on your OpenMV Cam using OpenMV IDE, which your OpenMV Cam will read on bootup for particular hardware configurations.
- The Save open script to your OpenMV Cam command saves whatever script you're currently looking at to your OpenMV Cam. Additionally, it can also automatically strip white-space and comments from your script so it takes up less space. Use this command once you think your program is ready to deploy without OpenMV IDE. Note that this command will save your script as main.py on your OpenMV Cam's USB flash drive. main.py is the script your OpenMV Cam will run once it finishes booting up.
- The Reset OpenMV Cam command resets and then disconnects from your OpenMV Cam. You'll want to use this option if you run a script that creates files on your OpenMV Cam, as Windows/Mac/Linux won't show you any files on your OpenMV Cam created programmatically in a Python script after your OpenMV Cam's USB flash drive is mounted.
Next, under the Tools Menu you can invoke the boot-loader to re-program your OpenMV Cam. The boot-loader can only be invoked while OpenMV IDE is disconnected from your OpenMV Cam. You can either give it a binary .bin file to re-program your OpenMV Cam with or a .dfu file. The boot-loader feature is only for advanced users who plan on changing the default OpenMV Cam firmware.
3.10. Open Terminal¶
The Open Terminal feature allows you to create new serial terminals using OpenMV IDE to remotely debug your OpenMV Cam while it is not connected to your computer. The Open Terminal feature can also be used to program any MicroPython development board. You can use the Open Terminal feature to open terminals connected to serial ports, TCP ports, or UDP ports. Note that serial ports can be bluetooth ports for wireless connectivity.
3.11. Machine Vision¶
The Machine Vision submenu includes a lot of Machine Vision tools for your OpenMV Cam. For example, you can use the color threshold editor to get the best color tracking thresholds for image.find_blobs(). We'll make new machine vision tools available regularly to make life easier for you.
3.12. Video Tools¶
If you need to compress a .gif file produced by your OpenMV Cam or convert a .mjpeg or ImageWriter .bin video file to .mp4, you can use the convert video file action to do this.
Alternatively, if you'd just like to play these videos instead you can do that too using the play video file action. Note that you should copy video files from your OpenMV Cam's flash drive to your computer first before playing them, since disk I/O over USB on your OpenMV Cam is slow. Finally, FFMPEG is used to provide conversion and video player support and may be used for any non-OpenMV Cam activities you like. FFMPEG can convert/play a large number of file formats.
- To convert a video file into a set of pictures, select the video file as the source and make the target a "%07d.bmp"/"%07d.jpg"/etc. file name. FFMPEG understands printf()-like format statements with an image file format extension to mean it should break the video file up into still images of the target format.
- To convert a series of still images into a video, set the source file name to "%7d.bmp"/"%7d.jpg"/etc. where all the images in a directory have a name like 1.bmp, 2.bmp, etc. FFMPEG understands printf()-like format statements with an image file format extension to mean it should join those image files together into a video.
- To convert an ImageWriter file into any other video format, select the file as the source and the target to be whatever file format you want.
- To convert a video file of any format into an ImageWriter file, select the video file you want to convert as the source and set the target to be a .bin file. FFMPEG will then break the video into JPGs and OpenMV IDE will turn these JPGs into RAW Grayscale or RGB565 frames saved to the .bin file using the ImageWriter file format.
- To optimize a .gif file saved by your OpenMV Cam for the web, set the source file to be that .gif and the target file to be another .gif.
- To convert an MJPEG file saved by your OpenMV Cam to another format, set the MJPEG file as the source and another file using another format (like .mp4) as the target.
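To tie the run button, frame buffer viewer, histogram display, and Serial Terminal together, here is a minimal sketch of the kind of MicroPython script you would run from OpenMV IDE. It is based on the sensor.snapshot() and image.find_blobs() calls mentioned above; the LAB threshold tuple is a placeholder you would replace with values read off the histogram display or the color threshold editor, and exact arguments may vary with your firmware version.

    import sensor, time

    sensor.reset()                        # initialize the camera sensor
    sensor.set_pixformat(sensor.RGB565)   # color images
    sensor.set_framesize(sensor.QVGA)     # 320x240
    sensor.skip_frames(time=2000)         # let the sensor settle after setup

    clock = time.clock()
    while True:
        clock.tick()
        img = sensor.snapshot()           # this frame appears in the frame buffer viewer
        # Placeholder (L, A, B) min/max thresholds; tune them with the histogram display.
        for blob in img.find_blobs([(30, 100, -64, 64, -64, 64)]):
            img.draw_rectangle(blob.rect())
        print(clock.fps())                # shows up in the Serial Terminal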
http://docs.openmv.io/openmvcam/tutorial/openmvide_overview.html
2018-08-14T10:42:06
CC-MAIN-2018-34
1534221209021.21
[]
docs.openmv.io
We offer complete documentation for all of our PMS products, including the latest versions of every application and downloadable forms. Just click on your BookingCenter system below and you will find complete user guides, release notes, manuals and further resources. For Troubleshooting, FAQ and best practices, go to our Knowledge Base. If you need immediate assistance, please submit a Support Ticket.
Need help? Submit a ticket
Articles and FAQ
Step-by-step tutorials
https://docs.bookingcenter.com/pages/diffpagesbyversion.action?pageId=557083&originalVersion=172&revisedVersion=201
2018-08-14T11:10:51
CC-MAIN-2018-34
1534221209021.21
[]
docs.bookingcenter.com
Authorization and Authentication
Token
Create a permanent Access Token for a user: follow these instructions.
https://developer-docs.bynder.com/API/Authorization-and-authentication/
2018-12-09T22:15:46
CC-MAIN-2018-51
1544376823183.3
[]
developer-docs.bynder.com
Problem
You created an on-host integration with New Relic Infrastructure, but you are not seeing data in the New Relic Infrastructure UI.
Solution
To troubleshoot and resolve the problem:
- Verify that you have an Infrastructure Pro subscription. With the exception of the AWS EC2 integration, all other integrations require Infrastructure Pro.
- Check to see if the integration is activated. Go to Infrastructure > On host integrations and look for the integration name under Active integrations.
- Verify that your on-host integration meets New Relic Infrastructure's integration requirements. If you are not receiving data from your on-host integration, verify that your integration follows these requirements.
- After ruling out common problems with integration requirements, follow the more in-depth troubleshooting procedures for error logs and integration loading:
- Check the integration log file for error messages.
https://docs.newrelic.com/docs/integrations/host-integrations/troubleshooting/not-seeing-host-integration-data
2018-12-09T21:56:29
CC-MAIN-2018-51
1544376823183.3
[]
docs.newrelic.com
TOTP and HOTP¶
One-time passwords (OTPs) are commonly used as a form of two-factor authentication. Crypto can be used to generate both TOTP and HOTP in accordance with RFC 6238 and RFC 4226 respectively.
- TOTP: Time-based One-Time Password. Generates password by combining shared secret with unix timestamp.
- HOTP: HMAC-Based One-Time Password. Similar to TOTP, except an incrementing counter is used instead of a timestamp. Each time a new OTP is generated, the counter increments.
Generating OTP¶
OTP generation is similar for both TOTP and HOTP. The only difference is that HOTP requires the current counter to be passed.
import Crypto

// Generate TOTP
let code = TOTP.SHA1.generate(secret: "hi")
print(code) // "123456"

// Generate HOTP
let code = HOTP.SHA1.generate(secret: "hi", counter: 0)
print(code) // "208503"
View the API docs for TOTP and HOTP for more information.
Base 32¶
TOTP and HOTP shared secrets are commonly transferred using Base32 encoding. Crypto provides conveniences for converting to/from Base32.
import Crypto

// shared secret
let secret: Data = ...

// base32 encoded secret
let encodedSecret = secret.base32EncodedString()
See Crypto's Data extensions for more information.
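For readers who want to see what generate(secret:) computes under the hood, the following is a minimal Python sketch of the RFC 4226/6238 algorithms themselves. It is not Vapor's Swift API, and the Base32 secret shown is just an example value; the point is only that TOTP is HOTP with the counter derived from the current time step.

    import base64, hashlib, hmac, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
        # then "dynamic truncation" down to a short decimal code.
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        binary = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(binary % 10 ** digits).zfill(digits)

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        # RFC 6238: the counter is the number of time steps since the Unix epoch.
        return hotp(secret, int(time.time() // step), digits)

    secret = base64.b32decode("JBSWY3DPEHPK3PXP")  # example Base32-encoded shared secret
    print(totp(secret))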
http://docs.vapor.codes/3.0/crypto/otp/
2018-12-09T21:47:46
CC-MAIN-2018-51
1544376823183.3
[]
docs.vapor.codes
The SmartExcelCollector works in conjunction with the SpreadSheetFeedConfig bean that is used to describe how the rows and columns of data are to be interpreted. Data is generally processed column by column. For example, one column might be defined as containing identifiers, another column might be defined as containing resource quantities, and a third column might be defined as containing the usage date. In summary, columns can contain the following data types: time and date values, identifiers, and resources.
The feed configuration is responsible for describing the input of the XLS file. In general, rows from the file are transferred to CC Record files. An Excel spreadsheet feed can be any data contained in a spreadsheet document. You can map a set of columns from a single worksheet to an output CC Record file for processing. Within the feed configuration area, you describe which columns contain time and date values, identifiers, and resources.
If the Excel file contains a header row, the OutputField can reference a value from the header row as its input. Otherwise, InputField objects can be used to name the input columns for mapping to output. Another option to name identifiers and resources when no header exists is to set the input property of OutputField to a column number, and set the label property to a name or the labelInput property to a column number. Column numbers begin at 1 and the first column is not always equivalent to column 'A'. Empty columns before or after data in the spreadsheet are ignored. Empty rows before or after data in the sheet are also ignored. This should be taken into account when specifying input column numbers or setting a value for the SpreadSheetFeedConfig's linesToSkip property. Blank rows in the middle of the data set can cause processing errors.
Columns in the Excel file can contain time and date data types. The following shows an example of the types of usage that can be output and mapped. Columns in the Excel file can contain identifier and resource data types. The following shows an example of the types of data that can be output and mapped.
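Since the example figures referenced above are not reproduced here, the following is a rough, purely illustrative Python sketch of the column-mapping idea described in this section: mapping identifier, resource quantity, and usage date columns either by header name or by 1-based column number. It is not the actual SpreadSheetFeedConfig bean format, and all names and values are hypothetical.

    # Hypothetical worksheet rows; the first row is a header.
    rows = [
        ["AccountID", "Quantity", "UsageDate"],
        ["acct-001", 12, "2018-11-01"],
        ["acct-002", 7, "2018-11-01"],
    ]

    header = rows[0]

    def by_header(name):
        """Locate an input column by header value (header-based mapping)."""
        return header.index(name)

    def by_column(number):
        """Locate an input column by 1-based column number (when no header row exists).
        Empty leading columns would already have been skipped."""
        return number - 1

    records = []
    for row in rows[1:]:
        records.append({
            "identifier": row[by_header("AccountID")],
            "resource_quantity": row[by_header("Quantity")],
            "usage_date": row[by_header("UsageDate")],
        })
    print(records)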
https://docs.consumption.support.hpe.com/CC3/03Setting_Up_Collection/02Universal_collectors/Excel_Collector/Data_mapping
2018-12-09T21:36:50
CC-MAIN-2018-51
1544376823183.3
[]
docs.consumption.support.hpe.com
Problem
You do not see transaction traces in the New Relic APM UI.
Solution
If you are not seeing a transaction trace in New Relic APM, there are several possible reasons.
- Your New Relic APM subscription does not include transaction tracing. If you want to take advantage of transaction tracing, upgrade your New Relic APM subscription level.
- Transaction tracing was disabled. Transaction traces are enabled by default, but the setting may have been manually disabled. To solve this problem, change the setting back to enable transaction trace settings.
- The transaction does not meet criteria for tracing. The transaction is not meeting the criteria for being traced. For example:
- It is responding more quickly than the 4-times-Apdex threshold or a custom number-of-seconds threshold.
- It is not being selected as the slowest trace during the minute-long harvest cycle.
- App activity is not being captured as a transaction. If you don't see the transaction listed on the New Relic APM Transactions page, your New Relic APM agent is not capturing the activity as a transaction. In this situation, you must set up custom instrumentation to monitor the activity as a transaction.
- The custom transaction name includes brackets. If you used custom instrumentation to manually create a transaction, make sure you follow New Relic's naming rules. Do not use brackets [suffix] at the end of your transaction name. New Relic automatically strips brackets from the name. Instead, use parentheses (suffix) or other symbols if needed.
- No traces appear for any transactions. Change the Threshold value temporarily to 0.0001 so that transactions will definitely exceed the threshold. Be sure to return this to your original setting after you start receiving transaction traces.
https://docs.newrelic.com/docs/apm/transactions/transaction-traces/troubleshooting-not-seeing-transaction-traces
2018-12-09T22:22:32
CC-MAIN-2018-51
1544376823183.3
[]
docs.newrelic.com
Spec PackedPair
Stores two arbitrary objects. Saves memory by disabling memory alignment.
Template Parameters
Member Function Overview
Member Functions Inherited From ComparableConcept
Member Functions Inherited From EqualityComparableConcept
Member Functions Inherited From LessThanComparableConcept
Interface Function Overview
Interface Functions Inherited From Pair
Interface Functions Inherited From ComparableConcept
Detailed Description
Useful for external storage. Memory access could be slower. Direct access to members by pointers is not allowed on all platforms. Function value() is not implemented yet since it would require using a proxy. Use getValue(), assignValue(), moveValue(), setValue() instead.
http://docs.seqan.de/seqan/2.1.0/specialization_PackedPair.html
2018-12-09T21:21:27
CC-MAIN-2018-51
1544376823183.3
[]
docs.seqan.de
The RhoConfig API allows access to the configuration properties. Refer to Run time configuration for a listing of the configuration properties, and some examples.
The properties in the rhoconfig.txt file for a Rhodes application are available through the RhoConfig API. To access a property, use the name of the property as the method name. For example, the following method returns the start path for your Rhodes application:
Rho::RhoConfig.start_path
For a list of the configuration properties, click here.
exists?
Checks to see if a configuration property exists for this Rhodes application.
Rho::RhoConfig.exists?(configuration-property)
http://docs.tau-technologies.com/en/2.2.0/rhodesapi/rhoconfig-api
2018-12-09T21:33:54
CC-MAIN-2018-51
1544376823183.3
[]
docs.tau-technologies.com
Elements 10
Elements 10 ("Intrepid Class") is the current release phase of the Elements compiler. It ships in continuous weekly updates, starting November 2017.
Please refer to the weekly change logs as well as the continually updated What's New in Elements page on the Elements Compiler Website for details on what is new and changed.
Other Releases
Elements 9.3 ← Elements 10
https://docs.elementscompiler.com/Versions/Elements10/
2018-12-09T22:05:03
CC-MAIN-2018-51
1544376823183.3
[]
docs.elementscompiler.com
Open Restaurant 2 is built on Drupal 8. It comes with a menu management system, a reservation system, a customizable blog, events management and a responsive theme. Want to help us test it? See the installation guide. Features - Menu management - A powerful management system for creating menus, uploading menu pictures, and categorization. Support for nutrition information, menu types and prices included. - Multilingual - Support for multiple languages and translation. - Locations - Manage multiple restaurant locations from one dashboard. Create unique menus, address and opening hours for each location. - Events - Create and manage events for your restaurant. - Blog - The distribution comes with a blog/news system that you can easily customize. - Responsive theme - Works on all your devices.
http://docs.open.restaurant/en/2.x/
2018-12-09T21:17:54
CC-MAIN-2018-51
1544376823183.3
[]
docs.open.restaurant
fn() createInvSuffixArray
Creates the inverse suffix array from a given suffix array.
Parameters
Detailed Description
This function should not be called directly. Please use indexCreate or indexRequire. The size of invSuffixArray must be at least length(suffixArray) before calling this function. The complexity is linear in the size of the suffix array.
Data Races
If not stated otherwise, concurrent invocation is not guaranteed to be thread-safe.
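As a conceptual aside (this is not SeqAn code, which is C++), the inverse suffix array is simply the inverse permutation of the suffix array, which is why it can be built in linear time. A minimal Python sketch:

    def create_inv_suffix_array(suffix_array):
        # Inverse permutation: isa[sa[rank]] = rank, computed in linear time.
        isa = [0] * len(suffix_array)
        for rank, suffix_start in enumerate(suffix_array):
            isa[suffix_start] = rank
        return isa

    # Suffix array of "banana" (suffix start positions in lexicographic order).
    sa = [5, 3, 1, 0, 4, 2]
    print(create_inv_suffix_array(sa))  # [3, 2, 5, 1, 4, 0]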
http://docs.seqan.de/seqan/2.4.0/global_function_createInvSuffixArray.html
2018-12-09T21:20:37
CC-MAIN-2018-51
1544376823183.3
[]
docs.seqan.de
You can edit the stored procedures that HPE Consumption Analytics Portal uses to perform various actions in its database. To do this, go to Administration > DB Object Maintenance. You can edit stored procedures directly in this window or download them to your local drive and upload them to the database server after editing them. CAUTION: These procedures are intended for editing by HPE personnel. HPE recommends that you make a backup copy of a stored procedure before modifying it.
https://docs.consumption.support.hpe.com/CC3/05Administering/Managing_stored_procedures
2018-12-09T21:45:05
CC-MAIN-2018-51
1544376823183.3
[]
docs.consumption.support.hpe.com
Properties
Properties provide abstraction of a type's data by combining the concepts of Fields (to store data in a Class or Record) and Methods (to perform actions on that data) into a single entity.
When accessed, a property behaves much like a field – it has a value that can be read and (optionally) updated. But unlike fields, accessing a property does not directly give unfettered access to the stored data in the class or record. Instead, access to a property goes through custom method-like getter and setter code. This provides three main benefits:
- Properties can be read/write or read-only (and in rare cases even write-only)
- Setter code can validate new values
- Setter code can perform additional actions (such as updating related values)
Combined, these aspects allow classes (and records) to take control of the data by not allowing outside access to their fields, which any external code could modify in an uncontrolled manner. In fact, it is considered good practice to have all fields of a class marked private, so only the class's code itself can access them, and funnel all external modifications through properties (or regular Methods, of course).
A fourth benefit of properties is that their getters can generate or modify the returned value dynamically – so not every property necessarily maps directly to a value stored in a field.
Property Declaration Syntax
A simple property declaration consists of the property keyword, followed by a property name and a (result) type, separated with a colon and optional getter (read) and setter (write) statements:
property Name: String read fName write SetNameAndUpdateView;
If only a getter or only a setter is provided, the property will be read-only or write-only, respectively. If neither getter nor setter is provided, the compiler will automatically provide a field for storage, and a simple getter and setter that uses that field. Such a property works much like a regular Field, then, from a usage level.
The getter can be any Expression that returns the correct type. This could be a simple field access (as in the example above), a method, or a more complex expression:
property Name: String read fName;
property Name: String read GetName;
property Name: String read FirstName+" "+LastName;
method GetName: String;
The setter can be any Writable Expression (such as a field, another property or even a Discardable), or the name of a method that takes a single parameter of the right type:
property Name: String read fName write fName;
property Name: String read fName write SetName;
property Name: String read fName write nil;
method SetName(aNewName: String);
Alternatively, a full begin/end block of statements can also be provided for either the getter or the setter. In this case, the value Expression can be used to access the incoming value, within the setter code:
property Value: Double
  read begin
    result := GetInternalValue;
    result := (result + 5) * 8;
  end
  write begin
    SetInternalValue(value / 8 - 5);
  end;
Stored Properties
As mentioned above, if neither a getter nor a setter is provided, the property will be read/write, and the compiler will automatically generate getters and setters that store and obtain the value from a (hidden) backing variable.
In this case, the property behaves very much like a plain field:
property Name: String; // internally stored in a hidden String var
Different than an actual field, stored properties are still exposed via getters and setters, so they can be "upgraded" to use custom getters or setters later, without breaking binary compatibility of a type. Also, they still will support the notify Modifier, and other property-specific features.
Stored properties can be marked with the readonly Member Modifier to become read-only. Read-only properties can still be written to from an Initializer or from the class's Constructors – but they cannot be modified once construction of an instance has completed.
Initializers
Like Fields, Stored Properties can be assigned an initial value right in their declaration by having the property declaration closed off with the := operator followed by an expression. Optionally, they can be marked with the lazy Member Modifier to defer execution of the initializer until the first time the property is accessed.
property Name: String := 'Paul';
Please also refer to the Constructors: Initializers topic for more detail on when and how fields get initialized.
Indexer Properties
While regular properties represent a single value (of an arbitrary type, of course), indexer properties can provide access to a range of values of the same type. This is similar in concept to an Array, but each access – read or write – goes through the proper getter or setter code.
An indexer property is declared by providing one or more parameters after the property name, enclosed in square brackets. Indexer properties cannot be stored properties, so either a read or a write statement is required.
property Items[aIndex: Integer]: String read ... write ...;
The rules for read and write statements are similar to regular properties: the name of the indexer parameter(s) may be used in the statements, and if the name of a getter or setter method is provided, the signature of this method must include parameters that match the property's parameters:
property Items[aIndex: Integer]: String read fMyArray[aIndex];
property Items[aIndex: Integer]: String read GetItems write SetItems;
method GetItems(aIndex: Integer): String;
method SetItems(aIndex: Integer; aValue: String);
Of course, an indexer property does not necessarily have to be backed by an array-like structure; it can also generate (or store) values more dynamically. In the example below, the IntsAsStrings property could be accessed with any arbitrary index, and would return the appropriate string.
property IntsAsStrings[aIndex: Integer]: String read aIndex.ToString;
...
var s := myObject.IntsAsStrings[42];
Indexer properties can have more than one parameter (i.e. be multi-dimensional), and – different than Arrays – they can be indexed on any arbitrary type, not just Integers.
Note that, also unlike arrays, indexer properties themselves have no concept of a count, or a valid range of parameters. It is up to the type implementing the property to provide clear semantics as to how an indexer can be accessed. For example, a List class indexed with integer indices might expose a separate Count property, while a dictionary would allow arbitrary indexes – and might decide to raise an exception, or return nil for values not in the dictionary.
Default Indexers
One indexer property per class (optionally overloaded on type) can be marked with the default Member Modifier.
The default property can then be accessed by omitting the property name when accessing the indexer off an instance (or type) itself:
type
  MyClass = public class
  public
    property Items[aIndex: Integer]: String read ...; default;
...
var s := myObject[0]; // same as myObject.Items[0];
Property Notifications
Non-indexed properties can optionally be marked with the notify Member Modifier. Properties marked with notify will emit special platform-specific events whenever they are changed – allowing other parts of code to be notified about and react to these changes. How these notifications work and how they can be retrieved depends on the underlying platform. Notifications are used heavily in WPF on .NET or with Key-Value-Observation (KVO) on Cocoa. Please refer to the Property Notifications topic for more details on this.
Storage Modifiers (Cocoa)
On the Cocoa platform, the type of a stored property declaration can be amended with one of the weak, unretained or strong Storage Modifier keywords, with strong being the implied default.
property Value: weak String;
To specify a Storage Modifier, the type cannot be inferred, but must be explicitly specified. Inferred types will always be considered strong.
Cocoa Only
Storage Modifiers are necessary on Cocoa only, to assist with Automatic Reference Counting (ARC). They can be specified on all platforms, but have no effect when using GC.
Static Properties
Like most type members, properties are by default defined on the instance – that means the property can be called on and will execute in the context of an instance of the class. A property can be marked as static by prefixing the property declaration with the class keyword, or by applying the static Member Modifier:
class property Name: String; // static property on the class itself
property Name2: String; static; // also a static property on the class itself
Visibility
The visibility of properties is governed by the Visibility Section of the containing type the property is declared in, or the Visibility Modifiers applied to the property. By default, both getter and setter of the property are accessible on that visibility level, but visibility can be overridden by prefixing either the getter or the setter with a separate visibility keyword:
property Name: String read private write; // readonly for external access
Virtuality
The Virtuality of properties can be controlled by applying one of the Virtuality Member Modifiers.
property Name: String read; abstract;
Properties can be marked as abstract, if a descendant class must provide the implementation. Abstract properties (and properties in Interfaces) may not define a getter or setter, but they can optionally specify the read and/or write keywords to indicate whether a property can be read, written or both:
property A: String; abstract; // read/write
property B: String read; abstract; // read-only
property C: String write; abstract; // write-only
property D: String read write; abstract; // also read/write
Other Modifiers
A number of other Member Modifiers can be applied to properties.
copy
default (See Default Indexers, above)
deprecated
implements ISomeInterface.SomeMember
implements ISomeInterface
inline
lazy (See Lazy Initializers, above)
locked
locked on Expression
mapped to (See Mapped Members)
notify (See Property Notifications, above)
optional (Interface members only)
readonly (See Read-only Stored Properties, above)
raises
unsafe
See Also
- Property Access Expressions
- Property Notifications
- Fields
- Storage Modifier
- Arrays
- Block Types
https://docs.elementscompiler.com/Oxygene/Members/Properties/
2018-12-09T21:18:02
CC-MAIN-2018-51
1544376823183.3
[]
docs.elementscompiler.com
Resource Paths
In some cases, identifying a resource requires information from multiple objects. For example, Deployments with the same name may exist in different Applications. While a UUID can be used to identify the resource, this is not very easy to use. In these situations, a Path can be used to identify a resource contained in a sub-tree.
Each path element identifies a unique Resource:
"/" - Selects the root resource
"/Entity[field=value, field=value]" - Selects a resource named 'Entity' queried by the field-value pairs
For example, suppose you have two Applications "Hello-World-1" and "Hello-World-2", and each of these applications has a single Deployment "hello". The following path will select the Deployment "hello" in the Application "Hello-World-1":
/applications/[name=Hello-World-1]/deployment[name=hello]
Since the Application "Hello-World-1" has a single service in it, the path could also be written as:
/application[name=Hello-World-1]/deployment
Paths can only be used when referring to a related resource (i.e. a reference relationship) as an alternative to using the UUID for the resource. In this form, a path is written as:
{
"id": "path:/application[name=Hello-World-1]/deployment"
...
}
Another way to specify the path, instead of a string with the "path:" prefix, is to use a JSON attribute:
{
"path": "/application[name=Hello-World-1]/deployment"
...
}
https://docs.nirmata.io/restapi/resource_paths/
2018-12-09T21:16:49
CC-MAIN-2018-51
1544376823183.3
[]
docs.nirmata.io
Contents - Types of Container Connections - Specifying Container Connections - Syntax for Expressing Container Connections - Container Connection Predicate Filters - How the Server Makes Container Connections - Using Container Connections to Share a Stream - Container Start Order - Schema Matching for Container Connections - Dynamic Container Connections - Synchronous Container Connections - Remote Container Connection Parameters - Related Topics This topic discusses how to establish and maintain connections between streams in different application modules running in different containers. See EventFlow Container Overview for an introduction to StreamBase containers. StreamBase supports the following types of container connections. In this configuration, you set up an application module's input stream in one container to receive tuples from an output stream of another module in another container, possibly running on a separate StreamBase Server. Stream-to-stream connections can be of two types: Asynchronous. In the default form of stream-to-stream container connection, the flow of tuples is set to run asynchronously, with a queue automatically set up by StreamBase to manage the flow through the connection. Synchronous. You can optionally specify low latency, direct connections between container streams without the benefit of queue management. This option offers a possible latency improvement for certain circumstances. In general, asynchronous connections improve throughput, and synchronous connections improve latency. See Synchronous Container Connections. Stream-to-stream connections can also be made from an output stream to an input stream within the same container, where the destination and source container name is the same. Intra-container connections must be Asynchronous, and the same access restrictions apply: that is, streams you wish to connect to or from in sub-modules must be marked as Always exposed in EventFlow modules, or public in StreamSQL modules. In this configuration, you connect a container's output stream to a URI that specifies the absolute path to a CSV file. All tuples sent on this port are then written to the specified file. You can also connect a CSV file's URI to an input stream; in this case, the input stream is read from the CSV file. See Connecting Container Streams To a CSV File. StreamBase provides several methods of specifying a connection between containers. Some methods are interactive (for testing and management of containers on running servers), while some methods specify that the connection is made immediately when the enclosing server starts (or when the enclosing container on that server starts). However, all methods are equally effective. Use the method that works best for your application and its stage of development or deployment, but only specify one method per connection for a given run of StreamBase Server. For example, do not specify a connection between containers A and B in both the Run Configuration dialog and in the Advanced tab of a stream. Each container connection must be specified only once for each connected pair of containers. Do not specify a container connection in the output stream of the sending container and again in the input stream of the receiving container. You can specify the connection in either the sending or receiving container to the same effect. Do not specify the container connection in both places. 
The methods of specifying container connections are:
In a server configuration file (deprecated)
In Studio, input and output stream Properties views
From the command line, using the sbadmin addContainer or sbadmin modifyContainer command to communicate with a running server.
Using StreamBase Manager, with commands in the context menu for servers and containers.
In StreamBase Studio, in the Advanced tab of the Properties view for either an input stream or output stream, use the Container connection field to specify a StreamBase expression that resolves to the qualified name of a stream in another container. A qualified stream name follows this syntax:
container-name.stream-name
When using the Advanced tab for an output stream, you are already in the source stream, so you only need to specify the qualified name of the destination stream. Similarly, in the Advanced tab for an input stream, you are in the destination stream, and only need to specify the qualified name of the source stream.
You must express the container-qualified stream name as a StreamBase expression. The expression is evaluated once at start time. For the same reason, the container connection expression cannot resolve any value of dynamic variables, including the default value.
Specify each container connection only once, in either the sending or the receiving stream. Do not specify the container connection in both places.
When you use the Advanced tab to specify a container connection, the connection is stored as part of the EventFlow XML definition of the module, and thus travels with the module. As long as the container names are the same, the same connection will be made whether the module is run in Studio or at the command prompt with the sbd command. The icons for input and output.
There are restrictions on the order in which you start connected containers, both at the command line and in Studio. See Container Start Order for details.
When you use the sbadmin addContainer or sbadmin modifyContainer commands, either at the command prompt or in a script, you can specify a container connection at the same time. The syntax is:
sbadmin addContainer container-name application.[sbapp|ssql|sbar] connection1 connection2 ...
The following example cascades tuples through three applications, each running in its own container, A, B, and C:
sbadmin addContainer A preprocess.sbapp
sbadmin addContainer B mainprocess.sbapp B.InputMain=A.OutputPre
sbadmin addContainer C postprocess.sbapp C.InputPost=B.OutputMain
A tuple enqueued into container A has the following itinerary:
Processed by preprocess.sbapp in container A.
Exits container A through output stream OutputPre.
Queued to container B's input stream InputMain.
Processed by mainprocess.sbapp in container B.
Exits container B through output stream OutputMain.
Queued to container C's input stream InputPost.
Processed by postprocess.sbapp in container C.
The syntax for expressing container connections with sbadmin is always in the order destination=source, much like an assignment statement in many programming languages:
dest-container.instream=source-container.outstream
See the sbadmin reference page for more on the sbadmin command and its options.
When using StreamBase Manager to monitor a running application (either in the SB Manager perspective in Studio or when run as a standalone utility), you can use it to:
Add a container and optional container connection to a running server.
Remove a container from a running server.
Pause, resume, or shut down the application running in a container on a running server.
To do so, select a server name in the Servers view on the left, and open the tree to select an entry in the Containers tree. Right-click and select options from the context menu, as described in Context Menu Actions for Servers and Context Menu Actions for Containers.
In releases before 7.0, container connections could be specified in the <runtime> element of server configuration files, using a syntax similar to the current deployment file syntax. Such configuration files are still supported for backward compatibility. However, TIBCO strongly recommends migrating the <runtime> portion of existing server configuration files to deployment files.
To specify a container connection, you must specify both source and destination stream names, and you must qualify the stream names with their container names. The following syntax is explicitly required when using the sbadmin command, and is expressed in other ways with the other container connection methods:
dest-container.instream=source-container.outstream
If the incoming or outgoing stream is in a module called by a top-level application in a container, then you must specify the module name as well:
dest-container.moduleA.instream=source-container.outstream
dest-container.instream=source-container.moduleB.outstream
dest-container.moduleA.instream=source-container.moduleB.outstream
If the source container is running on a remote StreamBase Server (or if the server is running on a different port on the same host), specify a full StreamBase URI enclosed in parentheses:
dest-container.instream=(sb://remotesbhost:9955/app3.outstream)
dest-container.instream=(sb://localhost:10001/default.outstream6)
You can use a StreamBase URI for the receiving container, as well. For example:
(sb://remotehost:8855/app2.instream)=source-container.outstream
(sb://localhost:10001/default.instream4)=source-container.outstream
You can use the same syntax for remote hosts in both the dest and source attributes in a deployment file:
<deploy ...>
<runtime>
<application file="primary.sbapp" container="A" />
<application file="secondary.sbapp" container="B" />
<container-connections>
<container-connection ... />
<container-connection ... />
</container-connections>
</runtime>
</deploy>
The remote host syntax also works in the Advanced tab of input and output streams, but not when using the Run Configuration dialog in Studio or StreamBase Manager.
In three of the ways to define a container connection, you can specify a predicate expression filter to limit the tuples that cross to the destination container. The expression to enter must resolve to true or false, and should match against one or more fields in the schema of the connection's stream. You can specify a container connection predicate filter in:
With the --where "expression" argument to the sbadmin modifyContainer command (but not using the addContainer command).
With the where="expression" attribute of the <container-connection> element in StreamBase deployment files.
With the Container connection filter expression field in the Advanced tab of the Properties views for input streams and output streams.
There is no mechanism to specify a predicate filter for container connections specified in Studio's Launch Configuration dialog, in StreamBase Manager, or in the server configuration file.
For example, a deployment file could have a <container-connection> element like the following: <container-connections> <container-connection </container-connections> When using sbadmin: sbadmin modifyContainer holder2.input=default.output3 --where "tradeID % 2 == 0" When using the filter expression field on the Advanced tab in the Properties view of an Input Stream or Output Stream, enter only the expression: tradeID % 2 == 0 With a valid container connection specified, when the module containing the connection starts, the hosting server locates the specified container and stream, and makes the connection. As the application runs, tuples exiting one container's output stream are consumed by the specified input stream in the other container. The schema of both sides of a container connection must match, as described in Schema Matching for Container Connections. If the hosting server cannot make the container connection for any reason, it stops and writes an error message to the console. The likely reason for a failure to make a container connection is a failure to locate either the specified container or the specified input stream in the module running in that container. In this case, check the spelling of your container connection specification. When a container is removed, the input or output to any dependent container disappears. The dependent container continues to function, but its input or output is removed. Stream-to-stream container connections are not limited to one container on each side of the connection. That is, you can configure: Two or more containers to share the same input stream. Two or more containers to share the same output stream. When input and output streams are shared, the streams are still owned by the original container. This means that any input or output to the shared stream ultimately needs to go through the original container. When tuples are passed between asynchronous container connections, they are passed using the same queuing mechanism used between parallel modules. This means that the tuple is queued for the other containers. When the tuple is queued, the original container must wait for the tuple to be pushed onto the queue, but will not be blocked by the processing of the tuple (which is done in another thread). Because of the queuing architecture, there is no guarantee of the order of tuples when they are shared between multiple containers. In the following example, containers A and B share the stream A.InputStream1. When a tuple is enqueued into A.InputStream1 the tuple is first passed to all containers that share that stream (in this case, container B) and then it is processed to completion by itself. sbadmin addContainer A app.sbapp sbadmin addContainer B app.sbapp B.InputStream1=A.InputStream1 Nothing prevents you from enqueuing directly to an input stream in container B while that stream also receives input from an upstream connection from container A. Because upstream tuples are queued, any input directly into container B is randomly interleaved with input from container A. The use of container connections imposes some restrictions on the startup order of containers, especially when making connections with sbadmin commands. The general rule is: the container that holds the receiving end of a container connection must be running when attempting to connect the containers. When you run a deployment file, it starts all containers first, then tries to make any specified container connections. 
Because all containers are running, all container connections succeed. Thus, container start order issues do not arise when using deployment files. However, it is possible to specify a container connection as part of an sbadmin addContainer command. In this case, the receiving end of the container connection must be started first. You may need to control the start order of an application's containers. For example, one container might need to contact an external market data feed or an external adapter before other containers can start. Container start order might be significant for ordering the start of Java operators. You can control container start order in the following ways: - Launch Configuration dialog In Studio, in the Containers tab of the launch configuration dialog, use the Up and Down buttons to arrange your containers in the desired start order, top to bottom. - Using sbadmin commands Containers specified with sbadmin commands at the command prompt or in scripts are started in the order of the commands, and in the order of arguments to any one sbadmin command. In the following example for UNIX, the sbd server is started without specifying a top-level application, and containers are added in A-B-C sequence: sbd -b sbadmin addContainer A preprocess.sbapp sbadmin addContainer B mainprocess.sbapp B.InputMain=A.OutputPre sbadmin addContainer C postprocess.sbapp C.InputPost=B.OutputMain Because of the way Windows launches processes, you must specify at least one argument when specifying sbd without an application argument. You can specify a port number for this purpose, even if you only re-specify the default port. Run the same commands as above at a StreamBase Command Prompt on Windows by starting with a command like the following: sbd -p 10000 sbadmin addContainer A preprocess.sbapp ... You can emulate the behavior of deployment files by starting all containers first, then making container connections with the modifyContainer subcommand: sbd -b sbadmin addContainer A preprocess.sbapp sbadmin addContainer B mainprocess.sbapp sbadmin addContainer C postprocess.sbapp sbadmin modifyContainer B addConnection B.InputMain=A.OutputPre sbadmin modifyContainer C addConnection C.InputPost=B.OutputMain StreamBase Studio imposes additional restrictions on running or debugging modules with container connections: Studio always starts its primary application in a container named default. You cannot use the launch configuration's Containers tab to specify a container connection for streams in the primary application. However, you can instead: Use the launch configuration Containers tab to specify the same connection from the point of view of the other container in the connection. Specify the same connection in the Advanced tab of an input or output stream in the primary application. Even with these restrictions, it is possible to use Studio to run or debug a pair of modules with a container connection between them, as long as the following conditions are met: Use the Containers tab of the Run Configuration dialog to load your secondary module into a separate container, and give the separate container a name. Specify the container connection in one of two ways, but not both: In the Advanced tab of the Properties view of a stream in either the primary or secondary module, OR In the Run Configuration dialog, for the secondary module only. If necessary for your container connection, use the Up and Down buttons in the Containers tab to make sure the container that holds the receiving end of the connection starts first.
The schemas of the outgoing and incoming streams in a container connection must match. Schema matching is enforced in two different ways for different cases: - For Container Connections on the Same Server For stream-to-stream connections between containers running on the same StreamBase Server, the connected streams must have equivalent schemas. That is, the connected streams must share exactly the same field names, field types, sizes, and field positions between the two streams. If the two streams share the same field types, sizes, and names, but are in different order, StreamBase cannot map the streams together. Tip To make sure your input and output stream schemas exactly match, use a named schema in one module and use the imported schemas feature of the Definitions tab in the EventFlow Editor of the other module. Assign the same named schema to both streams. - For Container Connections Between Servers For stream-to-stream connections between containers running on separate StreamBase Servers, fields in the outgoing stream are matched by name against fields in the incoming stream. Fields whose names match must have the same data type in both streams. Any fields in the outgoing schema whose names don't match the incoming schema are not streamed. Any fields in the incoming schema whose names don't match anything in the outgoing schema are set to null. For example, consider an outgoing stream with schema (a int, b (x int, y int, z int)). This stream is connected to an incoming stream with schema (b (y int, z int), c int). The incoming stream does not see fields a or b.x. Fields b.y and b.z are passed from the outgoing stream to the incoming stream, and field c in the incoming stream is set to null. Use sbadmin modifyContainer to dynamically add or remove a container connection from an existing container while StreamBase Server is running: sbadmin modifyContainer container-name [addConnection | removeConnection] connection-expression1 connection-expression2... where each connection-expression has the syntax destination=source as in this example: containerA.instream1=containerB.module1.outstream1 By default, connections between containers are set up to run asynchronously, with a queue to manage the flow of tuples through the connection. StreamBase also supports synchronous container connections, which are low latency, direct connections between containers, and do not have queue management. Specify a synchronous container connection with the sbadmin command by using := (colon-equals) instead of = (equals) in your container connection assignment statement. For example: sbadmin addContainer A app1.sbapp sbadmin addContainer B app2.sbapp sbadmin modifyContainer addConnection B.incoming:=A.outgoing You can also specify a synchronous container connection in a deployment file by adding the synchronicity attribute to the <container-connection> element that specifies your connection: ... <container-connection ... Caution Synchronous container connections do not automatically improve the speed of your application. It is even possible to inadvertently degrade your application's speed; for example, by setting up a synchronous connection to a container that blocks. To determine whether a synchronous container connection will improve your application, you must test your application with and without the synchronous setting, using a test input stream that emulates the actual conditions your application will face in production.
Use the Rules of StreamBase Execution Order in the Execution Order and Concurrency page as general guidelines to help decide whether to use synchronous container connections. StreamBase detects and prevents any attempt to make a synchronous container connection to: Any stream in the system container An endpoint that would cause a loop in processing When using a StreamBase URI as part of a stream-to-stream container connection string to specify that one side of the container connection is on a remote StreamBase Server, you can optionally specify URI parameters as part of the remote URI. The URI parameters you can specify for connections to remote servers depend on which side of the connection is remote. There are two cases: If the URI for the remote server is on the right of the equals sign, then the remote server is the source of the container connection. For example: boxA.instream1=("sb://remotehost:9900/boxB.outstr4") In this case, you can specify one parameter for the connection: reconnect-interval If the URI for the remote server is on the left of the equals sign, then the remote server is the destination of the container connection. For example: ("sb://remotehost:9900/boxC.instr2")=boxD.outstr6 In this case, you can specify up to six parameters for the connection: reconnect-interval enqueue-buffer-size max-enqueue-buffer-size enqueue-flush-interval ConnectOnInit These parameters have the same meanings for container connections as the similarly named property of the StreamBase to StreamBase adapters described in StreamBase to StreamBase Output Adapter and StreamBase to StreamBase Input Adapter. Append one or more container connection parameters to a StreamBase URI, each preceded by a semicolon. For example, to specify non-default buffer sizes for a destination connection, use a container connection string like the following: ("sb://remotehost:9900/boxC.instr2;enqueue-buffer-size=300;max-enqueue-buffer-size=2000")=boxD.outstr6 To filter data as it is enqueued, you can use a --where predicate, as illustrated below for a field named Price: ("sb://remotehost:9900/boxC.instr2;enqueue-buffer-size=300;max-enqueue-buffer-size=2000")=boxD.outstr6 --where Price > 100 See StreamBase to StreamBase Output Adapter for a discussion of these property settings.
http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/admin/container-connections.html
2018-12-09T21:38:32
CC-MAIN-2018-51
1544376823183.3
[]
docs.streambase.com
Retargeting is effective in multiple stages of a marketing funnel. How Does It Work? The first step of retargeting is the implementation of a custom tracking code or pixel. This pixel is used to capture data & information about the visitor, which can be used to identify the user in a variety of platforms. When a user is on your website, the pre-placed tracking code is triggered, placing that user into a designated audience. This audience is then targeted with specific ads, reminding them to complete the funnel journey. Where Will The Retargeting Ads Appear? Target Destinations We can retarget on thousands of websites! These are just a few examples: *We also have the ability to retarget on local websites. We will provide monthly reporting on impressions and clicks.
https://docs.breaktheweb.agency/article/7-what-is-retargeting
2018-12-09T22:43:20
CC-MAIN-2018-51
1544376823183.3
[array(['https://s3.amazonaws.com/assets.convertfox.com/attachment_images/9b5d5eb116403e4d127d5a3cb30a615363074d173bd650c51caa4fcd881c514da47fd21c67698513dbf1f83f23433a7f.png', None], dtype=object) ]
docs.breaktheweb.agency
Here are the changes to nickserv: Go to line 755 and the line there should look like this: /nickserv set email Password [email address] The added part is the Password. Then go to line 771 and add "ILovePeanutButter" to the command there to represent the PASSWORD. That's all. For more information, msg me online and I can clarify it if you have questions.
http://docs.dal.net/pending/corrections/upnickserv.txt
2018-12-09T22:50:48
CC-MAIN-2018-51
1544376823183.3
[]
docs.dal.net
Dependency Management of Asynchronous Operations Batch operations in Amazon ML depend on other operations in order to complete successfully. To manage these dependencies, Amazon ML identifies requests that have dependencies, and verifies that the operations have completed. If the operations have not completed, Amazon ML sets the initial requests aside until the operations that they depend on have completed. There are some dependencies between batch operations. For example, before you can create an ML model, you must have created a datasource with which you can train the ML model. Amazon ML cannot train an ML model if there is no datasource available. However, Amazon ML supports dependency management for asynchronous operations. For example, you do not have to wait until data statistics have been computed before you can send a request to train an ML model on the datasource. Instead, as soon as the datasource is created, you can send a request to train an ML model using the datasource. Amazon ML does not actually start the training operation until the datasource statistics have been computed. The createMLModel request is put into a queue until the statistics have been computed; once that is done, Amazon ML immediately attempts to run the createMLModel operation. Similarly, you can send batch prediction and evaluation requests for ML models that have not finished training. The following table shows the requirements to proceed with different Amazon ML actions.
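To make the dependency handling concrete, here is a minimal boto3 sketch (not part of the original guide) that creates a datasource and immediately requests model training without polling for completion; the IDs, S3 locations, and region are hypothetical placeholders, and Amazon ML queues the createMLModel request internally as described above.

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")  # assumed region

# Hypothetical IDs and S3 locations, for illustration only.
ml.create_data_source_from_s3(
    DataSourceId="ds-example-001",
    DataSpec={
        "DataLocationS3": "s3://example-bucket/training-data.csv",
        "DataSchemaLocationS3": "s3://example-bucket/training-data.csv.schema",
    },
    ComputeStatistics=True,  # statistics must be computed before training can start
)

# No need to wait for the datasource to reach COMPLETED: Amazon ML queues the
# createMLModel request and starts training once the statistics are ready.
ml.create_ml_model(
    MLModelId="ml-example-001",
    MLModelType="BINARY",
    TrainingDataSourceId="ds-example-001",
)
```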
https://docs.aws.amazon.com/machine-learning/latest/dg/dependency-management-of-asynchronous-operations.html
2018-01-16T10:07:40
CC-MAIN-2018-05
1516084886397.2
[]
docs.aws.amazon.com
Technology Regions Overview If multiple presentation technologies are used in an application, such as WPF, Win32, or DirectX, they must share the rendering areas within a common top-level window. This topic describes issues that might influence the presentation and input for your WPF interoperation application. Regions Within a top-level window, you can conceptualize that each HWND that comprises one of the technologies of an interoperation application has its own region (also called "airspace"). Each pixel within the window belongs to exactly one HWND, which constitutes the region of that HWND. (Strictly speaking, there is more than one WPF region if there is more than one WPF HWND, but for purposes of this discussion, you can assume there is only one). Region Examples The following illustration shows an application that mixes Win32, DirectX, and WPF. Each technology uses its own separate, non-overlapping set of pixels, and there are no region issues. Suppose that this application uses the mouse pointer position to create an animation that attempts to render over any of these three regions. No matter which technology was responsible for the animation itself, that technology would violate the region of the other two. The following illustration shows an attempt to render a WPF circle over a Win32 region. Another violation occurs if you try to use transparency/alpha blending between different technologies. In the following illustration, the WPF box violates the Win32 and DirectX regions. Because pixels in that WPF box are semi-transparent, they would have to be owned jointly by both DirectX and WPF, which is not possible. So this is another violation and cannot be built. The previous three examples used rectangular regions, but different shapes are possible. For example, a region can have a hole. The following illustration shows a Win32 region with a rectangular hole that is the size of the WPF and DirectX regions combined. Regions can also be completely nonrectangular, or any shape describable by a Win32 HRGN (region). Transparency and Top-Level Windows See Also WPF and Win32 Interoperation Walkthrough: Hosting a WPF Clock in Win32 Hosting Win32 Content in WPF
https://docs.microsoft.com/en-us/dotnet/framework/wpf/advanced/technology-regions-overview
2018-01-16T09:30:25
CC-MAIN-2018-05
1516084886397.2
[array(['media/migrationinteroparchitectarticle01.png', 'MigrationInteropArchitectArticle01 A window that does not have airspace issues'], dtype=object) array(['media/migrationinteroparchitectarticle02.png', 'MigrationInteropArchitectArticle02 Interop diagram'], dtype=object) array(['media/migrationinteroparchitectarticle03.png', 'MigrationInteropArchitectArticle03 Interop diagram'], dtype=object) array(['media/migrationinteroparchitectarticle04.png', 'MigrationInteropArchitectArticle04 Interop diagram'], dtype=object) array(['media/migrationinteroparchitectarticle05.png', 'MigrationInteropArchitectArticle05 Interop diagram'], dtype=object)]
docs.microsoft.com
I was just reading pearls of wisdom from those Norse Gods of retrieval medicine over at Scancrit.com...a nicely laid out blog with snippets and updates of interest to not just the prehospital doctor but anyone who is involved in anaesthesia and EM, whether in the tertiary centre or out in rural Australia. In fact, this crisp blog is perhaps what this blog should be - a useful repository of medical information for use in an emergency at 3am. I reckon I'm quite a long way off target, but time will tell. Anyway, this week's snippet concerned the 'Kepler' robotic intubation system. Now I do think that robots are kind of cool...and it seems I am not alone in this, with the urology surgeons taking up the idea of robotic surgery with enthusiasm. I am still not convinced that a robot is needed to intubate the trachea...but the ScanCrit docs tell me that they will soon be taking over my work on Kangaroo Island and 'tele-tubing' my patients for me from the remoteness of Norway. More like teletubbies I reckon, but that's another story... But it is fair to say that anaesthetists, as a bunch, are 'propellor heads'. They are the most likely to have an interest in gadgets. Maybe it comes from a training programme that seems to delve uncomfortably deep into concepts such as vaporiser design, laminar flow, Hüfner's constant and whether or not you can give halothane intravenously (turns out you can, perhaps). Whatever, walk into any theatre and it will be the anaesthetist who has established a personal wi-fi hotspot so that his/her MacBook, iPhone and iPad can integrate seamlessly. As a rule anaesthetists worship at the altar of all things Apple and are as au fait with the ins-and-outs of Siri as they are with describing multicompartmental pharmacokinetic models of anaesthesia. I managed to wind up one of the FANZCAs in NSW last year by convincing him that Steve Jobs' last bequest to the world before he died was the imminently due iGas workstation, allowing anaesthetists to not only monitor their patient from the tea room using an iPad and wi-fi connection to the iGas anaesthetic machine, but also to have an optional remifentanil module. I swear he came in his pants at the thought. Good thing I didn't mention the ultrasound connectivity... In the spirit of keeping the anaesthetists happy, I have just stumbled across the following range of theatre caps - ideal for the tech-savvy anaesthetist who doesn't mind flaunting his or her knowledge. I wonder if they will catch on? Of course, we all know it's the orthopods who are really the smart ones. The paper in the Christmas BMJ last year ('As strong as an ox and almost twice as clever?') caused some howls of anguish from the gas board who feared loss of the intellectual high ground. It's not for nothing we refer to the blood-brain barrier. Not me. Now whenever the orthopod growls about 'too much/little blood pressure' I have carte blanche to say "Well, you're the clever one - you fix it" and go back to racking up high-scores on Angry Birds. Let your comments rip!
http://ki-docs.blogspot.com/2012/04/are-anaesthetists-propellor-heads.html
2021-09-16T21:15:59
CC-MAIN-2021-39
1631780053759.24
[]
ki-docs.blogspot.com
This guide takes you through the steps for installing software on CentOS or Red Hat. For more information on supported operating system versions, see Product Support Matrix in the Planning Guide. Before you install software, please review and verify the following steps. The installer for the on CentOS/RHEL requires RPM version 4.11.3-40. Please upgrade if necessary. If you have not done so already, you may download the dependency bundle with your release directly from . For more information, see Install Dependencies without Internet Access. Use the following to add the hosted package repository for CentOS/RHEL, which will automatically install the proper packages for your environment. Install via yum, using root. For more information, see Install Hadoop Dependencies. You can deploy the as a desktop client to enable end-users to connect to the without using one of the supported web browsers. See Install Desktop Application.
https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=151994994
2021-09-16T22:17:01
CC-MAIN-2021-39
1631780053759.24
[]
docs.trifacta.com
Installation Stable release To install bleak, run this command in your terminal: $ pip install bleak This is the preferred method to install bleak, as it will always install the most recent stable release. If you don't have pip installed, this Python installation guide can guide you through the process. From sources The sources for bleak can be downloaded from the Github repo. You can either clone the public repository: $ git clone git://github.com/hbldh/bleak $ curl -OL Once you have a copy of the source, you can install it with: $ python setup.py install
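Once installed, a minimal usage sketch like the following (my own example, not part of the installation docs) can confirm that bleak works: it scans for nearby BLE devices with BleakScanner and prints what it finds. The 5-second timeout is an arbitrary choice for illustration.

```python
import asyncio

from bleak import BleakScanner


async def main() -> None:
    # Scan for nearby Bluetooth Low Energy devices for a few seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        print(device.address, device.name)


asyncio.run(main())
```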
https://bleak.readthedocs.io/en/latest/installation.html
2021-09-16T22:53:39
CC-MAIN-2021-39
1631780053759.24
[]
bleak.readthedocs.io
To create a Review filter, please follow these steps. (Please note: You need to enable integration with one of the review apps before creating a Review Filter) Go to your Shopify Admin Store > App > Searchly. Click on Filters and select Manage Filters. Click on Add New Filter. Add Title, select Page & select an option from the dropdown (Product Reviews), then click Save. Run data sync by clicking Data Sync.
https://docs.appikon.com/en/articles/4376583-how-do-i-create-a-review-filter
2021-09-16T20:42:00
CC-MAIN-2021-39
1631780053759.24
[array(['https://downloads.intercomcdn.com/i/o/292163329/b3156fc1d2c7108f58856f11/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/292163707/7739b4948d277b402fd33bff/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/292163943/d558fce062b07b94950ecdeb/image.png', None], dtype=object) ]
docs.appikon.com
Cluster Hardware Balance network and disk transfer rates to meet the anticipated data rates, using multiple NICs per node. It is not necessary to bond or trunk the NICs together. The HPE Ezmeral Data Fabric can take advantage of multiple NICs transparently. Each node should provide raw disks to the data-fabric, with no RAID or logical volume manager, as the data-fabric takes care of formatting and data protection. The following example architecture provides specifications for a recommended standard data-fabric Hadoop compute/storage node for general purposes. This configuration is highly scalable in a typical data center environment. The HPE Ezmeral Data Fabric can make effective use of more drives per node than standard Hadoop, so each node should present enough faceplate area to allow a large number of drives. Standard Compute/Storage Node - data-fabric production clusters must have a minimum of four data nodes except for MapR Edge. A data node is defined as a node running a FileServer process that is responsible for storing data on behalf of the entire cluster. Having additional nodes deployed with control-only services such as CLDB and ZooKeeper is recommended, but they do not count toward the minimum node total because they do not contribute to the overall availability of data. - Erasure coding and rolling updates are not supported for clusters of four nodes or fewer. - Erasure coding is not recommended for five- and six-node clusters. See the Important note in Erasure Coding Scheme for Data Protection and Recovery. - Dedicated control nodes are not needed on clusters with fewer than 10 data nodes. - As the cluster size is reduced, each individual node has a larger proportional impact on cluster performance. As cluster size drops below 10 nodes, especially during times of failure recovery, clusters can begin to exhibit variable performance depending on the workload, network and storage I/O speed, and the amount of data being re-replicated. - For information about fault tolerance, see Priority 1 - Maximize Fault Tolerance and Cluster Design Objectives. To maximize fault tolerance in the design of your cluster, see Example Cluster Designs. Best Practices Hardware recommendations and cluster configuration vary by use case. For example, is the application an HPE Ezmeral Data Fabric Database application? Is the application latency-sensitive? - Disk Drives - Drives should be JBOD, using single-drive RAID0 volumes to take advantage of the controller cache. - SSDs are recommended when using HPE Ezmeral Data Fabric Database JSON with secondary indexes. HDDs can be used with secondary indexes only if the performance requirements are thoroughly understood. Performance can be substantially impaired on HDDs because of high levels of disordered I/O requests. SSDs are not needed for using HPE Ezmeral Data Fabric Event Data Streams. - SAS drives can provide better I/O latency; SSDs provide even lower latency. - Match aggregate drive throughput to network throughput. 20GbE ~= 16-18 HDDs or 5-6 SSDs or 1 NVMe drive. - Cluster Size - In general, it is better to have more nodes. Larger clusters recover faster from disk failures because more nodes are available to contribute. For information about fault tolerance, see Example Cluster Designs. - For smaller clusters, all nodes are likely to fit on a single non-blocking switch. Larger clusters require a well-designed Spine/Leaf fabric that can scale.
- Operating System and Server Configuration - Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, CentOS, and Oracle Enterprise Linux are supported as described in Operating System Support Matrix (Release 6.x). - Install the minimal server configuration. Use a product like Cobbler to PXE boot and install a consistent OS image. - Install the full JDK (1.11). - For best performance, avoid deploying a data-fabric. - HPE Ezmeral Data Fabric Database benefits from lots of RAM: 256GB per node or more. - Filesystem-only nodes can have fewer, faster cores: 6 cores for the first 10GbE of network bandwidth, and an additional 2 cores for each additional 10GbE. For example, dual 25GbE (50GbE) filesystem-only nodes perform best with at least 6+(4*2)=14 cores. - Filesystem-only nodes should have hyperthreading disabled.
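As an illustration of the core-count rule of thumb above (6 cores for the first 10GbE of network bandwidth plus 2 cores for each additional 10GbE), the following small Python sketch, which is not part of the HPE guidance itself, computes the recommended minimum core count for a given aggregate bandwidth.

```python
import math


def recommended_min_cores(network_gbe: float) -> int:
    """Rule of thumb for filesystem-only nodes: 6 cores for the first 10GbE
    of network bandwidth, plus 2 cores for each additional 10GbE."""
    if network_gbe <= 0:
        raise ValueError("network bandwidth must be positive")
    additional_links = max(0, math.ceil(network_gbe / 10) - 1)
    return 6 + 2 * additional_links


# Dual 25GbE (50GbE aggregate) -> 6 + (4 * 2) = 14 cores, matching the example above.
print(recommended_min_cores(50))
```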
https://docs.datafabric.hpe.com/62/AdvancedInstallation/PlanningtheCluster-hardware.html
2021-09-16T21:24:56
CC-MAIN-2021-39
1631780053759.24
[]
docs.datafabric.hpe.com
Working with the call records API in Microsoft Graph Call records provide usage and diagnostic information about the calls and online meetings that occur within your organization when using Microsoft Teams or Skype for Business. You can use the call records APIs to subscribe to call records and look up call records by IDs. The call records API is defined in the OData sub-namespace, microsoft.graph.callRecords. Key resource types Call record structure The callRecord entity represents a single peer-to-peer call or a group call between multiple participants, sometimes referred to as an online meeting. A peer-to-peer call contains a single session between the two participants in the call. Group calls contain one or more session entities. In a group call, each session is between the participant and a service endpoint. Each session contains one or more segment entities. A segment represents a media link between two endpoints. For most calls, only one segment will be present for each session, however sometimes there may be one or more intermediate endpoints. In the diagram above, the numbers denote how many children of each type can be present. For example, a 1..N relationship between a callRecord and a session means one callRecord instance can contain one or more session instances. Similarly, a 1..N relationship between a segment and a media means one segment instance can contain one or more media streams.
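As a rough sketch (not from the Microsoft documentation above), the following Python snippet shows one way to fetch a single call record with its sessions and segments expanded, using the v1.0 /communications/callRecords/{id} endpoint. The access token and call record ID are placeholder assumptions, and acquiring a token with the CallRecords.Read.All application permission is assumed to happen elsewhere.

```python
import requests

ACCESS_TOKEN = "<app-token-with-CallRecords.Read.All>"  # assumption: acquired elsewhere
CALL_RECORD_ID = "<call-record-id>"                     # hypothetical record id

url = f"https://graph.microsoft.com/v1.0/communications/callRecords/{CALL_RECORD_ID}"
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$expand": "sessions($expand=segments)"},
)
response.raise_for_status()
record = response.json()

# Walk the callRecord -> session -> segment hierarchy described above.
for session in record.get("sessions", []):
    for segment in session.get("segments", []):
        print(session["id"], segment["id"])
```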
https://docs.microsoft.com/en-us/graph/api/resources/callrecords-api-overview?view=graph-rest-1.0
2021-09-16T23:13:36
CC-MAIN-2021-39
1631780053759.24
[array(['/en-us/graph/images/callrecords-structure.png', 'Image of a the data structure representing a complete call record'], dtype=object) ]
docs.microsoft.com
Resources to review before you revert Before you revert ONTAP, you should confirm hardware support and review resources to understand issues you might encounter or need to resolve. Review the ONTAP 9 Release Notes for the target release. The "Important cautions" section describes potential issues that you should be aware of before downgrading or reverting. Confirm that your hardware platform is supported in the target release. Confirm that your cluster and management switches are supported in the target release. You must verify that the NX-OS (cluster network switches), IOS (management network switches), and reference configuration file (RCF) software versions are compatible with the version of ONTAP to which you are reverting. If your cluster is configured for SAN, confirm that the SAN configuration is fully supported. All SAN components—including target ONTAP software version, host OS and patches, required Host Utilities software, and adapter drivers and firmware—should be supported.
https://docs.netapp.com/us-en/ontap/revert/task_reviewing_pre_reversion_resources.html
2021-09-16T22:33:15
CC-MAIN-2021-39
1631780053759.24
[]
docs.netapp.com
Grid The grid with all placeholders is available at the Admin > Templates-Master > Easy Banner > Manage Placeholders page. The placeholders shown in the screenshot above were configured to show banners in the most popular places of a Magento store. You can also add a new placeholder from this page. Use the Add Placeholder button in the upper right corner of the page to create a new placeholder. Form
https://docs.swissuplabs.com/m1/extensions/easybanners/backend/manage-placeholders/
2021-09-16T21:01:20
CC-MAIN-2021-39
1631780053759.24
[array(['/images/m1/easy-banners/backend/placeholder/grid.png', 'Placeholders grid'], dtype=object) array(['/images/m1/easy-banners/backend/placeholder/form.png', 'Placeholder form'], dtype=object) ]
docs.swissuplabs.com
An Alpha is a grayscale intensity map. It can be used to represent intensity, masking, and similar things. For example, bump maps and displacement maps (both in ZBrush and in other programs) are alphas; the gray intensity represents the height or depth of the bump or displacement. In ZBrush, alphas are used for much more than just bump or displacement maps. They can affect masking (which parts of a model or painting you work with), brush appearance, how colors or materials are laid down, and the shape of sculpts. And probably a few other things I can't think of right now. In addition, you can make your own alphas, and also turn alphas into other tools, such as Stencils (which are masking tools that offer a different, and powerful, set of capabilities). Below, we describe the most common ways of obtaining and using alphas. We also give links to pages which describe material significantly related to alphas. Using Alphas - Many of the standard drawing tools use alphas to control their shape. This affects the depth of pixols on the canvas. - Alphas may be used with 3D sculpting brushes to affect the geometry of 3D models. - Alphas are the means by which displacement and bump maps are exported from ZBrush. Obtaining Alphas ZBrush comes with a large selection of useful alphas, which can be selected from the Alpha Palette or from the pop-up palette that appears after clicking the large Current Alpha thumbnail. You can of course load your own images for use as alphas using the Load Alpha button in the Alpha palette. Colored images will be converted to grayscale. You may find it more convenient to simply paint a pattern on the screen, and then use the GrabDoc control to convert it into an alpha. The depth of the scene you created will be converted to the alpha (color will be ignored). Since ZBrush supports 16-bit depths, you will get a true 16-bit alpha.
http://docs.pixologic.com/user-guide/3d-modeling/sculpting/sculpting-brushes/alphas/
2021-09-16T21:56:37
CC-MAIN-2021-39
1631780053759.24
[]
docs.pixologic.com
In IJC 5.4 initial scripting support was added. This functionality was experimental and incomplete. Improvements were made in each version since, and further improvements are planned. The current status should still be considered to be somewhat experimental. That said, scripting can and has been used for several purposes. Its main purpose is to allow easy extension of IJC - to make it do things it can't do out of the box, or to make things easier to perform. Since the 15.10.5.0 release the user has to have the ROLE_EDIT_SCRIPT role assigned to be able to create a new script or edit an existing one. This allows the administrator to improve database security. The user roles are described here. Currently scripts can be run against a particular data tree or schema. This means that the data tree or schema is passed into the script context as a variable, and the script can operate on this item using the IJC API. This potentially provides a very powerful way to extend IJC, for example: Performing complex operations not possible through the IJC user interface Automating functions, simple and extensive Accessing ChemAxon's extensive chemical language Performing specialized or batch queries Automating table population The developer's section has a library of example scripts, plugins, and scriptlets. These provide an excellent learning tool, and can be mixed and matched to suit your needs. Additionally, button widgets can be added to a form to run scripts on command. The user's scripting options are significantly improved in version 5.11. Now a script can be executed on the double-click event of several commonly used widgets. The script text can be edited in the code tab and can access table data. The widgets that are supported are: MolMatrix, MolPane, text area, text field and the table widget. An example use case is a custom input dialog with distinct predefined input values used for updating a textual field in the database - it will modify the field to one of its predefined values. Please follow the link to see the sample script with a drop-down menu. Other scripts can be run during schema connection or disconnection (Edit Schema -> Miscellaneous tab). The language for the scripts is Groovy, a dynamic programming language that is closely linked to the Java language that most of IJC is written in. Groovy provides a powerful programming environment, but also a simple syntax that avoids some of the complexity of programming. Look at the documentation on the Groovy web site for more details. For advanced usage you definitely need to understand Groovy, but for simple cases you should just be able to follow the examples described here. We may support additional languages such as Python and JavaScript in future, but this is undecided. Because of its close alignment with the Java language that IJC is written in, Groovy is generally going to be a better choice. To install a script, right click on the schema or data tree node in the Projects window and choose 'New script'. You will be prompted for the name of the new Groovy script, and a very simple example script will open in the editor window. This script will be saved to a 'Scripts' folder under the schema or data tree. Simply delete the demo script and copy in an existing script to use it. If you do, pay close attention to where the script is located (data tree or schema). A script installed to the wrong location will not work.
To edit or run an existing script, locate it in the Scripts folder and double-click it to open it. To run a script, click on the 'Run script' button in the editor toolbar (the first button). Any output from the script will be written to an output window. If you want your script to run as a button in a form, then add a button widget in design mode. One of the button's properties is the script. Edit this, and then in browse mode clicking on the button will execute the script. By default, scripts are user-specific items that can be shared in a similar way to views, lists or queries. When shared, the script can also be used by users other than its owner. The best way to learn how to use or write scripts is by starting with some examples. The developer's section has an extensive array of example scripts, scriptlets and plugins which can be used or tweaked for your own purposes. If you wish to learn more about writing scripts, please visit the developer's section for more detail.
https://docs.chemaxon.com/display/docs/scripting
2021-09-16T20:50:45
CC-MAIN-2021-39
1631780053759.24
[]
docs.chemaxon.com
8.5.100.12 Solution Control Server 8.5.x Release Notes What's New This release includes only resolved issues. Resolved Issues This release contains the following resolved issue: SCS is now more robust when processing malformed messages. SCS now properly rejects messages with a size that does not correspond to the message content. Previously, SCS sometimes terminated when receiving such a message. (MFWK-17307) Upgrade Notes No special procedure is required to upgrade to release 8.5.100.12.
https://docs.genesys.com/Documentation/RN/latest/scs85rn/scs8510012
2021-09-16T22:12:37
CC-MAIN-2021-39
1631780053759.24
[]
docs.genesys.com
Audio effects detection (preview) Audio Effects Detection is one of the Azure Video Analyzer for Media AI capabilities. It can detect various acoustic events and classify them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction, and more). Audio Events Detection can be used in many domains. Two examples are: - Using Audio Effects Detection in the domain of Public Safety & Justice. Audio Effects Detection can detect and classify Gunshots, Explosions, and Glass Shattering. Therefore, it can be implemented in a smart-city system or in other public environments that include cameras and microphones, offering fast and accurate detection of violent incidents. - In the Media & Entertainment domain, companies with a large set of video archives can easily improve their accessibility scenarios by enhancing their video transcription with non-speech effects to provide more context for people who are hard of hearing. Example of the Video Analyzer for Media Audio Effects Detection output Supported audio categories Audio Effect Detection can detect and classify 8 different categories. In the next table, you can find the different categories split into the different VI presets, divided into Standard and Advanced. For more information, see pricing. Result formats The audio effects are retrieved in the insights JSON that includes the category ID, type, name, and a set of instances per category along with their specific timeframe and confidence score. The name parameter will be presented in the language in which the JSON was indexed, while the type will always remain the same. audioEffects: [{ id: 0, type: "Gunshot", name: "Gunshot", instances: [{ confidence: 0.649, adjustedStart: "0:00:13.9", adjustedEnd: "0:00:14.7", start: "0:00:13.9", end: "0:00:14.7" }, { confidence: 0.7706, adjustedStart: "0:01:54.3", adjustedEnd: "0:01:55", start: "0:01:54.3", end: "0:01:55" } ] }, { id: 1, type: "CrowdReactions", name: "Crowd Reactions", instances: [{ confidence: 0.6816, adjustedStart: "0:00:47.9", adjustedEnd: "0:00:52.5", start: "0:00:47.9", end: "0:00:52.5" }, { confidence: 0.7314, adjustedStart: "0:04:57.67", adjustedEnd: "0:05:01.57", start: "0:04:57.67", end: "0:05:01.57" } ] } ], How to index Audio Effects In order to set the index process to include the detection of Audio Effects, the user should choose one of the Advanced presets under the "Video + audio indexing" menu, as can be seen below. Closed Caption When Audio Effects are retrieved in the closed caption files, they will be presented in square brackets with the following structure: Audio Effects in closed caption files will be retrieved with the following logic employed: - The Silence event type will not be added to the closed captions - Maximum duration to show an event is 5 seconds - Minimum timer duration to show an event is 700 milliseconds Adding audio effects in closed caption files Audio effects can be added to the closed captions files supported by Azure Video Analyzer via the Get video captions API by choosing true in the includeAudioEffects parameter, or via the video.ai portal experience by selecting Download -> Closed Captions -> Include Audio Effects. Note When using update transcript from closed caption files or update custom language model from closed caption files, audio effects included in those files will be ignored. Limitations and assumptions - The model works on non-speech segments only. - The model currently works on a single category at a time.
For example, crying with speech in the background, or a gunshot combined with an explosion, are not supported for now. - The model does not currently support cases where there is loud music in the background. - Minimal segment length – 2 seconds.
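To show how the insights JSON above can be consumed, here is a small Python sketch (not part of the Microsoft docs) that walks the audioEffects list and prints each instance above a chosen confidence threshold; the data is a trimmed copy of the sample shown earlier, and the 0.6 threshold is an arbitrary assumption.

```python
# Trimmed copy of the audioEffects sample shown above.
audio_effects = [
    {
        "id": 0,
        "type": "Gunshot",
        "name": "Gunshot",
        "instances": [
            {"confidence": 0.649, "start": "0:00:13.9", "end": "0:00:14.7"},
            {"confidence": 0.7706, "start": "0:01:54.3", "end": "0:01:55"},
        ],
    },
    {
        "id": 1,
        "type": "CrowdReactions",
        "name": "Crowd Reactions",
        "instances": [
            {"confidence": 0.6816, "start": "0:00:47.9", "end": "0:00:52.5"},
            {"confidence": 0.7314, "start": "0:04:57.67", "end": "0:05:01.57"},
        ],
    },
]

CONFIDENCE_THRESHOLD = 0.6  # arbitrary cut-off for this example

for effect in audio_effects:
    for instance in effect["instances"]:
        if instance["confidence"] >= CONFIDENCE_THRESHOLD:
            print(f'{effect["type"]}: {instance["start"]}-{instance["end"]} '
                  f'(confidence {instance["confidence"]:.2f})')
```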
https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/audio-effects-detection
2021-09-16T22:55:15
CC-MAIN-2021-39
1631780053759.24
[array(['media/audio-effects-detection/audio-effects.jpg', 'Audio Effects image'], dtype=object) array(['media/audio-effects-detection/index-audio-effect.png', 'Index Audio Effects image'], dtype=object) array(['media/audio-effects-detection/close-caption.jpg', 'Audio Effects in CC'], dtype=object) ]
docs.microsoft.com
The Configuration preference page is an interface for creating, activating, and managing test configurations locally. It allows you to: - Review and run built-in and user-defined test configurations, as well as configurations served from DTP. - Create, edit, delete, and activate new test configurations. - Manage test configurations stored in the Favorites menu. - Set up a custom location for user-defined test configurations (use absolute paths). - Set up a custom location for rule documentation (use absolute paths). - Set up a location for storing custom rule documentation and rule mapping. To access the test configuration preference page, click Parasoft in the menu bar and choose Preferences (Eclipse), Options (NetBeans) or Settings (IntelliJ). Then select Configuration. Viewing Test Configuration Details The Configurations page allows you to view details of the built-in test configurations. Right-click a test configuration and choose View: A web interface will display the information how the test configuration is configured. Click the tabs to view more details. The built-in test configurations are not editable, but you can create and customize a new test configuration; see Creating Custom Test Configurations.
https://docs.parasoft.com/pages/?pageId=51933728&sortBy=createddate
2021-09-16T21:23:37
CC-MAIN-2021-39
1631780053759.24
[]
docs.parasoft.com
SlimerJS runs on any platform on which Firefox is available: Linux (32-bit and 64-bit), Windows, Mac OS X. On Windows, you should open a terminal. You can use the classical cmd.exe, or the more recent PowerShell.exe. You can also install Cygwin and use its terminal. You cannot use the MingW32 environment on Windows because there are some issues with it (no output in the console, and it lacks some commands, such as mktemp). Note: versions 0.9 and lower of SlimerJS were provided with XulRunner, the Firefox runtime. After the release of XulRunner 40.0, Mozilla ceased to build it and even removed its source code from Firefox's source tree. So you have to install Firefox to use SlimerJS 0.10 and higher. On Linux, if you don't have an X environment, and if you want to install Firefox from binaries provided directly by Mozilla, you need to know that Firefox needs these libraries: libpthread.so, libdl.so, libstdc++.so, libm.so, libgcc_s.so, libc.so, ld-linux-x86-64.so, libXrender1.so, libasound.so.2, libgtk-x11-2.0.so.0, libdbus-glib-1.so.2. This list may not be complete, depending on your distribution and the version of Firefox. On Ubuntu/Debian, you can install/verify them by doing: sudo apt-get install libc6 libstdc++6 libgcc1 libgtk2.0-0 libasound2 libxrender1 libdbus-glib-1-2 If Firefox or SlimerJS does not work, add --debug=true to the SlimerJS command line to see if there are errors about missing libraries. If so, install the packages that provide the missing libraries. Search for the missing files in the Debian repository or the Ubuntu repository. Probably the best thing is to install the package of Firefox provided by your distribution: it will install all dependencies. And then you can install and use binaries from Mozilla. To install SlimerJS, you need to download its package. This is a zip package containing SlimerJS and it targets all operating systems. You have to install Firefox separately (version 40+ is recommended) and you will probably need to set an environment variable. This package can be downloaded from slimerjs.org. Or it can be installed from a repository like the Arch Linux repository or Homebrew. See the download page for the places from which you can retrieve SlimerJS. During launch, SlimerJS tries to discover the path of Firefox by itself. In case it fails, or if you want to launch SlimerJS with a specific version of Firefox, you should create an environment variable containing the path of the Firefox binary. To create this environment variable from a command line: export SLIMERJSLAUNCHER=/usr/bin/firefox SET SLIMERJSLAUNCHER="c:\Program Files\Mozilla Firefox\firefox.exe" export SLIMERJSLAUNCHER="/cygdrive/c/program files/mozilla firefox/firefox.exe" export SLIMERJSLAUNCHER=/Applications/Firefox.app/Contents/MacOS/firefox You can of course set this variable in your .bashrc, .profile or in the computer properties on Windows. By default, SlimerJS is configured to be compatible only with specific stable versions of Firefox. This is because Firefox's internal API can change between versions, and so SlimerJS may not work as expected. Strange behaviors or even fatal errors may appear with unsupported versions. SlimerJS has only been tested with specific versions of Firefox. However, you can change this limitation by modifying the maxVersion parameter (and/or the minVersion) in the application.ini of SlimerJS. But remember that you do it at your own risk.
If you find issues with unsupported versions of Firefox, please discuss them on the mailing list, especially if they involve an unstable version of Firefox. From a command line, call the slimerjs executable (or slimerjs.bat for Windows) with the path of a JavaScript file. /somewhere/slimerjs-1.2.3/slimerjs myscript.js # or if SlimerJS is in your $PATH: slimerjs myscript.js On Windows: c:\somewhere\slimerjs-1.2.3\slimerjs.bat myscript.js The JS script should contain your instructions to manipulate a web page... You can indicate several options on the command line. See the "configuration" chapter. Starting with Firefox 56 (and 55 on Linux), you can add the command-line option --headless, so you don't need a graphical environment, even on Linux. See the Mozilla documentation about it. ./slimerjs --headless myscript.js Instead of using this --headless flag, you can set an environment variable MOZ_HEADLESS to 1. MOZ_HEADLESS=1 ./slimerjs myscript.js If you are using Firefox 54 or lower, the only way to run SlimerJS "headless" is to use xvfb, and this works only on Linux. Xvfb allows you to launch any "graphical" program without the need for an X Window environment. The application's windows won't be shown and will be drawn only in memory. Install it from your preferred repository (sudo apt-get install xvfb on Debian/Ubuntu). Then launch SlimerJS like this: xvfb-run ./slimerjs myscript.js You won't see any windows. If you have any problems with xvfb, see its documentation. Note: xvfb is also available on macOS; however, Firefox for macOS does not use the X11 backend, so this does not work. The possibility to use Flash and other NPAPI plugins depends on the version of Firefox you are using. Firefox 52+ is no longer able to load NPAPI plugins, and future versions may not be able to load Flash. So SlimerJS can load Flash content or other NPAPI plugins if Firefox can. In this case, just install them as indicated by the vendor, and they will theoretically be recognized by SlimerJS. See details on MDN. For example, on Linux, install the corresponding package. Note: plugins are not Firefox/XUL/JS extensions. Plugins and "extensions" are two different things in the Gecko world. Extensions for Firefox are pieces of code that extend some features of Gecko and/or add UI elements to the Firefox interface. Plugins are black boxes that can only be loaded with the HTML element <object>, like Flash, to show non-HTML content inside a web page. See the detailed definition of plugins on MDN. Theoretically, you can create XUL/JS add-ons for SlimerJS as you do for Firefox. It is not easy, but it is possible. See the dedicated chapter. This is no longer possible if you are using Firefox 57+.
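For scripted runs, a small Python wrapper like the one below (my own sketch, not part of the SlimerJS docs) sets SLIMERJSLAUNCHER and MOZ_HEADLESS as described above and then invokes slimerjs; the Firefox path and script name are placeholder assumptions.

```python
import os
import subprocess

env = dict(os.environ)
env["SLIMERJSLAUNCHER"] = "/usr/bin/firefox"  # assumption: adjust to your Firefox path
env["MOZ_HEADLESS"] = "1"                     # headless mode (Firefox 55/56 and later)

# Run a SlimerJS script and fail loudly if it returns a non-zero exit code.
subprocess.run(["slimerjs", "myscript.js"], env=env, check=True)
```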
https://docs.slimerjs.org/current/installation.html
2021-09-16T20:42:20
CC-MAIN-2021-39
1631780053759.24
[]
docs.slimerjs.org
To perform a search with List Search Simple, simply enter search terms in one or more search criteria fields and click the Search button. The example below shows a search for tasks in a task list that have not started or where a task status is in progress AND where the start date is before a selected date. Behind the scenes, the web part is using a CAML query to find results based on the search criteria. See also:
https://docs.bamboosolutions.com/document/simple_searching/
2021-09-16T22:43:10
CC-MAIN-2021-39
1631780053759.24
[]
docs.bamboosolutions.com
Notice This document is for a development version of Ceph. Logging and Debugging Runtime Boot Time Accelerating Log Rotation Valgrind Subsystem, Log and Debug Settings In most cases, you will enable debug logging output via subsystems. Ceph Subsystems In-memory logs are written out only when a fatal signal is raised, when an assert in source code is triggered, or upon request. Please consult the document on the admin socket for more details. Settings Logging and debugging settings are not required in a Ceph configuration file, but you may override default settings as needed. Ceph supports the following settings: log_file The location of the logging file for your cluster. - type str - see also log_to_file, log_to_stderr, err_to_stderr, log_to_syslog, err_to_syslog log_max_recent The purpose of this option is to log at a higher debug level only to the in-memory buffer, and write out the detailed log messages only if there is a crash. Only log entries below the lower log level will be written unconditionally to the log. For example, debug_osd=1/5 will write everything <= 1 to the log unconditionally but keep entries at levels 2-5 in memory. If there is a seg fault or assertion failure, all entries will be dumped to the log. - type int - default 500 log_flush_on_exit Determines if Ceph should flush the log files after exit. - type bool - default false clog_to_monitors Determines if clog messages should be sent to monitors. - type str - default default=true mon_cluster_log_to_syslog Determines if the cluster log should be output to the syslog. - type str - default default=false mon_cluster_log_file The locations of the cluster's log files. There are two channels in Ceph: cluster and audit. This option represents a mapping from channels to log files, where the log entries of that channel are sent to. The default entry is a fallback mapping for channels not explicitly specified. So, the following default setting will send the cluster log to $cluster.log, and send the audit log to $cluster.audit.log, where $cluster will be replaced with the actual cluster name. - type str - default default=/var/log/ceph/$cluster.$channel.log cluster=/var/log/ceph/$cluster.log - see also mon_cluster_log_to_file
https://docs.ceph.com/en/latest/rados/troubleshooting/log-and-debug/
2021-09-16T22:11:27
CC-MAIN-2021-39
1631780053759.24
[]
docs.ceph.com
This exam is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment. The exams consist of 15 to 20 performance-based tasks. Candidates have 3 hours to complete the exam. A dedicated environment will be provisioned for your assessment, which will include a desktop workstation (VM based on Ubuntu 18.04) and nested virtualized infrastructure to support a limited ONAP deployment. The following software and services will also be installed: In addition to the GUI, the following terminal access may be available: KubeCTL Openstack CLI Some keys on international keyboards may not function as expected. Please note the following: To access keys modified by using the ALT key, use the ALT key located on the right side of your keyboard. The ALT or ALT GR key located on the left side of your keyboard will not function as a modifier key. An on-screen virtual keyboard is provided for any special characters that may be needed. To access the on-screen keyboard, double-click the "Virtual Keyboard" icon on the desktop.
https://docs.linuxfoundation.org/tc-docs/certification/important-instructions-cop
2021-09-16T21:22:20
CC-MAIN-2021-39
1631780053759.24
[]
docs.linuxfoundation.org
Collection of elements to configure an algorithm. More... #include <seqan3/core/configuration/configuration.hpp> Collection of elements to configure an algorithm. This class provides a unified interface to create and query such configurations for a specific algorithm. It extends the standard tuple interface with some useful functions to modify and query the user configurations. Constructs a configuration from a single configuration element. Returns a new configuration by appending the given configuration to the current one. This function generates a new configuration object containing the appended configuration elements. The current configuration will not be modified. Returns the stored configuration element if present, otherwise the given alternative. Uses the type alternative_t of the given alternative to check if such a configuration element was already stored inside the configuration. If no suitable candidate can be found, the passed value alternative will be returned. If alternative_t is a class template, then any specialisation of this alternative type will be searched and returned if present. Returns the stored element of type alternative_t, or the alternative if not present. No-throw guarantee. Constant time. Remove a config element of type query_t from the configuration. Combines two configurations and/or configuration elements, lhs and rhs, forming a new seqan3::configuration. The two operands can be either a seqan3::configuration object or a seqan3::detail::config_element. The right hand side operand is then appended to the left hand side operand by creating a new seqan3::configuration object. Neither lhs nor rhs will be modified.
https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1configuration.html
2021-09-16T21:41:42
CC-MAIN-2021-39
1631780053759.24
[]
docs.seqan.de
The Arduino provides a total of 18 pins for either digital input or output (labelled 2 to 13 and A0 to A5), including 6 for analogue input (labelled A0 to A5). The Arduino can be accessed using the arduino property of the Robot object. my_arduino = r.arduino You can use the GPIO (General Purpose Input/Output) pins for anything, from microswitches to LEDs. GPIO is only available on pins 2 to 13 and A0 to A5 because pins 0 and 1 are reserved for communication with the rest of our kit. In the simulator, the Arduino's pins are pre-populated and pre-configured. The first few digital pins are occupied by digital inputs, the next few by digital outputs, and the analogue pins are attached to ultrasound sensors. To find out how many inputs and outputs each type of robot has, check the robot docs. You won't be able to change pin mode like in a physical robot (see below), but pins 0 and 1 are still unavailable. Each robot has a number of digital inputs, starting from pin 2. If your robot has 5 inputs, those would occupy pins 2-6. These all have a digital state which you can read as a boolean. bumper_pressed = r.arduino.pins[5].digital_state The digital outputs start at the pin after the last input. If your robot has 5 inputs and 3 outputs, the outputs would occupy pins 7-9. You can set their state similarly to reading the inputs, and you can also read the last value that was set. led_state = r.arduino.pins[8].digital_state r.arduino.pins[8].digital_state = not led_state # Toggle output Any analogue input devices (e.g. distance sensors) are connected to the Arduino's analogue input pins starting from pin A0. You can read their values like this: distance = r.arduino.pins[AnaloguePin.A0].analogue_value The value read is returned as a float. GPIO pins have four different modes. A pin can only have one mode at a time, and some pins aren't compatible with certain modes. These pin modes are represented by an enum which needs to be imported before they can be used. from sbot import GPIOPinMode The input modes closely resemble those of an Arduino. More information on them can be found in their docs. You will need to ensure that the pin is in the correct pin mode before performing an action with that pin. You can read about the possible pin modes below. r.arduino.pins[3].mode = GPIOPinMode.DIGITAL_INPUT_PULLUP GPIOPinMode.DIGITAL_INPUT In this mode, the digital state of the pin (whether it is high or low) can be read. r.arduino.pins[4].mode = GPIOPinMode.DIGITAL_INPUT pin_value = r.arduino.pins[4].digital_state GPIOPinMode.DIGITAL_INPUT_PULLUP Same as GPIOPinMode.DIGITAL_INPUT, but with an internal pull-up resistor enabled. r.arduino.pins[4].mode = GPIOPinMode.DIGITAL_INPUT_PULLUP pin_value = r.arduino.pins[4].digital_state GPIOPinMode.DIGITAL_OUTPUT In this mode, we can set binary values of 0V or 5V to the pin. r.arduino.pins[4].mode = GPIOPinMode.DIGITAL_OUTPUT r.arduino.pins[6].mode = GPIOPinMode.DIGITAL_OUTPUT r.arduino.pins[4].digital_state = True r.arduino.pins[6].digital_state = False GPIOPinMode.ANALOGUE_INPUT Certain sensors output analogue signals rather than digital ones, and so have to be read differently. The Arduino has six analogue inputs, which are labelled A0 to A5; however pins A4 and A5 are reserved and cannot be used. Analogue signals can have any voltage, while digital signals can only take on one of two voltages. You can read more about digital vs analogue signals here. 
from sbot import AnaloguePin r.arduino.pins[AnaloguePin.A0].mode = GPIOPinMode.ANALOGUE_INPUT pin_value = r.arduino.pins[AnaloguePin.A0].analogue_value The values are the voltages read on the pins, between 0 and 5. Pins A4 and A5 are reserved and cannot be used.
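Putting the pieces above together, here is a minimal polling loop that mirrors a digital input onto a digital output and prints an ultrasound reading. The specific pins (an input on pin 2, an output on pin 7, an ultrasound sensor on A0) are assumptions based on the default simulator layout described above, not values taken from this page; check the robot docs for your robot's actual pin counts.

    import time
    from sbot import Robot, AnaloguePin

    r = Robot()

    while True:
        pressed = r.arduino.pins[2].digital_state                 # read a digital input
        r.arduino.pins[7].digital_state = pressed                 # mirror it onto a digital output
        distance = r.arduino.pins[AnaloguePin.A0].analogue_value  # float between 0 and 5 (volts)
        print("input:", pressed, "ultrasound reading:", distance)
        time.sleep(0.1)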
https://docs.sourcebots.co.uk/api/arduino/
2021-09-16T22:25:57
CC-MAIN-2021-39
1631780053759.24
[]
docs.sourcebots.co.uk
This is a step-by-step rundown of how you can use TotalCloud to deploy a 3-tier application. The AWS services used as part of this workflow include Elastic Compute Cloud (EC2), Auto Scaling Group, Virtual Private Cloud (VPC), the Internet Gateway, Elastic Load Balancer (ELB), and EC2 Security Groups.

Step 1: Set up the Virtual Private Cloud (VPC)
Give your VPC a name and a CIDR block of 10.0.0.0/16.
Action Node: EC2 Create VPC

Step 2: Create 4 Subnets
A subnet is a way for us to group our resources within the VPC with their own IP range. A subnet can be public or private. For our setup, we shall create the following subnets with the corresponding IP ranges.
demo-public-subnet-1 | CIDR (10.0.1.0/24)
demo-public-subnet-2 | CIDR (10.0.2.0/24)
demo-private-subnet-3 | CIDR (10.0.3.0/24)
demo-private-subnet-4 | CIDR (10.0.4.0/24)
Action Nodes: EC2 Create Subnet x4

Step 3: Set up the Internet Gateway
The Internet Gateway allows communication between the EC2 instances in the VPC and the internet.
Action Nodes: EC2 Create Internet Gateway

Step 4: Attach the VPC to the Internet Gateway
Action Nodes: EC2 Attach Internet Gateway

Step 5: Create Two Route Tables
Route tables are sets of rules that dictate the movement of data within the network. For our architecture, we create two route tables, one private and one public. The public route table defines which subnets have direct access to the internet, while the private route table defines which subnets go through the NAT gateway.
Action Nodes: EC2 Create Route Table x2, EC2 Create Route, EC2 Create Associate Route Table x4

Step 6: Create the Elastic Load Balancer
The idea of the load balancer is to allow the distribution of load across the EC2 instances used for the application.
Action Nodes: ELBv2 Create Load Balancer

Step 7: Create and Modify the Target Group
We need to configure our Target Group to have the target type of instance. We will give the Target Group a name that will enable us to identify it; this will be needed when we create our Auto Scaling Group.
Action Nodes: ELBv2 Create Target Group, ELBv2 Modify Target Group Attributes, ELBv2 Create Listener

Step 8: Create the Auto Scaling Group
An Auto Scaling Group can automatically adjust the number of EC2 instances serving the application based on need. This makes it a better approach than attaching the EC2 instances directly to the load balancer.
Action Nodes: AutoScaling Create Launch Configuration, Create Auto Scaling Group

Step 9: Create an RDS DB Instance
Action Nodes: RDS Create DB Subnet, RDS Create DB Instance
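TotalCloud runs each of these steps as a no-code action node, but it can help to see the raw API calls they correspond to. Below is a minimal boto3 sketch of Steps 1-5 (VPC, subnets, internet gateway, and a public route table). The region, credentials, and the remaining steps are assumptions left to the reader; the CIDR blocks follow the guide.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Step 1: create the VPC
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Step 2: create the four subnets (two public, two private)
    subnet_ids = [
        ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr)["Subnet"]["SubnetId"]
        for cidr in ("10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24")
    ]

    # Steps 3-4: create the internet gateway and attach it to the VPC
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Step 5: public route table with a default route through the gateway,
    # associated with the two public subnets
    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    for subnet_id in subnet_ids[:2]:
        ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)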
https://docs.totalcloud.io/workflows/usecase-guides/advanced/create-a-3-tier-application
2021-09-16T22:35:50
CC-MAIN-2021-39
1631780053759.24
[]
docs.totalcloud.io
Fabric-CA Commands

The Hyperledger Fabric CA is a Certificate Authority (CA) for Hyperledger Fabric. The commands available for the fabric-ca-client and fabric-ca-server are described in the links below.

Fabric-CA Client

The fabric-ca-client command allows you to manage identities (including attribute management) and certificates (including renewal and revocation). More information on fabric-ca-client commands can be found here.
https://hyperledger-fabric.readthedocs.io/en/release-1.4/commands/fabric-ca-commands.html
2021-09-16T20:54:35
CC-MAIN-2021-39
1631780053759.24
[]
hyperledger-fabric.readthedocs.io
When managing a small team, it is vital that a manager has all the tools required to do capacity or workload planning. This calendar-based template allows resource planning for the team by using a matrix of resources, dates and tasks.
https://www.itil-docs.com/products/team-capacity-planner-excel
2021-09-16T22:29:24
CC-MAIN-2021-39
1631780053759.24
[]
www.itil-docs.com
Date: Fri, 6 Jul 2007 07:53:12 -0700 (PDT) From: "Denis R." <[email protected]> To: "Zbigniew Szalbot" <[email protected]> Cc: [email protected] Subject: re: parental control with squid and dansguardian Message-ID: <[email protected]> >>Now, if someone just changes the port in their browser to 3128 (squid proxy port), then all content filtering will be bypassed. I have the same setup at home for my kids. Check the /etc/ipnat.conf file to redirect all web traffic to your FreeBSD_gateway_IP_address:8080 (assuming your FreeBSD box acts as a firewall/squid/gateway). Regards, Den
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1466391+0+archive/2007/freebsd-questions/20070708.freebsd-questions
2021-09-17T02:03:55
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
Date: Mon, 5 Feb 1996 19:09:44 -0500 (EST) From: Brian Tao <[email protected]> To: Marco Masotti <[email protected]> Cc: Jerry Kendall <[email protected]>, "Paul T. Root" <[email protected]>, [email protected] Subject: Re: IP Masquerading Message-ID: <[email protected]> In-Reply-To: <[email protected]> On Mon, 5 Feb 1996, Marco Masotti wrote: > > As far as I know, IP masquerading is something unique, and not available in > any commercial Unix or not-Unix OS. It should be available in dedicated firewall products though. -- Brian Tao (BT300, [email protected]) Systems Administrator, Internex Online Inc. "Though this be madness, yet there is method in't"
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=185706+0+archive/1996/freebsd-questions/19960204.freebsd-questions
2021-09-17T02:07:45
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
Date: Tue, 1 Oct 1996 17:25:38 -0500 (CDT) From: SysAdmin <[email protected]> To: Nick Liu <[email protected]> Cc: [email protected] Subject: Re: Next thing: Web Site Message-ID: <[email protected]> In-Reply-To: <[email protected]> On Tue, 1 Oct 1996, Nick Liu wrote: > Now that I have subscribed to an ISP that assigns dynamic IP > addresses, I want to set up my Apache and run it. You really need a static IP to do this. Otherwise, people will have to know what IP you are on when they want to look at your server. > > My puzzle is: do I need to talk to my ISP about routing traffic to my > machine? I know I need to configure my machine accordingly but there must > be something missing... Again, a static IP would solve this problem. > > I still cannot ping my address from the outside world. It feels to me like my > address is not recognized or something. You should be able to ping your machine from the outside world by pinging your IP, not your machine name. In order to ping your machine name, you would need a static IP and it would have to be set up in your ISP's DNS. Mike ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~:
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=332667+0+archive/1996/freebsd-questions/19960929.freebsd-questions
2021-09-17T00:39:18
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
This is also a free integrated app. We want to integrate this application into our themes in order to give our customers the best user experience; in the world of eCommerce, your store's look is a big part of that experience. We consider this a required app if you want your website to be optimized in terms of content, images and page loading speed. In addition, when you use this app, your products will be easier for users to find. If you want to know more about what SEO is and how SEO affects your store, you can refer to this article: AVADA SEO AN EXCELLENT SEO OPTIMIZATION APP - OUR NEW INTEGRATION
https://docs.layouthub.com/user-guides/recommend-apps/avada-seo-optimization
2021-09-17T00:04:05
CC-MAIN-2021-39
1631780053918.46
[]
docs.layouthub.com
ID Token encryption

ID Token encryption gives the ability to provide confidentiality of the claims within the ID Token.

Enabling encryption and configuring it
Onegini Access must be configured to enable ID Token encryption. The encryption method and JWKS endpoint must be configured to use encryption. You can read more about configuring a web client with encryption support.

Choose an encryption method
Following the OpenID Connect standards, several different encryption methods for the CEK (Content Encryption Key) are supported. The CEK is automatically generated based on the method chosen, and thus its length will vary. Please refer to the Discovery API to determine which encryption methods are supported. These will need to be configured on the web client and also used in your client application when decrypting the ID Token.

Provide a JWKS endpoint for encryption
Onegini Access supports a remote key set for encrypting the generated CEK. As the relying party, you must share public keys via this endpoint so the Token Server can retrieve them in order to encrypt the CEK. Refer to the documentation on OpenID Connect Encryption for help with implementation. The endpoint served from your application should return a list of JWKs (JSON Web Keys); refer to the RFC specification for details on implementation. A proper max-age directive should be included with the Cache-Control header as part of the response. If none is provided, the TTL defaults to the value of the REDIS_DEFAULT_JWKS_URI_RESPONSE_TTL_SECONDS environment variable. Please be sure to consider key rotation as described in the spec. Onegini Access supports a few different asymmetric algorithms; please refer to the Discovery API for information on the exact algorithms that are supported.

Recommended Library
We recommend the Nimbus JOSE+JWT library to help with decryption in your application. It contains constants for the encryption methods and algorithms that are supported by our Onegini Access implementation and simplifies the code necessary for decrypting the JWE and verifying the signed JWT.
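To illustrate the "Provide a JWKS endpoint for encryption" requirement above, here is a minimal relying-party sketch that publishes an RSA public key as a JWK set with a Cache-Control max-age directive. Flask and the jwcrypto library are assumptions chosen for the example; they are not mandated by Onegini Access, and key generation, storage, and rotation are heavily simplified here.

    import json
    from flask import Flask, jsonify
    from jwcrypto import jwk

    app = Flask(__name__)

    # Sketch only: in practice the key pair is generated and stored securely
    # and rotated per the OpenID Connect guidance referenced above.
    key = jwk.JWK.generate(kty="RSA", size=2048)

    @app.route("/.well-known/jwks.json")
    def jwks():
        # Publish only the public half of the key, marked for encryption use.
        public_key = json.loads(key.export_public())
        public_key["use"] = "enc"
        response = jsonify({"keys": [public_key]})
        # Tell the Token Server how long it may cache this response.
        response.headers["Cache-Control"] = "max-age=3600"
        return response

    if __name__ == "__main__":
        app.run(port=8000)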
https://docs.onegini.com/products/access/topics/oidc/id-token-encryption/id-token-encryption/
2021-09-17T01:53:14
CC-MAIN-2021-39
1631780053918.46
[]
docs.onegini.com
Media Lengua Facts - Language: Media Lengua - Alternate names: - Language code: mue - Language family: Mixed Language, Spanish-Quechua - Number of speakers: 200 - Script: More information: Introduction Media Lengua, also known as Chaupi-shimi, Chaupi-lengua, Chaupi-Quichua, Quichuañol, Chapu-shimi or llanga-shimi, is a mixed language with Spanish vocabulary and Kichwa grammar, most conspicuously in its morphology. In terms of vocabulary, almost all lexemes (89%), including core vocabulary, are of Spanish origin and appear to conform to Kichwa phonotactics. Media Lengua is one of the few widely acknowledged examples of a "bilingual mixed language" in both the conventional and narrow linguistic sense because of its split between roots and suffixes. Such extreme and systematic borrowing is only rarely attested, and Media Lengua is not typically described as a variety of either Kichwa or Spanish. Arends et al., list two languages subsumed under the name Media Lengua: Salcedo Media Lengua and Media Lengua of Saraguro. The northern variety of Media Lengua, found in the province of Imbabura, is commonly referred to as Imbabura Media Lengua and more specifically, the dialect varieties within the province are known as Pijal Media Lengua and Angla Media Lengua. The Media Lengua Verb The Media Lengua verb receives regular suffixes, as in Quechua. Personal Suffixes - Sg.1 -ni - Sg.2 -ngi - Sg.3 -n - Pl.1 -nchi - Pl.2 -ngichi - Pl.3 -naku-n Tense-Aspect-Mood markers - -(r)ka-: past - -shka-: past, mirative - -xu-: progressive - -xu-(r)ka-: progressive, past - -y: imperative - -chun: hortative - -sha: future, sg.1 - -nga: future, sg.3
https://docs.verbix.com/Languages/MediaLengua
2021-09-17T00:40:41
CC-MAIN-2021-39
1631780053918.46
[]
docs.verbix.com
If you no longer want to manage the DNS records for a domain on DigitalOcean, you can delete the domain. This removes the domain and its DNS records from the account. To delete a domain, log in to the control panel and click Networking in the main menu to go to the Domains tab. Open the More menu of the domain you want to delete, then click Delete. In the confirmation window, click Delete Domain to permanently delete the domain and its records from the account.
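This page covers the control panel; if you prefer to script the same operation, the sketch below uses the public DigitalOcean API v2 delete-domain endpoint. The endpoint path, the token environment variable, and the example domain name are assumptions based on the standard API rather than anything described on this page.

    import os
    import requests

    # Assumptions: a personal access token in the environment and a placeholder domain.
    token = os.environ["DIGITALOCEAN_TOKEN"]
    domain = "example.com"  # the domain whose records you no longer want to manage

    resp = requests.delete(
        f"https://api.digitalocean.com/v2/domains/{domain}",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()  # a 204 No Content response means the domain was deleted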
https://docs.digitalocean.com/products/networking/dns/how-to/delete-domains/
2021-09-17T00:43:08
CC-MAIN-2021-39
1631780053918.46
[]
docs.digitalocean.com
Start and cancel backup

Overview
The inSync Client automatically backs up data from your laptop at regular intervals. You can manually start a backup at any time or cancel a backup in progress.

Cancel a backup in progress
To cancel a backup when the backup is in progress:
- Start the inSync client.
- Select the Backup & Restore tab, and click the cancel icon.
- Click Yes on the prompt window.
https://docs.druva.com/005_inSync_Client/5.7/003_Backup_and_Restore/Back_up_data_and_monitor_inSync_client/010_Start_and_cancel_backup
2021-09-17T01:20:44
CC-MAIN-2021-39
1631780053918.46
[array(['https://docs.druva.com/@api/deki/files/19847/Tray_backup_5.6.png?revision=1&size=bestfit&width=258&height=250', 'Tray_backup_5.6.png'], dtype=object) ]
docs.druva.com
Databricks Data Science & Engineering concepts

Note: The CLI feature is unavailable on Databricks on Google Cloud as of this release.

This article introduces the set of fundamental concepts you need to understand in order to use the Databricks Workspace effectively.

Workspace
A workspace is an environment for accessing all of your Databricks assets. A workspace organizes objects (notebooks, libraries, dashboards, and experiments) into folders and provides access to data objects and computational resources. This section describes the objects contained in the Databricks workspace folders.
- Notebook: a web-based interface to documents that contain runnable commands, visualizations, and narrative text.
- Dashboard: an interface that provides organized access to visualizations.
- Library: a package of code available to the notebook or job running on your cluster. Databricks runtimes include many libraries, and you can add your own.
- Repo: a folder whose contents are co-versioned together by syncing them to a remote Git repository.
- Experiment: a collection of MLflow runs for training a machine learning model.

Interface
This section describes the interfaces that Databricks supports for accessing your assets: UI and API.
- UI: the Databricks UI provides an easy-to-use graphical interface to workspace folders and their contained objects, data objects, and computational resources.
- API: there are two versions of the REST API, REST API 2.0 and REST API 1.2. The REST API 2.0 supports most of the functionality of the REST API 1.2 as well as additional functionality, and is preferred.

Data management
This section describes the objects that hold the data on which you perform analytics and that feed into machine learning algorithms.
- Databricks File System (DBFS): a filesystem abstraction layer over a blob store. It contains directories, which can contain files (data files, libraries, and images) and other directories. DBFS is automatically populated with some datasets that you can use to learn Databricks.
- Database: a collection of information that is organized so that it can be easily accessed, managed, and updated.
- Table: a representation of structured data. You query tables with Apache Spark SQL and Apache Spark APIs.
- Metastore: the component that stores all the structure information of the various tables and partitions in the data warehouse, including column and column type information, the serializers and deserializers necessary to read and write data, and the corresponding files where the data is stored. Every Databricks deployment has a central Hive metastore accessible by all clusters to persist table metadata. You also have the option to use an existing external Hive metastore.

Computation management
This section describes concepts that you need to know to run computations in Databricks.
- Cluster: a set of computation resources and configurations on which you run notebooks and jobs. There are two types of clusters: all-purpose clusters and job clusters.
- Pool: a set of idle, ready-to-use instances that reduce cluster start and auto-scaling times. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster's request, the pool expands by allocating new instances from the instance provider. When an attached cluster is terminated, the instances it used are returned to the pool and can be reused by a different cluster.
- Databricks runtime: the set of core components that run on the clusters managed by Databricks. Databricks offers several types of runtimes:
  - Databricks Runtime includes Apache Spark but also adds a number of components and updates that substantially improve the usability, performance, and security of big data analytics.
  - Databricks Runtime for Machine Learning is built on Databricks Runtime and provides a ready-to-go environment for machine learning and data science. It contains multiple popular libraries, including TensorFlow, Keras, PyTorch, and XGBoost.
  - Databricks Runtime for Genomics is a version of Databricks Runtime optimized for working with genomic and biomedical data.
- Job: a non-interactive mechanism for running a notebook or library either immediately or on a scheduled basis.

Workload
Databricks identifies two types of workloads, subject to different pricing schemes: data engineering (job) and data analytics (all-purpose).
- Data engineering: an (automated) workload runs on a job cluster, which the Databricks job scheduler creates for each workload.
- Data analytics: an (interactive) workload runs on an all-purpose cluster. Interactive workloads typically run commands within a Databricks notebook. However, running a job on an existing all-purpose cluster is also treated as an interactive workload.

Execution context
The state for a REPL environment for each supported programming language. The supported languages are Python, R, Scala, and SQL.

Machine learning
This section describes concepts related to machine learning in Databricks.
- Experiment: the main unit of organization for tracking machine learning model development. Experiments organize, display, and control access to individual logged runs of model training code.
- Feature Store: a centralized repository of features. Databricks Feature Store enables feature sharing and discovery across your organization and also ensures that the same feature computation code is used for model training and inference.
- Model: a trained machine learning or deep learning model that has been registered in Model Registry.
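Because REST API 2.0 is the preferred programmatic interface, the sketch below shows one way to call it from Python to list the clusters in a workspace. The workspace URL and token are placeholders, and the use of the requests library and the /api/2.0/clusters/list endpoint are assumptions based on the public REST API 2.0 rather than anything defined on this page.

    import requests

    # Placeholders: substitute your workspace URL and a personal access token.
    DATABRICKS_HOST = "https://<your-workspace>.gcp.databricks.com"
    TOKEN = "<personal-access-token>"

    resp = requests.get(
        f"{DATABRICKS_HOST}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

    # Print the name and state of each cluster (all-purpose and job clusters alike).
    for cluster in resp.json().get("clusters", []):
        print(cluster["cluster_name"], cluster["state"])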
https://docs.gcp.databricks.com/getting-started/concepts.html
2021-09-17T01:44:22
CC-MAIN-2021-39
1631780053918.46
[]
docs.gcp.databricks.com
Once you have successfully linked your luckycloud account to the Sync client, you can start uploading and synchronizing data. You can easily drag and drop entire folder structures into the cloud and synchronize them. As soon as the cloud icon in the client turns green, the upload or synchronization process is complete. Any changes you make to the synchronized folders from now on will be automatically transferred to the cloud. Please remember that the Sync client must be open in the background for the synchronization process. Besides the Sync client, there are five other ways to load your data into the cloud. Due to its functionality and security, it is recommended to use the Sync client. You can find more information about the Sync client here. Continue to step 4: Synchronizing libraries
https://docs.luckycloud.de/en/first-steps/daten-hochladen
2021-09-17T01:45:39
CC-MAIN-2021-39
1631780053918.46
[]
docs.luckycloud.de
Image Tag
Last updated on May 29, 2018

You can implement the cookie-based opt-out mechanism through an image tag. Typically, this is the simplest implementation. For example, if you use the following opt-out image tag:

    <img src="" />

OpenX then performs the previously described steps and displays a success or failure image to indicate the result of the opt-out. By default, if the opt-out is successful, that is, if OpenX is able to set the user opt-out cookie, then OpenX redirects and loads the image referenced in the following URL: Alternatively, if the opt-out is not successful, that is, if OpenX is unable to set the user opt-out cookie (perhaps because the user does not accept cookies), then OpenX redirects and loads the image referenced in the following URL:

(Optional) Add parameters to the opt-out tag if you want OpenX to redirect to particular success or failure images.
- Add the s= parameter with the URL of the image to load if the opt-out is successful.
- Add the f= parameter with the URL of the image to load if the opt-out is not successful.

For example, with these parameters your image tag may look like the following:

    <img src=" s= f=" />

When using more complex inputs to the s= and f= parameters, the values should be properly escaped or encoded.
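As a sketch of the escaping requirement in the last sentence above, the snippet below builds an opt-out tag whose s= and f= image URLs are percent-encoded. The base opt-out URL is a placeholder (the real one comes from your OpenX delivery domain), and the success and failure image URLs are hypothetical.

    from urllib.parse import urlencode

    # Placeholder base URL; substitute the opt-out URL for your OpenX delivery domain.
    OPTOUT_BASE = "https://delivery.example.com/w/1.0/optout"

    params = {
        "s": "https://www.example.com/images/optout-success.png",  # shown on success
        "f": "https://www.example.com/images/optout-failure.png",  # shown on failure
    }

    # urlencode percent-escapes the image URLs so they can be embedded safely in the tag.
    img_tag = '<img src="{}?{}" />'.format(OPTOUT_BASE, urlencode(params))
    print(img_tag)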
https://devint-docs.openx.com/resources/opt-out-image/
2021-09-17T00:06:14
CC-MAIN-2021-39
1631780053918.46
[]
devint-docs.openx.com