Manage Backups
- Edit a Backup’s Settings - Modify a backup’s schedule, storage engines, and namespaces filter.
- Stop, Restart, or Terminate a Backup - Stop, restart, or terminate a deployment’s backups.
- View a Backup’s Snapshots - View a deployment’s available snapshots.
- Change Snapshot Expiration - Change when individual snapshots expire.
- Delete a Snapshot - Manually remove unneeded stored snapshots from Ops Manager.
- Resync a Backup - If your Backup oplog has fallen too far behind your deployment to catch up, you must resync the backup.
- Disable the Backup Service - Disable Ops Manager Backup.
https://docs.opsmanager.mongodb.com/v4.2/tutorial/nav/backup-use-operations/
2021-09-17T03:20:27
Analytics helps us collect data about errors and possible slowdowns, and helps us identify areas that should be fixed or improved. The following information is sent to ScandiPWA analytics:
- Time it took to install CSA
- Any failures that occurred during installation
- Any failures that occurred during development
- Any failure that occurred while using the ScandiPWA development toolkit
- Extensions installation and creation via CLI
- Theme files override via CLI
- Bundle size information after a theme build
- Time it took to install CMA for the first time
- Usual start-up time
- Any failures that occurred during installation
- Any failures that occurred during development
- CPU model, RAM amount, operating system
Of course, you have the option to disable analytics data collection. To disable it, create (or extend the existing) system configuration file named .cmarc in your home directory and set the analytics flag to false as shown below:
{"analytics": false}
For more information, you can visit the documentation.
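If you prefer to script this, the following is a small illustrative sketch (not part of the ScandiPWA toolkit) that creates or extends ~/.cmarc and sets the analytics flag to false, assuming the file is plain JSON as in the example above:

import json
from pathlib import Path

# Create or extend ~/.cmarc and disable analytics.
# Assumes .cmarc is plain JSON, as in the {"analytics": false} example above.
rc_path = Path.home() / ".cmarc"

config = {}
if rc_path.exists():
    text = rc_path.read_text().strip()
    if text:
        config = json.loads(text)

config["analytics"] = False
rc_path.write_text(json.dumps(config, indent=2) + "\n")
print("Wrote", rc_path, "->", config)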
https://docs.scandipwa.com/about/data-analytics
2021-09-17T03:04:35
Makes an asynchronous HTTP or HTTPS request to a URL. This function returns a handle that can be passed to network.cancel() in order to cancel the request.

You cannot execute a network request while the application is suspending or exiting. If a request must be made at such a point, save it and, once the application is running again, check whether there is a network request saved and, if so, execute it.

The Content-Type of requests defaults to text/plain. If you're POST-ing form data, you must set it appropriately.

Cookies may not be handled in the same way on all devices. For example, some Android devices will require app code to handle certain web login schemes correctly, especially if they use redirects.

network.request( url, method, listener [, params] )

method
String. The HTTP method; acceptable values are "GET" (default), "POST", "HEAD", "PUT", and "DELETE".

listener
Listener. The listener function invoked at various phases of the HTTP operation. This is passed a networkRequest event. The listener function can receive events of the following phases:
- "began" - The first notification, contains the estimated size, if known.
- "progress" - An intermediate progress notification.
- "ended" - The final notification, when the request is finished.

By default, the listener will only receive "ended" events. If params.progress (see below) is "upload" or "download", then the listener will also receive "began" and "progress" events.

If the response body is directed to a file by using params.response and the response was successfully written to the file, event.response will contain a table indicating the filename and baseDirectory for the output file. If the request completes but produces an error response, then any error response body will be provided as a string in event.response instead. This behavior prevents the destination file from being written/overwritten with an error response instead of the desired payload.

params
Table. A table that specifies HTTP request or response processing options, including custom request headers or body. The following keys are supported:
- headers - Table specifying request header values with string keys.
- body - String containing the request body, or alternatively, a table containing the filename and optionally the baseDirectory for a file whose contents are to be used as the request body.
- bodyType - String indicating whether a string request body is "text" or "binary". Default is "text" if params.body is a string, or "binary" if it's a table specifying a file.
- progress - String value indicating the type of progress notifications desired, if any. May be "upload" or "download". The notification phases include the "began" and "progress" phase events for the desired direction of progress. Default is nil, indicating that only the "ended" phase event is desired.
- response - Table value indicating that the response body should be written to a file, specifying the filename and optionally the baseDirectory for the response file. If this value is not provided, the response body is provided as a string.
- timeout - Timeout in seconds. Default is 30 seconds.
- handleRedirects - A boolean indicating whether automatic redirect handling (the default) is desired. Set this to false if you want to receive 302 responses and handle them yourself. This may be needed for certain kinds of login schemes or custom cookie handling.

Note that if a filename table is specified in params.body or in params.response, baseDirectory is an optional Constant that defaults to system.DocumentsDirectory. In the case of params.response, baseDirectory cannot be set to system.ResourceDirectory, since that directory is read-only.

-- The following code demonstrates sending data via HTTP POST,
-- specifying custom request headers and request body.
local function networkListener( event )
    if ( event.isError ) then
        print( "Network error: ", event.response )
    else
        print( "RESPONSE: " .. event.response )
    end
end

local headers = {}
headers["Content-Type"] = "application/x-www-form-urlencoded"
headers["Accept-Language"] = "en-US"

local body = "color=red&size=small"

local params = {}
params.headers = headers
params.body = body

network.request( "", "POST", networkListener, params )

-- The following code demonstrates how to download a file, with progress updates.
local function networkListener( event )
    if ( event.isError ) then
        print( "Network error: ", event.response )
    elseif ( event.phase == "began" ) then
        if ( event.bytesEstimated <= 0 ) then
            print( "Download starting, size unknown" )
        else
            print( "Download starting, estimated size: " .. event.bytesEstimated )
        end
    elseif ( event.phase == "progress" ) then
        if ( event.bytesEstimated <= 0 ) then
            print( "Download progress: " .. event.bytesTransferred )
        else
            print( "Download progress: " .. event.bytesTransferred .. " of estimated: " .. event.bytesEstimated )
        end
    elseif ( event.phase == "ended" ) then
        print( "Download complete, total bytes transferred: " .. event.bytesTransferred )
    end
end

local params = {}
-- Tell network.request() that we want the "began" and "progress" events:
params.progress = "download"
-- Tell network.request() that we want the output to go to a file:
params.response = {
    filename = "corona.jpg",
    baseDirectory = system.DocumentsDirectory
}

network.request( "", "GET", networkListener, params )

-- The following code demonstrates how to upload a file via HTTP POST,
-- using the contents of a file as the request body.
local function networkListener( event )
    if ( event.isError ) then
        print( "Network error: ", event.response )
    elseif ( event.phase == "ended" ) then
        print( "Upload complete!" )
    end
end

local headers = {}
headers["Content-Type"] = "application/json"
headers["X-API-Key"] = "13b6ac91a2"

local params = {}
params.headers = headers
-- Tell network.request() to get the request body from a file:
params.body = {
    filename = "object.json",
    baseDirectory = system.DocumentsDirectory
}

network.request( "", "POST", networkListener, params )
https://docs.coronalabs.com/api/library/network/request.html
2017-01-16T15:03:50
A function that accepts x and y components of a linear force, applied at a given point with x and y world coordinates. If the target point is the body's center of mass, it will tend to push the body in a straight line; if the target is offset from the body's center of mass, the body will spin about its center of mass. For symmetrical objects, the center of mass and the center of the object will have the same position (object.x and object.y). Note that the amount of force required to move heavy objects may need to be fairly high.

object:applyForce( xForce, yForce, bodyX, bodyY )

-- Create a rectangle
local myRect = display.newRect( 0, 0, 100, 100 )

-- Add a body to the rectangle
physics.addBody( myRect, "dynamic" )

-- Apply force
myRect:applyForce( 50, 200, myRect.x, myRect.y )
https://docs.coronalabs.com/api/type/Body/applyForce.html
2017-01-16T15:00:18
Sets the current lineweight, sets the lineweight units, controls the display and display scale of lineweights, and sets the DEFAULT lineweight value for layers. For a table of valid lineweights, see Overview of Lineweights in the User's Guide.

Displays the available lineweight values. Lineweight values consist of standard settings including BYLAYER, BYBLOCK, and DEFAULT.

Displays the current lineweight. To set the current lineweight, select a lineweight from the lineweight list and choose OK.

Specifies whether lineweights are displayed in millimeters or inches. You can also set Units for Listing by using the LWUNITS system variable.

Controls whether lineweights are displayed in the current drawing. If this option is selected, lineweights are displayed in model space and paper space. You can also set Display Lineweight by using the LWDISPLAY system variable. Regeneration time increases with lineweights that are represented by more than one pixel. Clear Display Lineweight if performance slows down when working with lineweights turned on in a drawing. This option does not affect how objects are plotted.

Controls the DEFAULT lineweight for layers. The initial DEFAULT lineweight is 0.01 inches or 0.25 mm. (LWDEFAULT system variable)

Controls the display scale of lineweights on the Model tab. On the Model tab, lineweights are displayed in pixels. Lineweights are displayed using a pixel width in proportion to the real-world unit value at which they plot. If you are using a high-resolution monitor, you can adjust the lineweight display scale to better display different lineweight widths. The Lineweight list reflects the current display scale. Objects with lineweights that are displayed with a width of more than one pixel may increase regeneration time. If you want to optimize performance when working in the Model tab, set the lineweight display scale to the minimum value or turn off lineweight display altogether.
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-4a3a.htm
2017-01-16T15:05:47
134.13(9)(b)3. 3. Estimated bills, if the utility made a reasonable effort to obtain access to the customer's meter, but was unable to gain access. Reasonable effort to gain access means that the utility notified the customer after three consecutive estimated readings that the utility will read the meter at other than standard business hours at the customer's request. PSC 134.13(9)(b)4. 4. Receipt of lump sum payment made from an outside source such as the Low Income Home Energy Assistance Program or other like programs. PSC 134.13(9)(c) (c) The rate of interest to be paid shall be calculated in the same manner as provided for in s. PSC 134.061 (9) (b) . Interest shall be paid from the date when the customer overpayment was made until the date when the overpayment is refunded. Interest shall be calculated on the net amount overpaid in each calendar year. PSC 134.13 134.13 History History: 1-2-56 ; r. and recr. Register, February, 1959, No. 38, eff. 3-1-59; am. (6), Register, January, 1965, No. 109 , eff. 2-1-65; r. and recr. (1), Register, August, 1976, No. 248 , eff. 9-1-76; am. Register, March, 1979, No. 279 , eff. 4-1-79; am. (1) and (5), Register, October, 1980, No. 298 , eff. 11-1-80; am. (6), Register, November, 1980, No. 299 , eff. 12-1-80; renum. (1) (d) to be (1) (f) and am. (intro)., cr. (1) (d), (e) and (g) and am. (6) (f), Register, September, 1981, No. 309 , eff. 10-1-81; r. and recr. Register, October, 1989, No. 406 , eff. 11-1-89; correction in (9) (c) made under s. 13.93 (2m) (b) 7., Stats., Register, September, 1997, No. 501 ; CR 06-046 : am. (1) (a) (intro.), renum. (1) (a) 1. to 15. and (b) to (j) to be (1) (b) 1. to 15. and (c) to (k), cr. (1) (a) 1. to 7. and (b) (intro.) Register April 2007 No. 616 , eff. 5-1-07; CR 13-048 : r. (7) Register July 2014 No. 703 , eff. 8-1-14. PSC 134.14 PSC 134.14 Adjustment of bills. PSC 134.14(1) (1) Whenever a meter is found to have a weighted average error of more than 2% fast as tested in the manner specified in s. PSC 134.28 , a recalculation of bills for service shall be made for the period of inaccuracy assuming an inaccuracy equal to the weighted average error. Weighted average error refers to 80% of the open rate plus 20% of the check rate. The recalculation shall be made on the basis that the service meter should be 100% accurate. PSC 134.14(2) (2) If the period of inaccuracy cannot be determined, it shall be assumed that the full amount of inaccuracy existed during the last half of the period since the previous test was made on the meter; however, the period of accuracy shall not exceed one-half the required test period. PSC 134.14 Note Note: If the meter test period is 15 years and the meter had been in service for 16 years, the period of accuracy shall be 7 ½ years, and the period of inaccuracy shall be 8 ½ years. PSC 134.14(3) (3) If the average gas bill of a customer does not exceed $10 per month over the refund period, the monthly consumption on which the refund is calculated may be averaged. PSC 134.14(4) (4) Any refund shall be a credit to the customer's current bill. If the amount of the credit is greater than the current bill, the amount in excess of the current bill shall, at the discretion of the customer, be made in cash or as credit on future bills. If a refund is due a person no longer a customer of the utility, a notice shall be mailed to the last known address, and the utility shall upon request made within 3 months thereafter refund the amount due.
PSC 134.14(5) (5) PSC 134.14(5)(a) (a) Whenever a meter with a rated capacity of 400 cubic feet per hour (CFH) or more is found to have a weighted average error of more than 2% slow, the utility shall bill the customer for the amount the test indicates has been undercharged for the period of inaccuracy, which period shall not exceed the last 2 years the meter was in service unless otherwise ordered by the commission after investigation. No back billing for an inaccurate meter will be sanctioned for the following: PSC 134.14(5)(a)1. 1. The customer has called to the company's attention his or her doubts as to the meter's accuracy and the company has failed within a reasonable time to check it. PSC 134.14(5)(a)2. 2. The rated capacity of the meter is 399 cubic feet per hour (CFH) or less. PSC 134.14(5)(a)3. 3. The amount of the backbill is less than $50. PSC 134.14(5)(b) (b) Backbilling shall be required for any size meter for any of the following circumstances. PSC 134.14(5)(b)1. 1. The meter did not register. PSC 134.14(5)(b)2. 2. An incorrect correction factor or meter constant was applied. PSC 134.14(5)(b)3. 3. The meter or service were tampered with. PSC 134.14(5)(b)4. 4. An incorrect index or gear ratio was applied. PSC 134.14(5)(b)5. 5. Meters were switched between customers. PSC 134.14(5)(b)6. 6. Rates were misapplied. PSC 134.14(6) (6) A classified record shall be kept of the number and amount of refunds and charges made because of inaccurate meters, misapplication of rates, and erroneous billing. A summary of the record for the previous calendar year shall be submitted to the commission by April 1 of each year. PSC 134.14 History History: Cr. Register, 1 -2-56; r. and recr. Register, February, 1959, No. 38 , eff. 3-1-59; am. (1), (2) and (4), renum. (5) to be (5) (a) and am., cr. (5) (b), Register, November, 1989, No. 407 , eff. 12-1-89. PSC 134.15 PSC 134.15 Employees authorized to enter customers' premises. The utility shall keep a record of employees authorized pursuant to s. 196.171 , Stats., to enter customers' premises. PSC 134.15 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.16 PSC 134.16 Maps and diagrams. Each utility shall have maps, records, diagrams, and drawings showing the location of its property, in sufficient detail so that the adequacy of service to existing customers may be checked and facilities located. PSC 134.16 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.17 PSC 134.17 Complaints. Each utility shall investigate and keep a record of complaints received by it from its customers in regard to safety, service, or rates, and the operation of its system. The record shall show the name and address of the complainant, the date and nature of the complaint, and its disposition and the date thereof. A summary of this record for the previous calendar year shall be sent to the commission by April 1 of each year. Each utility also shall document all contacts and actions relative to deferred payment arrangements and disputes. PSC 134.17 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59; am. Register, March, 1979, No. 279 , eff. 4-1-79. PSC 134.18 PSC 134.18 Record of interruption of service. PSC 134.18(1) (1) Each utility shall keep a record of all interruptions to service affecting an entire distribution system of any urban area or an important division of a community. 
The record shall show the date and time of interruption, the cause, the approximate number of customers affected, and the date and time of restoring service. PSC 134.18(2) (2) Each utility shall keep a record of all failures and notifications of difficulty with transmitted gas supply affecting each gate station. The record shall show the date and time of failure or notification, the date and time of resumption of normal supply, the operation of standby equipment including amount of gas produced, the number of customers whose service was interrupted and the maximum and minimum gas supply pressure during the period of difficulty. PSC 134.18(3) (3) A summary of records required by subs. (1) and (2) shall be sent to the public service commission by April 1 of each year. PSC 134.18(4) (4) Each interruption of service which affects more than 100 customers shall be reported by mail, telephone, or telegraph to the commission within 48 hours following the discovery of the interruption. PSC 134.18(5) (5) Any interruption of a principal gas supply shall be immediately reported to the commission by telephone or telegraph by the utility or utilities affected. PSC 134.18 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.19 PSC 134.19 Meter records and reports. PSC 134.19(1) (1) Whenever a gas meter is tested, such record shall be kept until that meter is tested again. This record shall indicate the information that is necessary for identifying the meter, the reason for making the test, the reading of the meter before it was removed from service, the accuracy of measurement, and all the data that were taken at the time of the test. This record must be sufficiently complete to permit convenient checking of the methods and calculations that have been employed. PSC 134.19(2) (2) Another record shall be kept which indicates when the meter was purchased, its size, its identification, its various places of installation, with dates of installation and removal, the dates and results of all tests, and the dates and details of all repairs. The record shall be arranged in such a way that the record for any meter can be readily located. PSC 134.19(3) (3) All utilities shall keep an "as found" high and light load test summary of all meters tested after being in service. This summary shall be made on a calendar year basis and forwarded to this commission by April 1 of the following year. This summary shall be divided according to the length of time since the last test, and meters found within each of the following per cent accuracy classifications: PSC 134.19(3)(a) (a) Over 115; 110.1-115; 105.1-110; 103.1-105; 102.1-103; 101.1-102; 100.1-101; 100; 99-99.9; 98-98.9; 97-97.9; 95-96.9; 90-94.9; 85-89.9; under 85; passing gas does not register; does not pass gas; not tested; grand total average % error of fast meters; average % error of slow meters; total average error; number tested, number in service. PSC 134.19 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.20 PSC 134.20 Preservation of records. The following records shall be preserved and kept available for inspection by the commission for the periods indicated. The list is not to be taken as comprehending all types of utility records. - See PDF for table PSC 134.20 Note Note: See Federal Power Commission Orders 54 and 156 for preservation of records. Public Service Commission's Classification of Accounts, and s. 18.01 , Stats. 
* Where machine billing is used and meter readings recorded on tabulating cards, the register sheets may be considered the "meter reading sheets" and the "billing records." "Meter reading sheets"and "billing records" or the "register sheets" shall be kept 6 years or until they are no longer needed to adjust bills. This means that the records must be kept 6 years or from the date of one meter test to the next, whichever is longer. PSC 134.20 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.21 PSC 134.21 Heating values and specific gravity. PSC 134.21(1) (1) Each utility which is furnishing gas service shall have on file with this commission for each municipality served the heating value, specific gravity, and composition of each type of gas regularly supplied and also for the gas which may be used for standby purposes and the range of values for peak shaving. The heating value filed shall be the total heating value with the indication whether it is on a wet or dry basis. (See definitions in s. PSC 134.02 .) PSC 134.21(2) (2) All gases whether the regular gas supply, a mixture of gases or a substitute gas used for peak shaving purposes shall operate properly in normal gas utilization equipment. Where used for emergency or standby, the gas shall operate reasonably well in such equipment. (The customer requiring gas of a particular chemical composition shall make such arrangements as may be required to protect against damage by reason of change in composition.) PSC 134.21(3) (3) The monthly average heating value of the gases as delivered to the customers in any service area shall not be less than the heating value standard on file with this commission and the heating value at any time at constant specific gravity shall not be more than 5% above or 4% below this standard. At constant heating value, the specific gravity of the gas shall not vary more than 10% from the standards filed with the commission. If the heating value is varied by a greater amount than specified, the specific gravity shall be varied in such a way that the gas will operate satisfactorily in the customer's utilization equipment. Customers using processes that may be affected by a change in the chemical composition of the gas shall be notified of changes. Agreements with such customers shall specify the allowable variation in composition. PSC 134.21(4) (4) For required periodic heating value tests see s. PSC 134.25 . The specific gravity of the gas shall be determined at least once each month when there is no change in the type or sources of gas and when there is a change in the type of gas. Whenever emergency or peak shaving plants are ran or when mixed gases are used, daily determinations of specific gravity shall be made. PSC 134.21 History History: Cr. Register, February, 1959 . No. 38, eff. 3-1-59. PSC 134.22 PSC 134.22 Purity of gas. PSC 134.22(1) (1) In no case shall gas contain more than 30 grains of sulphur per 100 standard cubic feet, 5 grains of ammonia per 100 standard cubic feet, nor more than 0.1 grain of hydrogen sulphide per 100 standard cubic feet. (Exception. If the gas is not to be placed in pipe or bottle type holders the hydrogen sulphide content may be 0.3 grains per 100 standard cubic feet.) 
PSC 134.22(2) (2) Utilities supplying gas containing coal or water gas shall make quantitative determinations of total sulphur at least once every 6 months and qualitative hydrogen sulphide tests at intervals of 1 hour to 2 weeks depending upon the probability of this impurity being found. PSC 134.22(3) (3) Utilities supplying liquefied petroleum gas, or liquefied petroleum air mixtures, or natural gas shall test the gas periodically for impurities or periodically obtain data concerning impurities from sources they believe the commission can accept as reliable. PSC 134.22 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.23 PSC 134.23 Pressure variation. PSC 134.23(1) (1) Every utility supplying gas shall file with the commission a standard service pressure by service areas. The service pressure shall be of such a value that the maximum pressure at any outlet as specified below shall not be greater than 12 inches of water column except for customers utilizing high-pressure service. PSC 134.23(2) (2) For customers receiving standard service pressure, the gas pressure at the outlet of the utility's service meters shall meet the following requirements: PSC 134.23(2)(a) (a) At no outlet in the service area shall it ever be greater than one and one-fourth of the standard service pressure nor greater than 12 inches of water nor ever be less than one-half of the standard service pressure nor less than 4 inches of water. PSC 134.23(2)(b) (b) At any single outlet it shall never be greater than twice the actual minimum at the same outlet. PSC 134.23(2)(c) (c) At any one outlet the normal variation of pressure shall not be greater than the following: - See PDF for table PSC 134.23(3) (3) For customers utilizing gas at high pressure, a service pressure shall be agreed upon by the utility and the customer, and the maximum pressure variation shall not exceed 15% of the agreed pressure unless the commission shall authorize a greater variation. PSC 134.23(4) (4) No utility shall furnish gas to any customer at pressures higher than its filed standard service pressure until it has filed with the commission acceptable service rules governing high-pressure service to customers desiring to utilize gas at pressures higher than standard service pressure. Such service rules shall provide that the utility will make high-pressure service available to its customers upon request whenever high pressure gas is available at the customer's premises or may be made available in accordance with the utility's filed extension rules, and when such high pressure is required for proper operation of the customer's present or proposed utilization equipment. PSC 134.23 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59. PSC 134.25 PSC 134.25 General use of calorimeter equipment. PSC 134.25(1) (1) Unless specifically directed otherwise a calorimeter shall be maintained at each gas producing or mixing plant whether the plant is in continuous operation or used only for standby or peak shaving purposes. The calorimeter shall be used to check the operation of the plant and shall measure the heating value of the gas going to the gas lines. PSC 134.25(2) (2) Unless specifically directed otherwise calorimeters shall be maintained in operation in locations where the heating value of the gas can be measured from each different supplier. 
PSC 134.25(3) (3) Unless specifically directed otherwise a calorimeter shall be maintained and used to measure the heating value of the gas actually sold to customers in those cases where mixed gases are used. PSC 134.25(4) (4) Tests of heating value of the gas shall be made daily whenever gas is supplied at the calorimeter location unless specifically directed otherwise by the commission. The original records of the tests shall be dated, labeled and kept on file for 6 years. A copy of the daily average heating value of gas sold to customers shall be sent to the commission each calendar month. PSC 134.25(5) (5) The calorimeter equipment shall be maintained so as to give results within + or - 1%. Recording calorimeters used to test or control the production or mixing of gas or measure the heating value of purchased gas when therm rates are not applicable shall be tested with a gas of known heating value at least 3 times a year or when the accuracy is in question. Recording calorimeters used only with standby or peak shaving production plants shall be tested with a gas of known heating value at least 2 times a year. Non-recording calorimeter equipment such as the Junkers shall be tested with a gas of known heating value at least once a year or tested against another calorimeter of known accuracy at least once a year. PSC 134.25 History History: Cr. Register, February, 1959, No. 38 , eff. 3-1-59; am. (5), Register, January, 1965, No. 109 , eff. 2-1-65. PSC 134.251 PSC 134.251 Use of recording calorimeter for therm billing. PSC 134.251(1) (1) In the application of gas rates based on the therm, a recording calorimeter shall be used to determine the heating value of the gas being distributed to utility customers. These calorimeters will be located as set forth in s. PSC 134.25 (2) and (3) . They shall have such accuracy characteristics as to be able to measure the heating value of the gas to within + or -2 B.t.u., shall be able to reproduce these readings to within +or - 2 B.t.u., and shall be able to hold their accuracy over an extended period of time. The instruments shall be installed in accordance with the manufacturer's recommendations. PSC 134.251(2) (2) Each utility selling gas shall file with the commission a complete installation report stating the following information: location of calorimeter, kind of gas tested, type of scale, uniform or split scale range, date installed, publication number of manufacturer's applicable book of instructions, outline of the building, the location of the calorimeter or calorimeters within the building, the size, length, gas pressure, and general route of the gas sample pipe from the supply main to each calorimeter and location of all secondary equipment necessary for the operation of the recording calorimeter. PSC 134.251(3) (3) PSC 134.251(3)(a) (a) Each utility selling gas shall keep a chronological record of dates and results of tests and operations performed on the calorimeter to test and maintain accuracy. PSC 134.251(3)(b) (b) Twice every month the following tests shall be made: PSC 134.251(3)(b)1. 1. Two days of each month shall be selected for the performance of an "as found" accuracy test, mechanical tests, adjustments, and an "as left" accuracy test of each recording calorimeter, and thereafter the specified accuracy tests, adjustments, and maintenance work shall be performed on the same days of each month insofar as practicable. PSC 134.251(3)(b)2. 2. 
In making the accuracy tests on the calorimeter, the utility shall use reference natural gas which has been certified by the Institute of Gas Technology before cleaning parts or making any adjustments to either the tank unit or the recorder mechanism. The change from line gas to the certified gas should be made so as to have a continuous chart recording. The inlet pressure used should be the same for both calibration and subsequent operation. PSC 134.251(3)(b)3. 3. If the "as found" accuracy test is within + or -3 B.t.u., no adjustment will be required and the instrument may be returned to service. If the "as found" accuracy test is not within + or - 3 B.t.u., maintenance shall be performed to restore the accuracy of the instrument. PSC 134.251(3)(b)4. 4. In order that adequate information concerning each cylinder of natural gas which is to be used for the semi-monthly check tests be available at all times, the following information shall be entered on a form or in a log book provided for the purpose and also on a label or tag securely attached to each cylinder in which the gas is stored: PSC 134.251(3)(b)4.a. a. Institute of Gas Technology Cylinder Number. PSC 134.251(3)(b)4.b. b. Institute of Gas Technology Certificate Number. PSC 134.251(3)(b)4.c. c. Date cylinder was certified. PSC 134.251(3)(b)4.d. d. Date cylinder was received by the utility. PSC 134.251(3)(b)4.e. e. Heating value certified by Institute of Gas Technology.
http://docs.legis.wisconsin.gov/code/admin_code/psc/134/17
2014-08-20T10:44:01
Introduction

Java 5 and above supports the use of annotations to include metadata within programs. Groovy 1.1 and above also supports such annotations. Annotations are used to provide information to tools and libraries. They allow a declarative style of providing metadata information and allow it to be stored directly in the source code. Such information would need to otherwise be provided using non-declarative means or using external files. We won't discuss guidelines here for when it is appropriate to use annotations, just give you a quick run down of annotations in Groovy.

Annotations are defined much like Java class files but use the @interface keyword. As an example, here is how you could define a FeatureRequest annotation in Java: This annotation represents the kind of information you may have in an issue tracking tool. You could use this annotation in a Groovy file as follows: Now if you had tools or libraries which understood this annotation, you could process this source file (or the resulting compiled class file) and perform operations based on this metadata.

As well as defining your own annotations, there are many existing tools, libraries and frameworks that make use of annotations. See some of the examples referred to at the end of this page. As just one example, here is how you could use annotations with Hibernate or JPA:

Example

As another example, consider this XStream example. XStream is a library for serializing Java (and Groovy) objects to XML (and back again if you want). Here is an example of how you could use it without annotations: This results in the following output: Just as an aside, not related to annotations, here is how you could write the XML to a file: And how you would read it back in: Now, on to the annotations ... XStream also allows you to have more control over the produced XML (in case you don't like its defaults). This can be done through API calls or with annotations. Here is how we can annotate our Groovy class with XStream annotations to alter the resulting XML: When run, this produces the following output:

Differences to Java

Annotations may contain lists. When using such annotations with Groovy, remember to use the square bracket list notation supported by Groovy rather than the braces used by Java, i.e.: Would become:

More Examples

Annotations are also used in examples contained within the following pages:
http://docs.codehaus.org/display/GROOVY/Annotations+with+Groovy?showComments=true%20showCommentArea=true
2014-08-20T11:03:09
Introduction

Intune is a cloud-based service that focuses on mobile device management (MDM) and mobile application management (MAM), which enables the following:
- To be 100% cloud with Intune, or to be co-managed with Configuration Manager and Intune;
- To set rules and configure settings on personal and organization-owned devices to access data and networks;
- To deploy and authenticate apps on devices (both on-premises and mobile);
- To control the way users access and share information;
- To stay compliant with company security requirements.

Integration

Integrating Intune with Apptimized saves time and enables a user to upload, update, and manage the ready-made packages without the need to leave Apptimized. Initial integration with Apptimized requires a one-time configuration of settings in the Microsoft Azure portal and the Apptimized portal, namely:
- Application registration in the Microsoft Azure portal;
- Assigning permissions to a user to work with Microsoft Intune from the Microsoft Azure portal;
- Integration of the application from the Microsoft Azure portal into the Apptimized portal.
https://docs.apptimized.com/books/apptimized-platform-admin-manual/page/introduction/export/html
2022-08-08T00:54:46
Scoring factors are behaviours which are important to consider for each Action, and they reflect the user scoring, e.g. wrong collision, action time, max movement velocity etc.

How to add scoring factors to your Action

Right click on Unreal's content browser, on the folder you want to save your analytics asset. From the MAGES submenu select Analytics Asset in order to create an analytics configuration. Each Action has its own Analytics configuration. In order to specify which action this asset is referring to, you need to reference it from the respective action node in the Scenegraph Blueprint.

This window contains the scoring factors. To enable a new scoring factor click the corresponding checkbox. To save your changes click the Save button.

current Action, making it count as multiple actions.

Importance: This value identifies the weight of each scoring factor.
- VeryLittle: 15%
- Little: 30%
- Neutral: 50%
- Big: 80%
- VeryBig: 100%

If our Action has only a Little scoring factor then its maximum score will be 30/100. If we configure a Neutral and a VeryLittle scoring factor within the same Action, the maximum score of the Action will be 65/100. (A small arithmetic sketch of this scoring appears at the end of this section.)

Note: The score is capped at 100. If our scoring factors overpass 100, e.g. three Neutral scoring factors, it will be capped at 100, allowing the user to have 50 "bonus" points.

Error Type: We support three different types of errors with different popup UIs for each case:
- Warning
- Error
- CriticalError

Error Message: In the error message input field you can type the message that will be shown to the user, in case the user performs this error.

Show UI: Boolean value to toggle the error message. In case of false, the error will be logged but not shown to the user.

The user must complete the Action in fewer seconds than the Completion Time. Passing this time-limit results in points loss (10 points per second).

Example: In this example we give the user 25 seconds to complete the Action. Here is the analytics editor for this Action: We set the Completion Time to 25 seconds. We also set the Importance to Neutral, meaning that this scoring factor will give 50/100 points to the user. If this is the only scoring factor the user can achieve a highest score of 50/100.

Error Colliders

This scoring factor refers to the usage of overlapping colliders in order to define invalid events the user can perform in the simulation. The collider behaviour field defines when an error should be triggered. The available options are:
1. Avoid Objects
2. Stay in Collider
3. Must hit objects

Avoid Objects Behaviour:
1. The first error collider actor contains two error trigger components: one for the femur and one for the tibia. You can see them below:
- We select the actor containing the two error triggers in the ErrorColliderActor input field. This reference will spawn the actor.
- We select the ScalpelToolGrabbable from the corresponding Interactable Actors fields.
- We set the Error Type as an Error.
- We type the Error Message to our custom message.
- ShowUI is enabled in order to show the error UI to the user.
2. The second error collider actor is an error trigger for the floor.
- We select the actor containing the floor error trigger in the ErrorColliderActor input field. This reference will spawn the actor.
- We set the Error Type as a Warning.
- We type in the Error Message our custom message.
- ShowUI is enabled.
- We add all the available items from the corresponding Interactable Actors fields.

Stay Error Colliders
Usage: Track if an object is not in contact with a collider.
Example: In this example we set an error collider to track if the user holds the sponza while cleaning it with the cloth. If the hand exits the trigger box, the user will lose points. You can see the error box here on sponza: In addition, a non-visible static mesh actor is spawned as a child of the user hands. This actor is spawned through the action blueprint and is not automatically spawned from the analytics editor. Here is the analytics editor for this action:
- We select the colliders behaviour to Stay while Interacting.
- We select the collider representing the area that the user needs to place his hand on (on top of sponza) into Error Collider Actor. This will spawn the safe area collider.
- We select the Cloth (Interactable_UseAction) from the Trigger Interactable dropdown.
- We set the Error Type as an Error.
- We set the Importance to Neutral. This factor is valued 50/100.
- We type in the Error Message our custom message.
- ShowUI is enabled.

Hit Perform Colliders
Note: Currently under development.

select the question blueprint in the corresponding object field next to the Importance. Since Spawns Error is enabled, we need to set the type of error and the message that will be shown to the user. We type the Error Message. We set the Type of Error to Error from the dropdown field.

- We set the Importance to Big; in this way the Action will get a perfect score of 80/100.
- We add the Velocity Interactable Actor that will be observed in the Velocity Actor field.
- We set the Velocity Threshold value to 60. If the velocity of the object overpasses 60 the user will lose 80 points.

A general guideline for velocity thresholds is:
- 30: The user must move his hand extremely slowly.
- 40: The user must move his hand slower than the average speed.
- 60: The user must not do rapid movements with his hand while holding this object.

Since Spawns Error is enabled, we need to set the type of error and the message that will be shown to the user. We type the Error Message. We set the Type of Error to Error from the dropdown field.

Note: For custom scoring factors we don't need the analytics editor. We will implement the behaviour using Blueprints.

First we create a new blueprint class that inherits from ScoringFactor. The ForceScoringFactor blueprint implements our example custom scoring factor. The ScoringFactor class contains virtual functions and events for you to override in your custom scoring factors.

The Initialize event is called to set up your custom scoring factor. Everything you need to spawn or configure should be implemented in this function. In this case we add an event on the Actor Hit listener of the back part actor. On each hit the Actor Hit function is called, which determines if the applied force results in an error. In the same way you can implement your own logic to gather information about the user's performance.

The Perform function of the custom scoring factor needs to be overridden. It is called along with the Action's Perform(). The purpose of this method is to calculate and return the score of the user. In this example, it calculates the score with data retrieved from the Actor Hit function. Make sure the score is in the range [0,100].

GetReadableData manages the data from the custom scoring factor that will be saved at the end of the Action in human-readable form.
A new ScoringFactorData struct needs to be created, which contains:
- Score: The user's score.
- Out Of: In case the scoring factor contains a number of possible values (e.g. maximum velocity, maximum time etc.), this variable reflects this amount.
- Type: The scoring factor's description name.
- Score Specific: The specific metric of this scoring factor. E.g. in time it is the seconds the user needed to complete the action.
- Error Message: The error message to spawn when triggering this error.
- Error Type: The type of error (warning, normal, critical error).

The final step is to link this scoring factor with our Actions script. Below you can see the Blueprints that are responsible for adding a custom scoring factor in the current Action.

Warning: The Add Custom Scoring Factor Blueprint connects our custom scoring factor with this Action and the Analytics Manager. It needs to be called in the Initialization State of the current Action. The Sub Action argument is used in Combined Actions to specify the sub-action which will be added. In all other type of actions this field should be 0. This is the proper way to configure a custom scoring factor.
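As referenced earlier, the maximum score an Action can award is the sum of its scoring factors' Importance weights, capped at 100. The following short Python sketch (illustrative only, not part of the MAGES SDK) reproduces that arithmetic using the weight table listed above:

# Importance weights as listed in the Analytics editor (percent of a full score).
IMPORTANCE_WEIGHTS = {
    "VeryLittle": 15,
    "Little": 30,
    "Neutral": 50,
    "Big": 80,
    "VeryBig": 100,
}

def max_action_score(importances):
    # Maximum score an Action can award: sum of its factors' weights, capped at 100.
    return min(sum(IMPORTANCE_WEIGHTS[i] for i in importances), 100)

print(max_action_score(["Little"]))                         # 30
print(max_action_score(["Neutral", "VeryLittle"]))          # 65
print(max_action_score(["Neutral", "Neutral", "Neutral"]))  # 100 (capped; 50 "bonus" points)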
https://docs.oramavr.com/en/4.0.2/unreal/tutorials/action_analytics/index.html
2022-08-08T01:53:45
Application development

Most applications that run on a Substrate blockchain require some form of front-end or user-facing interface—such as a browser, desktop, mobile, or hardware client—that enables users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based application for interactive gaming or a hardware-specific application to implement a hardware wallet. Different libraries exist to build these types of applications, depending on your needs.

This article explains the process of querying a Substrate node and using the metadata it exposes to help you understand how you can use the metadata when creating front-end client applications and using client-specific libraries.

Metadata system

Substrate nodes provide an RPC call, state_getMetadata, that returns a complete description of all the types in the current runtime. Client applications use the metadata to interact with the node, to parse responses, and to format message payloads sent to the node. This metadata includes information about a pallet's storage items, transactions, events, errors, and constants. The current metadata version (V14) differs significantly from its predecessors as it contains much richer type information. If a runtime includes a pallet with a custom type, the type information is included as part of the metadata returned. Polkadot uses V14 metadata starting from runtime spec version 9110 at block number 7229126 and Kusama from runtime spec version 9111, at block number 9625129. This is useful to know for developers who intend to interact with runtimes that use older metadata versions. Refer to this document for a migration guide from V13 to V14.

The current metadata schema uses the scale-info crate to get type information for the pallets in the runtime when you compile a node. The current implementation of the metadata requires front-end APIs to use the SCALE codec library to encode and decode RPC payloads to send and receive transactions. The following steps summarize how metadata is generated, exposed, and used to make and receive calls from the runtime:
- Callable pallet functions, as well as types, parameters and documentation, are exposed by the runtime.
- The frame-metadata crate describes the structure in which the information about how to communicate with the runtime will be provided. The information takes the form of a type registry provided by scale-info, as well as information about things like which pallets exist (and what the relevant types in the registry are for each pallet).
- The scale-info crate is used to annotate types across the runtime, and makes it possible to build a registry of runtime types. This type information is detailed enough that we can use it to find out how to correctly SCALE encode or decode some value for a given type.
- The structure described in frame-metadata is populated with information from the runtime, and this is then SCALE encoded and made available via the state_getMetadata RPC call.
- Custom RPC APIs use the metadata interface and provide methods to make calls into the runtime. A SCALE codec library is required to encode and decode calls and data to and from the API.

Every Substrate chain stores the version number of the metadata system they are using, which makes it useful for applications to know how to handle the metadata exposed by a certain block. As previously mentioned, the latest metadata version (V14) provides a major enhancement to the metadata that a chain is able to generate.
But what if an application wants to interact with blocks that were created with an earlier version than V14? Well, it would require setting up a front-end interface that follows the older metadata system, whereby custom types would need to be identified and manually included as part of the front-end's code. Learn how to use the desub tool to accomplish this if needed.

Type information bundled in the metadata gives applications the ability to communicate with nodes across different chains, each of which may expose different calls, events, types and storage. It also allows libraries to generate almost all of the code needed to communicate with a given Substrate node, giving the possibility for libraries like subxt to generate front-end interfaces that are specific to a target chain. With this system, any runtime can be queried for its available runtime calls, types and parameters. The metadata also exposes how a type is expected to be decoded, making it easier for an external application to retrieve and process this information.

Metadata format

Querying the state_getMetadata RPC function will return a vector of SCALE-encoded bytes which is decoded using the frame-metadata and parity-scale-codec libraries. The hex blob returned by the state_getMetadata RPC depends on the metadata version; however, it will generally have the following structure:
- a hard-coded magic number, 0x6d657461, which represents "meta" in plain text.
- a 32 bit integer representing the version of the metadata format in use, for example 14 or 0x0e in hex.
- hex encoded type and metadata information. In V14, this part would contain a registry of type information (generated by the scale-info crate). In previous versions, this part contained the number of pallets followed by the metadata each pallet exposes.

Here is a condensed version of decoded metadata for a runtime using the V14 metadata system (generated using subxt):

[
  1635018093,          // the magic number
  {
    "V14": {           // the metadata version
      "types": {       // type information
        "types": []
      },
      "pallets": [     // metadata exposed by pallets
      ],
      "extrinsic": {   // the format of an extrinsic and its signed extensions
        "ty": 111,
        "version": 4,  // the transaction version used to encode and decode an extrinsic
        "signed_extensions": []
      },
      "ty": 125        // the type ID for the system pallet
    }
  }
]

As described above, the integer 1635018093 is a "magic number" that represents "meta" in plain text. The rest of the metadata has two sections: pallets and extrinsic. The pallets section contains information about the runtime's pallets, while the extrinsic section describes the version of extrinsics that the runtime is using. Different extrinsic versions may have different formats, especially when considering signed transactions.

Pallets

Here is a condensed example of a single element in the pallets array:

{
  "name": "System",  // name of the pallet, the System pallet for example
  "storage": {       // storage entries
  },
  "calls": [         // index for this pallet's call types
  ],
  "event": [         // index for this pallet's event types
  ],
  "constants": [     // pallet constants
  ],
  "error": [         // index for this pallet's error types
  ],
  "index": 0         // the index of the pallet in the runtime
}

Every element contains the name of the pallet that it represents, as well as a storage object, calls array, event array, and error array. If calls or events are empty, they will be represented as null and if constants or errors are empty, they will be represented as an empty array.
Type indices for each item are just u32 integers used to access the type information for that item. For example, the type ID for the calls in the System pallet is 145. Querying the type ID will give you information about the available calls of the system pallet including the documentation for each call.

For each field, you can access type information and metadata for:
- Storage metadata: provides blockchain clients with the information that is required to query the storage RPC to get information for a specific storage item.
- Call metadata: includes information about the runtime calls defined by the #[pallet] macro including call names, arguments and documentation.
- Event metadata: provides the metadata generated by the #[pallet::event] macro, including the name, arguments and documentation for a pallet's events.
- Constants metadata: provides metadata generated by the #[pallet::constant] macro, including the name, type and hex encoded value of the constant.
- Error metadata: provides metadata generated by the #[pallet::error] macro, including the name and documentation for each error type in that pallet.

Note that the IDs used aren't stable over time: they will likely change from one version jump to the next, meaning that developers should avoid relying on fixed type IDs to future proof their applications.

Extrinsic

Extrinsic metadata is generated by the runtime and provides useful information on how a transaction is formatted. The returned decoded metadata contains the transaction version and signed extensions, which looks like this:

"extrinsic": {
  "ty": 111,
  "version": 4,
  "signed_extensions": [
    { "identifier": "CheckSpecVersion", "ty": 117, "additional_signed": 4 },
    { "identifier": "CheckTxVersion", "ty": 118, "additional_signed": 4 },
    { "identifier": "CheckGenesis", "ty": 119, "additional_signed": 9 },
    { "identifier": "CheckMortality", "ty": 120, "additional_signed": 9 },
    { "identifier": "CheckNonce", "ty": 122, "additional_signed": 34 },
    { "identifier": "CheckWeight", "ty": 123, "additional_signed": 34 },
    { "identifier": "ChargeTransactionPayment", "ty": 124, "additional_signed": 34 }
  ]
}

The type system is composite, which means that each type ID contains a reference to some type or to another type ID that gives access to the associated primitive types. For example, one type we can encode is a BitVec<Order, Store> type: to decode it properly we need to know what the Order and Store types used were, which can be accessed using the "path" in the decoded JSON for that type ID.

RPC APIs

Substrate comes with the following APIs to interact with a node:
- AuthorApi: An API to make calls into a full node, including authoring extrinsics and verifying session keys.
- ChainApi: An API to retrieve block header and finality information.
- OffchainApi: An API for making RPC calls for offchain workers.
- StateApi: An API to query information about on-chain state such as runtime version, storage items and proofs.
- SystemApi: An API to retrieve information about network state, such as connected peers and node roles.

Connecting to a node

Querying a Substrate node can either be done by using a Hypertext Transfer Protocol (HTTP) or WebSocket (WS) based JSON-RPC client. The main advantage of WS (used in most applications) is that a single connection can be reused for many messages to and from a node, whereas a typical HTTP connection allows only for a single message from, and then response to, the client at a time.
For this reason, if you want to subscribe to some RPC endpoint that could lead to multiple messages being returned to the client, you must use a WebSocket connection and not an HTTP one. Connecting via HTTP is commonly used for fetching data in offchain workers; learn more about that in Offchain operations. An alternative (and still experimental) way to connect to a Substrate node is by using Substrate Connect, which allows applications to spawn their own light clients and connect directly to the exposed JSON-RPC endpoint. These applications would rely on in-browser local memory to establish a connection with the light client. Start building Parity maintains the following libraries built on top of the JSON-RPC API for interacting with a Substrate node: - subxt provides a way to create an interface for static front-ends built for specific chains. - Polkadot JS API provides a library to build dynamic interfaces for any Substrate-built blockchain. - Substrate Connect provides a library and a browser extension to build applications that connect directly with an in-browser light client created for its target chain. As a library that uses the Polkadot JS API, Connect is useful for applications that need to connect to multiple chains, providing end users with a single experience when interacting with multiple chains for the same app.
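As a concrete illustration of the HTTP flavor, the metadata query described earlier can be issued with a plain JSON-RPC POST. This is only a sketch: the node URL is a placeholder (a local development node conventionally serves HTTP RPC on port 9933, but check your own node's configuration), and the only RPC method assumed is the state_getMetadata call named above.

```python
import requests

payload = {"jsonrpc": "2.0", "id": 1, "method": "state_getMetadata", "params": []}
resp = requests.post("http://localhost:9933", json=payload, timeout=10)
resp.raise_for_status()

# The result is the SCALE-encoded metadata as a 0x-prefixed hex string.
metadata_hex = resp.json()["result"]
print(metadata_hex[:18], "...")
```

For subscription-style endpoints that stream multiple messages back, the same request shape would be sent over a WebSocket connection instead, as explained above.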
https://docs.substrate.io/main-docs/build/application-dev/
2022-08-08T01:49:08
CC-MAIN-2022-33
1659882570741.21
[]
docs.substrate.io
Gaussian CUBE File Format¶ Disclaimer¶ The CUBE file format as described here is NOT an official specification, sanctioned by Gaussian, Inc. It is instead a best effort to define the contents of a representative subset of CUBE files in circulation. FILES FORMATTED TO THIS SPECIFICATION MAY NOT BE COMPATIBLE WITH ALL SOFTWARE SUPPORTING CUBE FILE INPUT. Overview¶ The CUBE file format is described on the Gaussian webpage as part of the documentation of the cubegen utility [Gau16]. As noted there, all data in CUBE files MUST be stored in atomic units (electrons and Bohrs, and units derived from these). The format specification on the webpage of the VMD visualization program [UIUC16] provides a cleaner layout of one possible arrangement of CUBE file contents. In particular, the Gaussian specification is ambiguous about whitespace requirements, so parsing of CUBE files SHOULD accommodate some variation in the format, including (i) variable amounts/types of whitespace between the values on a given line, and (ii) the presence of leading and/or trailing whitespace on a given line. The CUBE file format as laid out below uses tagged fields ( {FIELD (type)}) to indicate the types of the various data elements and where they are located in the file. Descriptions of the fields are provided below the field layout. Lowercase algebraic symbols \(\left(x\right.\), \(y\), \(\left. z\right)\) indicate coordinates in the frame of the molecular geometry, whereas uppercase algebraic symbols \(\left(X\right.\), \(Y\), \(\left. Z\right)\) indicate coordinates in the voxel grid defined by {XAXIS}, {YAXIS}, and {ZAXIS}. All fields except for {DSET_IDS} and {NVAL} MUST be present in all files. {DSET_IDS} MUST be present if {NATOMS} is negative; it MUST NOT be present if {NATOMS} is positive. {NVAL} may be omitted if its value would be equal to one; it MUST be absent or have a value of one if {NATOMS} is negative. Field Layout¶ {COMMENT1 (str)} {COMMENT2 (str)} {NATOMS (int)} {ORIGIN (3x float)} {NVAL (int)} {XAXIS (int) (3x float)} {YAXIS (int) (3x float)} {ZAXIS (int) (3x float)} {GEOM (int) (float) (3x float)} . . {DSET_IDS (#x int)} . . {DATA (#x scinot)} . . Field Contents¶ Field Descriptions¶ {COMMENT1 (str)} and {COMMENT2 (str)} Two lines of text at the head of the file. Per VMD [UIUC16], by convention {COMMENT1}is typically the title of the system and {COMMENT2}is a description of the property/content stored in the file, but they MAY be anything. For robustness, both of these fields SHOULD NOT be zero-length. As well, while there is no defined maximum length for either of these fields, both SHOULD NOT exceed 80 characters in length. {NATOMS (int)} This first field on the third line indicates the number of atoms present in the system. A negative value here indicates the CUBE file MUST contain the {DSET_IDS}line(s); a positive value indicates the file MUST NOT contain this/these lines. The absolute value of {NATOMS}defines the number of rows of molecular geometry data that MUST be present in {GEOM}. The CUBE specification is silent as to whether a zero value is permitted for {NATOMS}; most applications likely do not support CUBE files with no atoms. {ORIGIN (3x float)} This set of three fields defines the displacement vector from the geometric origin of the system \(\left(0,0,0\right)\) to the reference point \(\left(x_0, y_0, z_0\right)\) for the spanning vectors defined in {XAXIS}, {YAXIS}, and {ZAXIS}. 
{NVAL (int)} If {NATOMS}is positive, this field indicates how many data values are recorded at each point in the voxel grid; it MAY be omitted, in which case a value of one is assumed. If {NATOMS}is negative, this field MUST be either absent or have a value of one. {XAXIS (int) (3x float)} The first field on this line is an integer indicating the number of voxels \(N_X\) present along the \(X\)-axis of the volumetric region represented by the CUBE file. This value SHOULD always be positive; whereas the input to the cubegen[Gau16] utility allows a negative value here as a flag for the units of the axis dimensions, in a CUBE file distance units MUST always be in Bohrs, and thus the ‘units flag’ function of a negative sign is superfluous. It is prudent to design applications to handle gracefully a negative value here, however. The second through fourth values on this line are the components of the vector \(\vec X\) defining the voxel \(X\)-axis. They SHOULD all be non-negative; proper loading/interpretation/calculation behavior is not guaranteed if negative values are supplied. As noted in the Gaussian documentation [Gau16], the voxel axes need neither be orthogonal nor aligned with the geometry axes. However, many tools only support voxel axes that are aligned with the geometry axes (and thus are also orthogonal). In this case, the first floatvalue \(\left(X_x\right)\) will be positive and the other two \(\left(X_y\right.\) and \(\left.X_z\right)\) will be identically zero. {YAXIS (int) (3x float)} This line defines the \(Y\)-axis of the volumetric region of the CUBE file, in nearly identical fashion as for {XAXIS}. The key differences are: (1) the first integer field \(N_Y\) MUST always be positive; and (2) in the situation where the voxel axes aligned with the geometry axes, the second floatfield \(\left(Y_y\right)\) will be positive and the first and third floatfields \(\left(Y_x\right.\) and \(\left.Y_z\right)\) will be identically zero. {ZAXIS (int) (3x float)} This line defines the \(Z\)-axis of the volumetric region of the CUBE file, in nearly identical fashion as for {YAXIS}. The key difference is that in the situation where the voxel axes are aligned with the geometry axes, the third floatfield \(\left(Z_z\right)\) will be positive and the first and second floatfields \(\left(Z_x\right.\) and \(\left.Z_y\right)\) will be identically zero. {GEOM (int) (float) (3x float)} This field MUST have multiple rows, equal to the absolute value of {NATOMS} Each row of this field provides atom identity and position information for an atom in the molecular system of the CUBE file: - (int)- Atomic number of atom \(a\) - (float)- Nuclear charge of atom \(a\) (will deviate from the atomic number when an ECP is used) - (3x float)- Position of the atom in the geometric frame of reference \(\left(x_a, y_a, z_a\right)\) {DSET_IDS (#x int)} This field is only present if {NATOMS}is negative This field comprises one or more rows of integers, representing identifiers associated with multiple {DATA}values at each voxel, with a total of \(m+1\) values present. The most common meaning of these identifiers is orbital indices, in CUBE files containing wavefunction data. The first value MUST be positive and equal to \(m\), to indicate the length of the rest of the list. Each of these \(m\) values may be any integer, with the constraint that all values SHOULD be unique. Further, all \(m\) values SHOULD be non-negative, as unpredictable behavior may result in some applications if negative integers are provided. 
{DATA (#x scinot)} This field encompasses the remainder of the CUBE file. Typical formatted CUBE output has up to six values on each line, in whitespace-separated scientific notation. If {NATOMS} is positive, a total of \(N_X N_Y N_Z \times\) {NVAL} values should be present, flattened as follows (in the below Python pseudocode the for-loop variables are iterated starting from zero):
for i in range(NX):
    for j in range(NY):
        for k in range(NZ):
            for l in range({NVAL}):
                write(data_array[i, j, k, l])
                if (k*{NVAL} + l) % 6 == 5:
                    write('\n')
        write('\n')
If {NATOMS} is negative and \(m\) datasets are present (see {DSET_IDS} above), a total of \(N_X N_Y N_Z m\) values should be present, flattened as follows:
for i in range(NX):
    for j in range(NY):
        for k in range(NZ):
            for l in range(m):
                write(data_array[i, j, k, l])
                if (k*m + l) % 6 == 5:
                    write('\n')
        write('\n')
The sequence of the data values along the last (l) dimension of the data array for each i, j, k MUST match the sequence of the identifiers provided in {DSET_IDS} in order for the dataset to be interpreted properly. Regardless of the sign of {NATOMS}, as illustrated above a newline is typically inserted after the block of data corresponding to each \(\left(X_i, Y_j\right)\) pair.
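Putting the header rules above together, here is a rough Python sketch of a reader for the header portion of a CUBE file. It is an illustration of the layout described in this document rather than a validating parser: it assumes a well-formed file, handles the optional {NVAL} field and the {DSET_IDS} line(s), and stops before the volumetric {DATA} records.

```python
def read_cube_header(path):
    """Parse the header of a CUBE file laid out as described above."""
    with open(path) as f:
        comment1 = f.readline().rstrip("\n")
        comment2 = f.readline().rstrip("\n")

        fields = f.readline().split()
        natoms = int(fields[0])                           # sign flags DSET_IDS presence
        origin = tuple(float(v) for v in fields[1:4])
        nval = int(fields[4]) if len(fields) > 4 else 1   # NVAL may be omitted

        axes = []                                         # XAXIS, YAXIS, ZAXIS records
        for _ in range(3):
            ax = f.readline().split()
            axes.append((int(ax[0]), tuple(float(v) for v in ax[1:4])))

        geom = []                                         # one row per atom
        for _ in range(abs(natoms)):
            g = f.readline().split()
            geom.append((int(g[0]), float(g[1]), tuple(float(v) for v in g[2:5])))

        dset_ids = []
        if natoms < 0:                                    # DSET_IDS only when NATOMS < 0
            vals = [int(v) for v in f.readline().split()]
            m, dset_ids = vals[0], vals[1:]
            while len(dset_ids) < m:                      # identifiers may wrap onto more lines
                dset_ids += [int(v) for v in f.readline().split()]

    return comment1, comment2, natoms, origin, nval, axes, geom, dset_ids
```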
https://h5cube-spec.readthedocs.io/en/latest/cubeformat.html
2022-08-08T02:18:04
CC-MAIN-2022-33
1659882570741.21
[]
h5cube-spec.readthedocs.io
All content with label async+concurrency+datagrid+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+non-blocking+release+roadmap+server+user_guide+whitepaper+write_through. Related Labels: podcast, expiration, publish, coherence, interceptor, replication, recovery, transactionmanager, partitioning, query, deadlock, intro, archetype, pojo_cache, lock_striping, jbossas, nexus, guide, schema, cache, amazon, s3, memcached, test, jcache, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, gridfs, out_of_memory, fine_grained,, searchable, cache_server, installation, scala, command-line, client, migration, filesystem, jpa, tx, article, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, murmurhash, standalone, snapshot, webdav, repeatable_read, docs, batching, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking more » ( - async, - concurrency, - datagrid, - grid, - hot_rod, - hotrod, - infinispan, - jboss_cache, - listener, - non-blocking, - release, - roadmap, - server, - user_guide, - whitepaper, - write_through ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/async+concurrency+datagrid+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+non-blocking+release+roadmap+server+user_guide+whitepaper+write_through
2019-10-14T06:23:12
CC-MAIN-2019-43
1570986649232.14
[]
docs.jboss.org
All content with label aws+buddy_replication+client+distribution+eventing+events+gridfs+infinispan+locking+query+rebalance+snapshot. Related Labels: publish, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, archetype, lock_striping, jbossas, nexus, guide, schema, listener, state_transfer, cache, amazon, s3, grid, memcached, test, jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, import, index, batch, configuration, hash_function, loader, colocation, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, read_committed, jbosscache3x, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, hinting, searchable, demo, installation, scala, ispn, command-line, migration, non-blocking, filesystem, jpa, tx, gui_demo, shell, client_server, testng, murmurhash, infinispan_user_guide, standalone, webdav, hotrod, repeatable_read, docs, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, jgroups, lucene, rest, hot_rod more » ( - aws, - buddy_replication, - client, - distribution, - eventing, - events, - gridfs, - infinispan, - locking, - query, - rebalance, - snapshot ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/aws+buddy_replication+client+distribution+eventing+events+gridfs+infinispan+locking+query+rebalance+snapshot
2019-10-14T07:01:19
CC-MAIN-2019-43
1570986649232.14
[]
docs.jboss.org
All content with label batching+cloud+faq+gridfs+hotrod+infinispan+jboss_cache+jta+lock_striping+mvcc+notification+out_of_memory+setup. Related Labels: podcast, expiration, publish, datagrid, coherence, server, replication, recovery, transactionmanager, dist, release, partitioning, query, deadlock, intro, archetype, jbossas, guide, schema, listener, cache, s3, amazon, grid, memcached, test, jcache, api, ehcache, maven, documentation, youtube, write_behind, ec2, 缓存, hibernate, interface, custom_interceptor, clustering, eviction, concurrency, import, events, configuration, batch, hash_function, buddy_replication, loader, xa, write_through, remoting, tutorial, murmurhash2, presentation, jbosscache3x, read_committed, xml, distribution, meeting,, standalone, webdav, snapshot, repeatable_read, docs, consistent_hash, 2lcache, as5, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - batching, - cloud, - faq, - gridfs, - hotrod, - infinispan, - jboss_cache, - jta, - lock_striping, - mvcc, - notification, - out_of_memory, - setup ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/batching+cloud+faq+gridfs+hotrod+infinispan+jboss_cache+jta+lock_striping+mvcc+notification+out_of_memory+setup
2019-10-14T06:18:14
CC-MAIN-2019-43
1570986649232.14
[]
docs.jboss.org
BCDEdit /set. bcdedit /set [{ID}] datatype value value The following list shows some useful datatypes and their associated values. bootlog [ yes | no ] Enables the system initialization log. This log is stored in the Ntbtlog.txt file in the %WINDIR% directory. It includes a list of loaded and unloaded drivers in text format. bootmenupolicy [ Legacy | Standard ]policy policy Controls the boot status policy. The boot status policy can be one of the following: DisplayAllFailures: Displays all errors if there is a failed boot, failed shutdown, or failed checkpoint. The computer will fail over to the Windows recovery environment on reboot. IgnoreAllFailures: Ignore errors if there is a failed boot, failed shutdown, or failed checkpoint. The computer will attempt to boot normally after an error occurs. IgnoreShutdownFailures: Only ignore errors if there is a failed shutdown. If there is a failed shutdown, the computer does not automatically fail over to the Windows recovery environment on reboot. This is the default setting for Windows 8. IgnoreBootFailures: Only ignore errors if there is a failed boot. If there is a failed boot, the computer does not automatically fail over to the Windows recovery environment on reboot. IgnoreCheckpointFailures: Only ignore errors if there is a failed checkpoint. If there is a failed checkpoint, the computer does not automatically fail over to the Windows recovery environment on reboot. The option is available starting with Windows 8 and Windows Server 2012. DisplayShutdownFailures: Displays errors if there is a failed shutdown. If there is a failed shutdown, the computer will fail over to the Windows recovery environment on reboot. Ignores boot failures and failed checkpoints. The option is available starting with Windows 8 and Windows Server 2012. DisplayBootFailures: Displays errors if there is a failed boot. If there is a failed boot, the computer will fail over to the Windows recovery environment on reboot. Ignores shutdown failures and failed checkpoints. The option is available starting with Windows 8 and Windows Server 2012. DisplayCheckpointFailures: Displays errors if there is a failed checkpoint. If there is a failed checkpoint, the computer will fail over to the Windows recovery environment on reboot. Ignores boot and shutdown failures. The option is available starting with Windows 8 and Windows Server 2012. bootux [ disabled | basic | standard ] Controls the boot screen animation. The possible values are disabled, basic, and standard. Note Not supported in Windows 8 and Windows Server 2012. disabledynamictick [ yes | no ] Enables and disables dynamic timer tick feature. Note This option should only be used for debugging. disableelamdrivers [ yes | no ]. forcelegacyplatform [ yes | no ] Forces the OS to assume the presence of legacy PC devices like CMOS and keyboard controllers. Note This option should only be used for debugging. groupsize max | off ]. hal file Directs the operating system loader to load an alternate HAL file. The specified file must be located in the %SystemRoot%\system32 directory. hypervisorbusparams Bus.Device.Function Defines the PCI bus, device, and function numbers of the debugging device. For example, 1.5.0 describes the debugging device on bus 1, device 5, function 0. Use this option when you are using either a 1394 cable, or a USB 2.0 or USB 3.0 debug cable for debugging. hypervisordebug [ On | Off ] Controls whether the hypervisor debugger is enabled. Serial Specifies a serial connection for debugging. 
When the Serial option is specified, you also set the hypervisordebugport and hypervisorbaudrate options. bcdedit /set hypervisordebugtype serial bcdedit /set hypervisordebugport 1 bcdedit /set hypervisorbaudrate 115200 bcdedit /set hypervisordebug on bcdedit /set hypervisorlaunchtype auto 1394 Specifies an IEEE 1394 (FireWire) connection for debugging. When this option is used, the hypervisorchannel option should also be set. Important The 1394 transport is available for use in Windows 10, version 1607 and earlier. It is not available in later versions of Windows. You should transition your projects to other transports, such as KDNET using Ethernet. For more information about that transport, see Setting Up KDNET Network Kernel Debugging Automatically. Net Specifies an Ethernet network connection for debugging. When this option is used, the hypervisorhostip option must be also be set. hypervisorhostip IP address (Only used when the hypervisordebugtype is Net.) For debugging hypervisor over a network connection, specifies the IPv4 address of the host debugger. For information about debugging Hyper-V, see Create a Virtual Machine with Hyper-V. hypervisorhostport [ port ] (Only used when the hypervisordebugtype is Net.) For network debugging, specifies the port to communicate with on the host debugger. Should be 49152 or higher. hypervisordhcp [ yes | no ] Controls use of DHCP by the network debugger used with the hypervisor. Setting this to no forces the use of Automatic Private IP Addressing (APIPA) to obtain a local link IP address. hypervisoriommupolicy [ default | enable | disable] Controls whether the hypervisor uses an Input Output Memory Management Unit (IOMMU). hypervisorlaunchtype [ Off | Auto ] | No ] Specifies whether the hypervisor should enforce snoop control on system IOMMUs. hypervisornumproc number Specifies the total number of logical processors that can be started in the hypervisor. hypervisorrootproc number Specifies the maximum number of virtual processors in the root partition and limits the number of post-split Non-Uniform Memory Architecture (NUMA) nodes which can have logical processors started in the hypervisor. hypervisorrootprocpernode number Specifies the total number of virtual processors in the root partition that can be started within a pre-split Non-Uniform Memory Architecture (NUMA) node. hypervisorusekey [ key ] (Only used when the hypervisordebugtype is Net.) For network debugging specifies the key with which to encrypt the connection. [0-9] and [a-z] allowed only. hypervisoruselargevtlb [ yes | no Increases virtual Translation Lookaside Buffer (TLB) size. increaseuserva Megerva Meg. kernel file Directs the operating system loader to load an alternate kernel. The specified file must be located in the %SystemRoot%\system32 directory.. Note For 1394 debugging, the bus parameters must be specified in decimal, regardless of which version of Windows is being configured. The format of the bus parameters used for USB 2.0 debugging depends on the Windows version. In Windows Server 2008, the USB 2.0 bus parameters must be specified in hexadecimal. In Windows 7 and Windows Server 2008 R2 and later versions of Windows, the USB 2.0 bus parameters must be specified in decimal. maxgroup [ on | off ] this. nointegritychecks [ on | off ] Disables integrity checks. Cannot be set when secure boot is enabled. This value is ignored by Windows 7 and Windows 8.. novesa [ on | off ] Indicates whether the VGA driver should avoid VESA BIOS calls. 
The option is ignored in Windows 8 and Windows Server 2012. novga [ on | off ] Disables the use of VGA modes in the OS. The option is available starting in Windows 8 and Windows Server 2012. nx [Optin |OptOut | AlwaysOn |AlwaysOff] Enables, disables, and configures Data Execution Prevention (DEP), a set of hardware and software technologies designed to prevent harmful code from running in protected memory locations. For information about DEP settings, see Data Execution Prevention. onetimeadvancedoptions [ on | off ] Controls whether the system boots to the legacy menu (F8 menu) on the next boot. Note The option is available starting in Windows 8 and Windows Server 2012. bcdedit /set {current} onetimeadvancedoptions on pae [ Default | ForceEnable | ForceD. pciexpress [ default | forced. quietboot [ on | off ] Controls the display of a high-resolution bitmap in place of the Windows boot screen display and animation. In operating systems prior to Windows Vista, the /noguiboot serves a similar function. Note Do not use the quietboot option in Windows 8 as it will prevent the display of bug check data in addition to all boot graphics.. testsigning [ on | off ]. Note Before setting BCDEdit options you might need to disable or suspend BitLocker and Secure Boot on the computer. tpmbootentropy [ default | ForceEnable | ForceDisable] Determines whether entropy is gathered from the trusted platform module (TPM) to help seed the random number generator in the operating system. 0x40000000 tscsyncpolicy [ Default | Legacy | Enhanced ] Controls the times stamp counter synchronization policy. This option should only be used for debugging. Note The option is available starting in Windows 8 and Windows Server 2012. usefirmwarepcisettings [ yes | no ] Enables or disables the use of BIOS-configured peripheral component interconnect (PCI) resources. useplatformclock [ yes | no ] Forces the use of the platform clock as the system's performance counter. Note This option should only be used for debugging. uselegacyapicmode [ yes | no ] Used to force legacy APIC mode, even if the processors and chipset support extended APIC mode. useplatformtick [ yes | no ] Forces the clock to be backed by a platform source, no synthetic timers are allowed. The option is available starting in Windows 8 and Windows Server 2012. Note This option should only be used for debugging.. xsavedisable [ 0 | 1 ] When set to a value other than zero (0), disables XSAVE processor functionality in the kernel. x2apicpolicy [ enable | disable ] Enables or disables the use of extended APIC mode, if supported. The system defaults to using extended APIC mode if it is. bcdedit /deletevalue groupsize Any change to a boot option requires a restart to take effect. For information about commonly used BCDEdit commands, see Boot Configuration Data Editor Frequently Asked Questions. Requirements See also Feedback
https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/bcdedit--set?redirectedfrom=MSDN
2019-10-14T06:02:34
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Indicates the store that the app was built for. It will yield one of the following strings: "apple"— Always returned on iOS. "google"— Targets Google Play (only returned on Android). "amazon"— Targets the Amazon Appstore (only returned on Android). "windows"— Targets the Windows app store. "none"— Indicates that the app is not targeting a specific app store. This is always returned by the Corona Simulator. This property yields the same result as passing "targetAppStore" to the system.getInfo() function. store.target
https://docs.coronalabs.com/api/library/store/target.html
2019-10-14T05:55:13
CC-MAIN-2019-43
1570986649232.14
[]
docs.coronalabs.com
Prepare your Windows game for publishing [This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation] This section provides Windows 8 game developers with info about the tools and support for common game publishing and packaging scenarios. Note For info on game ratings and certificates, see Windows Store age ratings and boards.
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh452788%28v%3Dwin.10%29
2019-10-14T06:36:23
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
All you need to use stage 2 (and greater) plugins This preset includes the following plugins: And all plugins from presets: The gist of Stage 2 is: Stage 2: draft What is it? A first version of what will be in the specification. At this point, an eventual inclusion of the feature in the standard is likely. What’s required? The proposal must now additionally have a formal description of the syntax and semantics of the feature (using the formal language of the ECMAScript specification). The description should be as complete as possible, but can contain todos and placeholders. Two experimental implementations of the feature are needed, but one of them can be in a transpiler such as Babel. What’s next? Only incremental changes are expected from now on. npm install --save-dev babel-preset-stage-2 .babelrc(Recommended) .babelrc { "presets": ["stage-2"] } babel script.js --presets stage-2 require("babel-core").transform("code", { presets: ["stage-2"] }); © 2018 Sebastian McKenzie Licensed under the MIT License.
https://docs.w3cub.com/babel/plugins/preset-stage-2/
2019-10-14T05:22:33
CC-MAIN-2019-43
1570986649232.14
[]
docs.w3cub.com
database. Important Do not add flexible database roles as members of fixed roles. This could enable unintended privilege escalation. The following table shows the fixed database-level roles and their capabilities. These roles exist in all databases.. Related Content Security Catalog Views (Transact-SQL) Security Stored Procedures (Transact-SQL) Security Functions (Transact-SQL) sp_helprotect (Transact-SQL)
https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-2014&redirectedfrom=MSDN
2019-10-14T06:11:05
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
method lock Documentation for method lock assembled from the following types: class Lock (Lock). class IO::CatHandle From IO::CatHandle (IO::CatHandle) method lock Defined as: method lock(IO::CatHandle: Bool : = False, Bool : = False --> True) Same as IO::Handle.lock. Returns Nil if the source handle queue has been exhausted. Locks only the currently active source handle. The .on-switch Callable can be used to conveniently lock/unlock the handles as they're being processed by the CatHandle. class IO::Handle From IO::Handle (IO::Handle) method lock Defined as: method lock(IO::Handle: Bool : = False, Bool : = False --> True) Places an advisory lock on the filehandle. class Lock::Async From Lock::Async (Lock::Async).
http://docs.perl6.wakelift.de/routine/lock
2019-10-14T06:24:02
CC-MAIN-2019-43
1570986649232.14
[]
docs.perl6.wakelift.de
Obtains information on the formatting capabilities required to export the current document correctly. Namespace: DevExpress.XtraRichEdit.API.Native Assembly: DevExpress.RichEdit.v19.1.Core.dll In order to check whether the document can be correctly exported to various formats, you can get the information about formatting features applied to the document. The RequiredExportCapabilities property provides access to a structure which indicates if non-default character formatting, paragraph formatting, inline pictures or objects etc. are contained within the document. Use its properties, such as the DocumentExportCapabilities.ParagraphFormatting, DocumentExportCapabilities.Sections and others, or the DocumentExportCapabilities.Contains method to compare the export method capabilities with the document features which should be supported, to ensure the correct result. The following code snippet illustrates how you can detect if the capabilities of your custom exporter match the formatting capabilities required to export the current document. First, declare the exporter capabilities in the DocumentExportCapabilities structure, and then compare it with the structure obtained via the Document.RequiredExportCapabilities property. DevExpress.XtraRichEdit.DocumentExportCapabilities myExportFeatures = new DevExpress.XtraRichEdit.DocumentExportCapabilities(); myExportFeatures.CharacterFormatting = true; myExportFeatures.ParagraphFormatting = true; myExportFeatures.InlinePictures= true; if(myExportFeatures.Contains(richEditControl1.Document.RequiredExportCapabilities)) MessageBox.Show("The document can be exported");
https://docs.devexpress.com/OfficeFileAPI/DevExpress.XtraRichEdit.API.Native.Document.RequiredExportCapabilities
2019-10-14T05:57:59
CC-MAIN-2019-43
1570986649232.14
[]
docs.devexpress.com
preload type: String or Number default: 2 Defines how many images Galleria should preload in advance. Please note that this only applies when you are using separate thumbnail files. Galleria always caches all preloaded images. - 2 preloads the next 2 images in line - ‘all’ forces Galleria to start preloading all images. This may slow down the client. - 0 will not preload any images
https://docs.galleria.io/options/preload.html
2019-10-14T05:37:14
CC-MAIN-2019-43
1570986649232.14
[]
docs.galleria.io
Changing Your Password To change your password for the TagniFi Platform, select My Account from the upper navigation menu. Select the Edit icon under the Password section. If you know your current password, enter it along with your new password. If you've forgotten your password, select 'Forgot your password?' and a password reset link will be emailed to you.
https://docs.tagnifi.com/article/169-changing-your-password
2019-10-14T06:58:43
CC-MAIN-2019-43
1570986649232.14
[]
docs.tagnifi.com
. Install & Uninstall Preview NPM Packages The latest version of preview NPM packages can be installed by the running below command in the root folder of application: abp switch-to-preview If you're using the ABP Framework preview packages, you can switch back to stable version using this command: abp switch-to-stable See the ABP CLI documentation for more information.
https://docs.abp.io/en/abp/2.9/Nightly-Builds
2022-08-08T08:03:02
CC-MAIN-2022-33
1659882570767.11
[]
docs.abp.io
Glacio-hydrological model calibration and evaluation DOI: Persistent URL: Persistent URL: van Tiel, Marit; Stahl, Kerstin; Freudiger, Daphné; Seibert, Jan, 2020: Glacio-hydrological model calibration and evaluation. In: Wiley Interdisciplinary Reviews: Water, Band 7, 6, DOI: 10.1002/wat2.1483. Glaciers are essential for downstream water resources. Hydrological modeling is necessary for a better understanding and for future projections of the water resources in these rapidly changing systems, but modeling glacierized catchments is especially challenging. Here we review a wealth of glacio-hydrological modeling studies (145 publications) in catchments around the world. Major model challenges include a high uncertainty in the input data, mainly precipitation, due to scarce observations. Consequently, the risk of wrongly compensating input with model errors in competing snow and ice accumulation and melt process parameterization is particularly high. Modelers have used a range of calibration and validation approaches to address this issue. The review revealed that while a large part (~35%) of the reviewed studies used only streamflow data to evaluate model performances, most studies (~50%) have used additional data related to snow and glaciers to constrain model parameters. These data were employed in a variety of calibration strategies, including stepwise and multi-signal calibration. Although the primary aim of glacio-hydrological modeling studies is to assess future climate change impacts, long-term changes have rarely been taken into account in model performance evaluations. Overall, a more precise description of which data are used how for model evaluation would facilitate the interpretation of the simulation results and their uncertainty, which in turn would support water resources management. Moreover, there is a need for systematic analyses of calibration approaches to disentangle what works best and why. Addressing this need will improve our system understanding and model simulations of glacierized catchments. This article is categorized under: Science of Water > Hydrological Processes Science of Water > Methods Statistik:View Statistics Collection - Geographie, Hydrologie [357] This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
https://e-docs.geo-leo.de/handle/11858/9308
2022-08-08T07:49:39
CC-MAIN-2022-33
1659882570767.11
[]
e-docs.geo-leo.de
Table of Contents Product Index All this standing has made Genesis 8 Female a bit tired, she needs a moment to lie down. NG Build Your Own Lying Down Poses for Genesis 8 Female is a set of partial poses with an emphasis on lying on a flat surface, in 3 different orientations: on the side of her hip, on her belly and on her back. Partial poses are divided into 3 groups of subfolders with a multitude of lower and upper body poses (and mirrors) designed specifically for each orientation. Additionally, numerous extra foot and toe poses are provided to add an extra element of uniqueness. Shoe fits are provided to zero metatarsals and individual toe poses only, in order to remove poke through when your Genesis 8 Female is wearing footwear. Each pose is uniquely named and thumbnails are designed with tips so you can hover and see each pose selection in better detail. Get NG Build Your Own Lying Down Poses to add another element of diversity to your partial poses.
http://docs.daz3d.com/doku.php/public/read_me/index/72921/start
2022-08-08T07:46:55
CC-MAIN-2022-33
1659882570767.11
[]
docs.daz3d.com
The Python agent API allows you to customize and extend your monitoring. Use the Python agent API to: - Manually instrument an unsupported framework or third-party system. - Add instrumentation to supplement the agent's default monitoring. This document describes some of the available Python API calls. For a description of all our available APIs, see Introduction to APIs. Custom instrumentation or API If your goal is custom instrumentation, consider using the configuration file method, which allows you to add functions and class methods to the config file that will be auto-instrumented by the agent. The benefit of the config-file method is that it does not require you to change your application code. However, the Python agent API is much more powerful and is best for setting up more complex and tailored instrumentation. To ensure you have access to the full API functionality, update to the latest Python agent. Monitor transactions and segments The Python agent is compatible with most of the common WSGI web frameworks. If the agent supports your framework, web requests are monitored automatically. Sometimes your target is visible in our UI, but some details of the method are not useful. For example: - The default name is not helpful, or it is causing a metric grouping issue. - You want to add custom attributes to your transactions so you can filter them when querying. Use these calls when you want to change the metadata of an existing transaction: See related logs To see logs directly within the context of your application's errors and traces, use the get_linking_metadata API call to annotate your logs. For more information about correlating log data with other telemetry data, see our logs in context documentation. Report custom events and custom metric data The agent reports data in two primary forms: - Metric data measures numeric, time-based values; for example, connections per minute. - Event data captures discrete event information. Events have key-value attributes attached to them. You can analyze and query event data. Use these methods to create new event data and new metric data: Message-related calls These API calls allow you to collect performance data on your message-passing architecture or service; for example, RabbitMQ. To use these calls, make sure you have Python agent version 2.88.0.72 or higher. Implement distributed tracing These APIs require distributed tracing to be enabled. Services and applications monitored by our agents will automatically pass distributed tracing context to each other when using a supported framework. When not using a supported framework, you will need to use the distributed tracing APIs to manually accept this context. Supported web frameworks (for example, Flask, Django, Tornado) will automatically call accept_distributed_trace_payload when creating a transaction. Supported external web services libraries will automatically call create_distributed_trace_payload before making an external HTTP call. For general instructions on how to use the calls below to implement distributed tracing, see Use distributed tracing APIs. Agent configuration, initialization, shutdown These calls help you manage Python agent behavior, such as initializing and integrating the agent, and referencing or changing configuration settings: Control the Browser monitoring agent You can install the browser monitoring agent by automatically adding it to your pages or by deploying it on specific pages by copying and pasting the browser agent JavaScript snippet.
You can also control the browser agent by using APM agent API calls. For more information, see Browser agent and the Python agent.
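As a small, hedged illustration of a few of the calls mentioned above (initialization, custom events, custom metrics, and log linking metadata), the sketch below uses the public newrelic.agent module; the config file path, task name, and attribute values are placeholders rather than prescribed names.

```python
import newrelic.agent

# Initialize and register the agent before any work is done
# (the config file path is a placeholder; app naming lives in that file).
newrelic.agent.initialize("newrelic.ini")
newrelic.agent.register_application(timeout=10.0)

@newrelic.agent.background_task(name="nightly-cleanup")
def nightly_cleanup():
    rows_removed = 42  # placeholder value

    # Event data: a discrete event with key-value attributes you can query later.
    newrelic.agent.record_custom_event("CleanupRun", {"rows_removed": rows_removed})

    # Metric data: a numeric, time-based value.
    newrelic.agent.record_custom_metric("Custom/Cleanup/RowsRemoved", rows_removed)

    # Logs in context: attach trace/span linking metadata to a log line.
    print("cleanup finished", newrelic.agent.get_linking_metadata())

if __name__ == "__main__":
    nightly_cleanup()
    newrelic.agent.shutdown_agent(timeout=10.0)   # flush data before the process exits
```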
https://docs.newrelic.com/docs/apm/agents/python-agent/python-agent-api/guide-using-python-agent-api/
2022-08-08T07:32:44
CC-MAIN-2022-33
1659882570767.11
[]
docs.newrelic.com
12.1. Prerequisites 12.1.1. Compilers Although it should probably be assumed, you’ll need a C compiler that supports C99. You’ll also need a Fortran compiler if you want to build the Fortran MPI bindings (the more recent the Fortran compiler, the better), and a Java compiler if you want to build the (unofficial) Java MPI bindings. 12.1.2. GNU Autotools When building Open MPI from its repository sources, the GNU Autotools must be installed (i.e., GNU Autoconf, GNU Automake, and GNU Libtool). Note The GNU Autotools are not required when building Open MPI from distribution tarballs. Open MPI distribution tarballs are bootstrapped such that end-users do not need to have the GNU Autotools installed. You can generally install GNU Autoconf, Automake, and Libtool via your Linux distribution native package system, or via Homebrew or MacPorts on MacOS. This usually “just works.” If you run into problems with the GNU Autotools, or need to download / build them manually, see the how to build and install GNU Autotools section for much more detail. 12.1.3. Flex Minimum supported version: 2.5.4. Flex is used during the compilation of a developer’s checkout (it is not used to build official distribution tarballs). Other flavors of lex are not supported: given the choice of making parsing code portable between all flavors of lex and doing more interesting work on Open MPI, we greatly prefer the latter. Note that no testing has been performed to see what the minimum version of Flex is required by Open MPI. We suggest that you use v2.5.35 at the earliest. For now, Open MPI will allow developer builds with Flex 2.5.4. This is primarily motivated by the fact that RedHat/CentOS 5 ships with Flex 2.5.4. It is likely that someday Open MPI developer builds will require Flex version >=2.5.35. Note that the flex-generated code generates some compiler warnings on some platforms, but the warnings do not seem to be consistent or uniform on all platforms, compilers, and flex versions. As such, we have done little to try to remove those warnings. If you do not have Flex installed and cannot easily install it via your operating system’s packaging system (to include Homebrew or MacPorts on MacOS), see the Flex Github repository. 12.1.4. Sphinx Sphinx is used to generate both the HTML version of the documentation (that you are reading right now) and the nroff man pages. Official Open MPI distribution tarballs contain pre-built HTML documentation and man pages. This means that – similar to the GNU Autotools – end users do not need to have Sphinx installed, but will still have both the HTML documentation and man pages installed as part of the normal configure / build / install process. However, the HTML documentation and man pages are not stored in Open MPI’s Git repository; only the ReStructred Text source code of the documentation is in the Git repository. Hence, if you are building Open MPI from a Git clone, you will need Sphinx (and some Python modules) in order to build the HTML documentation and man pages. Important Most systems do not have Sphinx and/or the required Python modules installed by default. See the Installing Sphinx section for details on how to install Sphinx and the required Python modules. If configure is able to find Sphinx and the required Python modules, it will automatically generate the HTML documentation and man pages during the normal build procedure (i.e., during make all). If configure is not able to find Sphinx and/or the required Python modules, it will simply skip building the documentation. 
Note If you have built/installed Open MPI from a Git clone and unexpectedly did not have the man pages installed, it is likely that you do not have Sphinx and/or the required Python modules available. See the Installing Sphinx section for details on how to install Sphinx and the required Python modules. Important make dist will fail if configure did not find Sphinx and/or the required Python modules. Specifically: if make dist is not able to generate the most up-to-date HTML documentation and man pages, you cannot build a distribution tarball. This is an intentional design decision.
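If you are building from a Git clone and want to confirm ahead of time that the documentation toolchain is available, a quick local check along these lines can help. It is only a sketch: the authoritative list of required Python modules lives in Open MPI's own docs requirements file and is not reproduced here.

```python
import importlib.util
import shutil

missing = []
if shutil.which("sphinx-build") is None:
    missing.append("the sphinx-build executable")

# Extend this tuple with whatever modules your checkout's docs actually require.
for module in ("sphinx",):
    if importlib.util.find_spec(module) is None:
        missing.append(f"the Python module '{module}'")

if missing:
    print("configure will skip the HTML docs and man pages; missing:", ", ".join(missing))
else:
    print("Sphinx toolchain found; configure should build the docs during 'make all'.")
```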
https://docs.open-mpi.org/en/main/developers/prerequisites.html
2022-08-08T07:28:47
CC-MAIN-2022-33
1659882570767.11
[]
docs.open-mpi.org
# Slack Fairwinds Insights has an integration with Slack so you can get notifications about critical changes to your clusters. There are two types of Slack notifications: - Realtime: Alert every time one of your reports generates new Action Items. This is good for production clusters, which deserve more attention and should be relatively stable - Daily Digest: One alert per day highlighting any new Action Items or fixed Action Items in your cluster from the previous day Read our privacy policy to learn more about how Fairwinds Insights handles user data. # Installation To set up Slack notifications: - Visit your organization's Settings > Integration page - Hover over Slack and click Add Integration - Once you have connected Slack to Insights, you can choose which channels you'd like notifications to be sent to in the Settings > Notifications page See the configure section to customize Slack alerts through Automation Rules.
https://insights.docs.fairwinds.com/installation/integrations/slack/
2022-08-08T07:48:17
CC-MAIN-2022-33
1659882570767.11
[]
insights.docs.fairwinds.com
Table of Contents Product Index Honeymoon Bathroom Props is a furniture set inspired by Caribbean beach styles. This furniture set contains 26 separate props and 4 subset groups for interior bathroom scenes that are comfy, cozy, and designed to please your characters. The Honeymoon Bathroom Props can make your bathroom scenes more realistic and attractive..
http://docs.daz3d.com/doku.php/public/read_me/index/72515/start
2022-08-08T07:41:42
CC-MAIN-2022-33
1659882570767.11
[]
docs.daz3d.com
Table of Contents Product Index Are you looking to add a little summer, sun and fun to your life? Look no further than Elodie for Genesis 8 Female. With her sun-kissed skin and multiple make-up nail and lip options, she can go from beach bunny to starlet on the red carpet with the click of a button. For a little more summer fun, take a peek at her Beachin' options complete with tan lines and soft subtle barely-there looks for a day at the beach. Lastly, she shows off her love for all things sea and sand with 7 custom drawn tattoos available for both skin options. Let Elodie surf into your renders and your heart.
http://docs.daz3d.com/doku.php/public/read_me/index/72933/start
2022-08-08T08:17:16
CC-MAIN-2022-33
1659882570767.11
[]
docs.daz3d.com
Phone actions The Genesys package offers a palette of actions to detail and access data for phone records. Phone action palette for Genesys Genesys Cloud supports the WebRTC technology with the Genesys Cloud WebRTC phone. The Genesys Cloud WebRTC phone runs from a browser, that is, there are no special hardware requirements or software to download. When the Genesys Cloud WebRTC phone is enabled, you can immediately use it to make and receive calls. It is an important part of the onboarding process that phones be created and assigned to agents so that calls are effectively routed from the queue to agents. Configuring a Genesys Cloud WebRTC phone is a two-step operation. First, base settings must be created and configured by the admin and must already be established. Second, the phone is created and configured to retrieve established base settings. These secondary tasks can be automated. The Genesys package includes the following phone actions:
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v2019/page/enterprise-cloud/topics/aae-client/bot-creator/using-the-workbench/genesys-phone-actions.html
2022-08-08T07:02:57
CC-MAIN-2022-33
1659882570767.11
[]
docs.automationanywhere.com
Medien Optionen From Joomla! Documentation Outdated translations are marked like this. Description Media Manager Options configuration allows setting of parameters used globally for Media Manager. - Control the file types allowed for uploading, - MIME type check, - MIME type blacklisting, and more options for Media Manager. How to Access - Select Content → Media from the dropdown menu of the Administrator Panel - Click the Options button in the toolbar. Screenshot Form Fields Media - Legal Extensions (File Types). File types (extensions) users are allowed to upload, separated by a comma. Example: jpg,png,cvs... - Maximum Size (in MB). Maximum file size in MB allowed for uploading. - Path to Files Folder. Path to file folder relative to the root of Joomla installation.Note: Changing the default 'Path to Files Folder' to another folder than default 'images' may break your image links. - Path to Images Folder. Path to images folder relative to the root of Joomla installation.Note: The 'Path to Images Folder' has to be the same or to a subfolder of 'Path to Files'. - Restrict Uploads. (Yes/No) Restrict uploads to just images for Users with less than a Manager Permission if Fileinfo or MIME Magic isn't installed on server. - Check MIME Types. (Yes/No) Use MIME Magic or Fileinfo to verify file types. - Legal Image Extensions (File Types). Image file types allowed for uploading, comma separated. Used to check for valid image headers. - Ignored Extensions. Ignored file types for MIME checking, comma separated. - Legal MIME Types. Legal MIME types for MIME checking, comma separated. - Illegal MIME Types. Comma separated list of not allowed MIME Types. Example list: text/html,application/javascript,application/x-httpd-php ... Permissions Manage the permission settings for user groups. To change the permissions for media, do the following. - 1. Select the Group by clicking its title located on the left. - 2. Find the desired Action. Possible Actions are: - Configure ACL & Options. Users can edit the options and permissions of media. - Configure Options Only. Users can edit the options except the permissions of media. - Access Administration Interface. Users can access user administration interface of media. - Create. Users can create content of media. - Delete. Users can delete content of media. - 3. Select the desired Permission for the action you wish to change. Possible settings are: - Inherited: Inherited for users in this Group from the Global Configuration At the top left you will see the toolbar. The functions are: - Save. Saves the Media options and stays in the current screen. - Save & Close. Saves the Media options and closes the current screen. - Cancel. Closes the current screen and returns to the previous screen without saving any modifications you may have made. - Help. Opens this help screen. Quick Tips Remember, these choices are applied globally.
https://docs.joomla.org/Help310:Components_Media_Manager_Options/de
2022-08-08T07:15:41
CC-MAIN-2022-33
1659882570767.11
[]
docs.joomla.org
Raster Tiles The Mapbox Raster Tiles API serves raster tiles generated from satellite imagery tilesets and tilesets generated from raster data uploaded to Mapbox.com. Retrieve raster tiles get{tileset_id}/{zoom}/{x}/{y}{@2x}.{format} Example request: Retrieve raster tiles # Retrieve a 2x tile; this 512x512 tile is appropriate for high-density displays $ curl "" Response: Retrieve raster tiles The response is a raster image tile in the specified format. For performance, image tiles are delivered with a max-age header value set 12 hours in the future. Raster Tiles API errors Raster Tiles API restrictions and limits - The default rate limit for the Mapbox Raster Tiles API endpoint is 100,000 requests per minute. If you require a higher rate limit, contact us. - If you exceed the rate limit, you will receive an HTTP 429 Too Many Requestsresponse. For information on rate limit headers, see the Rate limit headers section. - Responses from the Raster. Raster Tiles API pricing Usage of the Raster Tiles API is measured in tile requests. Details about the number of tile requests included in the free tier and the cost per request beyond what is included in the free tier are available on the pricing page.
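For example, the request above can be reproduced from Python with any HTTP client. In this sketch the host path (api.mapbox.com/v4), the tileset id (mapbox.satellite), the tile coordinates, the jpg90 format, and the token are all placeholder assumptions (they are not spelled out in the excerpt above), so substitute your own values.

```python
import requests

ACCESS_TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"               # placeholder
tileset_id, zoom, x, y = "mapbox.satellite", 1, 0, 0    # placeholders

# @2x requests the 512x512 tile appropriate for high-density displays.
url = f"https://api.mapbox.com/v4/{tileset_id}/{zoom}/{x}/{y}@2x.jpg90"
resp = requests.get(url, params={"access_token": ACCESS_TOKEN}, timeout=30)
resp.raise_for_status()

with open("tile.jpg", "wb") as out:
    out.write(resp.content)                             # raster image tile bytes
```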
https://docs.mapbox.com/api/maps/raster-tiles/
2022-08-08T06:55:03
CC-MAIN-2022-33
1659882570767.11
[]
docs.mapbox.com
You're reading the documentation for a version of ROS 2 that has reached its EOL (end-of-life), and is no longer officially supported. If you want up-to-date information, please have a look at Humble. Overview and usage of RQt Table of Contents Overview RQt is a graphical user interface framework that implements various tools and interfaces in the form of plugins. One can run all the existing GUI tools as dockable windows within RQt! The tools can still run in a traditional standalone method, but RQt makes it easier to manage all the various windows in a single screen layout. You can run any RQt tools/plugins easily by: rqt This GUI allows you to choose any available plugins on your system. You can also run plugins in standalone windows. For example, RQt Python Console: ros2 run rqt_py_console rqt_py_console Users can create their own plugins for RQt with either Python or C++. Over 20 plugins were created in ROS 1 and these plugins are currently being ported to ROS 2 (as of Dec 2018, more info). System setup Installing From Debian sudo apt install ros-dashing-rqt* RQt Components Structure RQt consists of three metapackages: rqt - core infrastructure modules. - rqt_common_plugins - Backend tools for building tools. TODO: as of Dec 2018 this metapackage isn't available in ROS 2 since not all plugins it contains have been ported yet. - rqt_robot_plugins - Tools for interacting with robots during runtime. TODO: as of Dec 2018 this metapackage isn't available in ROS 2 since not all plugins it contains have been ported yet. Advantage of RQt framework Compared to building your own GUIs from scratch: Standardized common procedures for GUI (start-shutdown hook, restore previous states). Multiple widgets can be docked in a single window. Easily turn your existing Qt widgets into RQt plugins. Expect support at ROS Answers (ROS community website for questions). From system architecture's perspective: Further Reading ROS 2 Discourse announcement of porting to ROS 2. RQt for ROS 1 documentation. Brief overview of RQt (from a Willow Garage intern blog post).
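To give a flavor of the Python plugin route mentioned above, here is a rough skeleton in the style of existing rqt plugins. Treat it as a sketch: the class and object names are placeholders, and a real plugin additionally needs a plugin.xml declaration and a package export (not shown) before rqt will discover it.

```python
from python_qt_binding.QtWidgets import QLabel
from rqt_gui_py.plugin import Plugin


class HelloPlugin(Plugin):
    """Minimal dockable widget shown inside the rqt main window."""

    def __init__(self, context):
        super(HelloPlugin, self).__init__(context)
        self.setObjectName('HelloPlugin')

        self._widget = QLabel('Hello from an rqt plugin')
        self._widget.setObjectName('HelloPluginUi')
        context.add_widget(self._widget)      # dock the widget in the rqt window

    def shutdown_plugin(self):
        # Release resources (publishers, timers, ...) here when rqt closes the plugin.
        pass
```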
https://docs.ros.org/en/crystal/Tutorials/RQt-Overview-Usage.html
2022-08-08T07:55:41
CC-MAIN-2022-33
1659882570767.11
[]
docs.ros.org
You can create site groups to manage several sites at once. Site groups allow you to reuse a group of sites when creating policies. You can group sites any way you choose, for example, by geographic location or by site purpose (public Wi-Fi or point-of-sale). Note: Each site can only belong to one site group. - In the top navigation bar, click and select Site Groups. - Click to create a new site group, select an existing site group and click Edit. - Enter the site group name and description. - Add sites to the site group: - In the Search command line, start typing the name of the site you want to add. Sites that are currently part of the site group are indicated in grey. Selecting one will show you where the site is in your list. - Select from the list of sites that appear. - If you add a site in error, click the X next to the site name to remove it. - To remove all of the sites from the group, type /clear. - Click Save. To delete a site group, select it and click Delete. If a site group is associated with a policy, you must remove it from the policy before you can delete the site group.
https://docs.bluecatnetworks.com/r/DNS-Edge-User-Guide-Fleet-Service-Point/Site-groups
2022-08-08T07:03:01
CC-MAIN-2022-33
1659882570767.11
[]
docs.bluecatnetworks.com
Concepts Twelve-Factor Applications In the modern era, software is commonly delivered as a service. These services are commonly referred as “web applications”, or “software-as-a-service”. The Twelve-Factor App is a methodology for building modern web applications that can be deployed at scale following modern developer best practices. Twelve-factor is a valuable synthesis of years of experience deploying software-as-a-service applications at scale in the wild, particularly on platforms like Heroku, Cloud Foundry, and the now-defunct Deis Workflow. The maintainers of the Hippo project have been directly involved in the development and deployment of countless web applications, and some of the maintainers are from the original Deis team. Hippo is designed to run applications that adhere to the Twelve-Factor App methodology and best practices. HTTP handlers and WebAssembly An important workload in event-driven environments is represented by HTTP applications, and Hippo has built-in support for creating and running HTTP components. At the current writing of this document, WebAssembly modules are single-threaded. As a result, WebAssembly modules cannot run as a standalone web server without blocking the main thread. To work around this limitation, Hippo deploys applications as HTTP handlers using Spin. The HTTP trigger in Spin is a web server. It listens for incoming requests and based on the application configuration, it routes them to an executor which instantiates the appropriate component, executes its entry point function, then returns an HTTP response. As more capabilities are provided to the WebAssembly runtime, we will re-evaluate this architecture and provide more capabilities to developers. Bindle Bindle is the term for a versioned package that can contain multiple objects of different types, or aggregate object storage. Each item in a bindle is called a parcel. Parcels can be any arbitrary data such as: - WebAssembly modules - Templates - Web files such as HTML, CSS, JavaScript, and images - Machine learning models - Any other files that your application relies upon A bindle for a website could look like this: my-web-app 1.2.3 |- index.html |- style.css |- library.js |- pretty-picture.jpg Bindles are not just an alternative to a zip file. They can also express groups and relationships between its parcels. By associating related parcels into groups, a client can make decisions about which parcels it needs to download, and only retrieve what it needs. A parcel can be associated with more than one group, which is useful for common components. Take for example a fantasy football prediction application, with three groups defined: - Frontend UI - Backend that uses a machine learning model - Backend that uses a statistical prediction model The frontend group would contain HTML, images, and JavaScript files and is a required group. The frontend specifies that it requires a backend, which could be the “machine-learning” backend or the “statistical” backend. When the application is run, the frontend group and its parcels are downloaded, and depending on the client’s configuration, one of the backend groups and their parcels are downloaded as well. A client with plenty of time and resources might select the machine learning backend, while a constrained client might pick the faster statistical formulas. Individual parcels in a bindle are content addressable and can be retrieved independently. Parcels can be cached by the client and reused across bindles. 
For example, if three bindles all contained jquery v3.6.0, then the parcel is downloaded when the first bindle is run, and it is retrieved from the cache when the other two bindles are run. Bindle Server A Bindle server provides storage for bindles. Bindle servers provide tools for uploading, searching, and downloading bindles. Additionally, they provide signature-based verification and provenance information so that you can assess the reliability of a bindle. A particular bindle is identified by its name and version. For example, hello/1.2.3 is a valid reference for a bindle, while hello is not. Often, bindle names are qualified with additional information, so it is not uncommon to see bindles with references like example.com/herbert_the_hippo/hello/1.2.3 Once a bindle is uploaded to a Bindle server, that bindle is immutable. It cannot be changed. For example, if some part of hello/1.2.3 is changed locally, you will need to change the version before pushing it to a Bindle server. Bindle CLI Bindle does include a CLI (called bindle) for working directly with Bindle servers. You can use this tool to upload and download bindles, and also to search a Bindle server. Hippo does not require the bindle CLI. Instead, it provides its own CLI for working with Bindle. Hippo Server The Hippo server is a Platform as a Service (PaaS) layer for creating WebAssembly-based micro-services and web applications. It provides a browser-based portal, an API for the client CLI, and the back-end management features to work with Bindle servers, load balancers, and Spin. Hippo CLI The hippo command-line tool. Spin Spin is a framework for building and running event-driven micro-service applications with WebAssembly components. Spin executes the component(s) as a result of events being generated by the trigger(s) defined in the spin.toml file.
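To make the group and content-addressing behaviour described above concrete, here is a small illustrative sketch in Python. The group names, parcel payloads, and the ParcelCache class are all hypothetical; this is not the Bindle client API, only a model of the prose: parcels are cached by their digest so a shared parcel is downloaded once, and a client pulls the required frontend group plus whichever backend group it selects.

```python
# Hypothetical model of two Bindle ideas: content-addressed parcels
# (cache by digest, download once) and group selection (fetch only the
# groups the client needs). Not the Bindle wire format or client API.
import hashlib

class ParcelCache:
    """Caches parcel bytes by their content digest."""
    def __init__(self):
        self._store = {}
        self.downloads = 0

    def fetch(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._store:       # only the first bindle pays the cost
            self._store[digest] = data
            self.downloads += 1
        return digest

# Hypothetical bindle: group name -> list of parcel payloads.
bindle = {
    "frontend":     [b"index.html", b"app.js", b"style.css"],   # required group
    "ml-backend":   [b"model.bin", b"infer.wasm"],
    "stat-backend": [b"formulas.wasm"],
}

def pull(bindle, cache, chosen_backend):
    """Download the required frontend group plus one chosen backend group."""
    for group in ("frontend", chosen_backend):
        for parcel in bindle[group]:
            cache.fetch(parcel)

cache = ParcelCache()
pull(bindle, cache, "stat-backend")   # constrained client picks the cheap backend
pull(bindle, cache, "stat-backend")   # second run: everything already cached
print(cache.downloads)                # 4, not 8
```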
https://docs.hippofactory.dev/topics/concepts/
2022-08-08T08:14:17
CC-MAIN-2022-33
1659882570767.11
[]
docs.hippofactory.dev
inginious-autotest¶ Assistant to automatically test the content and the format of the task.yaml files of a course, and to check whether the output of the submission.test files corresponding to submissions with the new test is consistent with the output of the test in the inginious instance. The submission.test files for a task are located in a test/ folder, which is a subdirectory of the task directory at the same level as the task.yaml. inginious-autotest [-h] [--logging] [-f FILE] [--ptype PTYPE [PTYPE ...]] task_dir course_dir - task_dir¶ Path to the courses directory of inginious; corresponds to the field task_directory in the configuration.yaml
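As a rough illustration of the layout described above (each task directory holding a task.yaml and an optional test/ folder with submission test files), the following Python sketch walks a course directory and reports which tasks carry test files. The paths and globbing pattern are assumptions drawn from the prose; this is not part of the INGInious code base.

```python
# Illustrative only: find tasks that have a test/ folder with *.test files,
# following the directory layout described in the text.
from pathlib import Path

def tasks_with_tests(course_dir: str):
    course = Path(course_dir)
    for task_yaml in course.glob("*/task.yaml"):
        task_dir = task_yaml.parent
        tests = sorted((task_dir / "test").glob("*.test"))
        if tests:
            yield task_dir.name, [t.name for t in tests]

if __name__ == "__main__":
    # "courses/mycourse" is a placeholder path, not a real INGInious course
    for task, tests in tasks_with_tests("courses/mycourse"):
        print(f"{task}: {len(tests)} submission test file(s): {tests}")
```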
https://docs.inginious.org/en/latest/admin_doc/commands_doc/inginious-autotest.html
2022-08-08T07:40:04
CC-MAIN-2022-33
1659882570767.11
[]
docs.inginious.org
Guidelines for Writing T4 Text Templates Note This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, see Visual Studio documentation. We recommend upgrading to the latest version of Visual Studio. Download it here These general guidelines might be helpful if you are generating program code or other application resources in Visual Studio. They are not fixed rules. Guidelines for Design-Time T4 Templates Design-time T4 templates are templates that generate code in your Visual Studio project at design time. For more information, see Design-Time Code Generation by using T4 Text Templates. Generate variable aspects of the application. Code generation is most useful for those aspects of the application that might change during the project, or will change between different versions of the application. Separate these variable aspects from the more invariant aspects so that you can more easily determine what has to be generated. For example, if your application provides a Web site, separate the standard page serving functions from the logic that defines the navigation paths from one page to another. Encode the variable aspects in one or more source models. A model is a file or database that each template reads to obtain specific values for variable parts of the code that is to be generated. Models can be databases, XML files of your own design, diagrams, or domain-specific languages. Typically, one model is used to generate many files in a Visual Studio project. Each file is generated from a separate template. You can use more than one model in a project. For example, you might define a model for navigation between Web pages, and a separate model for the layout of the pages. Focus the model on the users' needs and vocabulary, not on your implementation. For example, in a Web site application, you would expect the model to refer to Web pages and hyperlinks. Ideally, choose a form of presentation that suits the kind of information that the model represents. For example, a model of navigation paths through a Web site could be a diagram of boxes and arrows. Test the generated code. Use manual or automated tests to verify that the resulting code works as the users require. Avoid generating tests from the same model from which the code is generated. In some cases, general tests can be performed on the model directly. For example, you could write a test that ensures that every page in the Web site can be reached by navigation from any other. Allow for custom code: generate partial classes. Allow for code that you write by hand in addition to the generated code. It is unusual for a code generation scheme to be able to account for all possible variations that might arise. Therefore, you should expect to add to or override some of the generated code. Where the generated material is in a .NET language such as Visual C# or Visual Basic, two strategies are especially useful: The generated classes should be partial. This lets you to add content to the generated code. Classes should be generated in pairs, one inheriting from the other. The base class should contain all the generated methods and properties, and the derived class should contain only the constructors. This allows your hand-written code to override any of the generated methods. In other generated languages such as XML, use the <#@include#>directive to make simple combinations of hand-written and generated content. 
In more complex cases, you might have to write a post-processing step that combines the generated file with any hand-written files. Move common material into include files or run-time templates To avoid repeating similar blocks of text and code in multiple templates, use the <#@ include #>directive. For more information, see T4 Include Directive. You can also build run-time text templates in a separate project, and then call them from the design-time template. To do this, use the <#@ assembly #>directive to access the separate project. Consider moving large blocks of code into a separate assembly. If you have large code blocks and class feature blocks, it might be useful to move some of this code into methods that you compile in a separate project. You can use the <#@ assembly #>directive to access the code in the template. For more information, see T4 Assembly Directive. You can put the methods in an abstract class that the template can inherit. The abstract class must inherit from Microsoft.VisualStudio.TextTemplating.TextTransformation. For more information, see T4 Template Directive. Generate code, not configuration files One method of writing a variable application is to write generic program code that accepts a configuration file. An application written in this manner is very flexible, and can be reconfigured when the business requirements change, without rebuilding the application. However, a drawback of this approach is that the application will perform less well than a more specific application. Also, its program code will be more difficult to read and maintain, partly because it has always to deal with the most generic types. By contrast, an application whose variable parts are generated before compilation can be strongly typed. This makes it much easier and more reliable to write hand-written code and integrate it with the generated parts of the software. To obtain the full benefit of code generation, try to generate program code instead of configuration files. Use a Generated Code folder Place the templates and the generated files in a project folder named Generated Code, to make it clear that these are not files that should be edited directly. If you create custom code to override or add to the generated classes, place those classes in a folder that is named Custom Code. The structure of a typical project looks like this: MyProject Custom Code Class1.cs Class2.cs Generated Code Class1.tt Class1.cs Class2.tt Class2.cs AnotherClass.cs Guidelines for Run-Time (Preprocessed) T4 Templates Move common material into inherited templates You can use inheritance to share methods and text blocks between T4 text templates. For more information, see T4 Template Directive. You can also use include files that have run-time templates. Move large bodies of code into a partial class. Each run-time template generates a partial class definition that has the same name as the template. You can write a code file that contains another partial definition of the same class. You can add methods, fields, and constructors to the class in this manner. These members can be called from the code blocks in the template. An advantage of doing this is that the code is easier to write, because IntelliSense is available. Also, you can achieve a better separation between the presentation and the underlying logic. For example, in MyReportText.tt: The total is: <#= ComputeTotal() #> In MyReportText-Methods.cs: private string ComputeTotal() { ... 
} Allow for custom code: provide extension points Consider generating virtual methods in <#+ class feature blocks #>. This allows a single template to be used in many contexts without modification. Instead of modifying the template, you can construct a derived class which supplies the minimum additional logic. The derived class can be either regular code, or it can be a run-time template. For example, in MyStandardRunTimeTemplate.tt: This page is copyright <#= CompanyName() #>. <#+ protected virtual string CompanyName() { return ""; } #> In the code of an application: class FabrikamTemplate : MyStandardRunTimeTemplate { protected override string CompanyName() { return "Fabrikam"; } } ... string PageToDisplay = new FabrikamTemplate().TextTransform(); Guidelines for All T4 Templates Separate data-gathering from text generation Try to avoid mixing computation and text blocks. In each text template, use the first <# code block #> to set variables and perform complex computations. From the first text block down to the end of the template or the first <#+ class feature block #>, avoid long expressions, and avoid loops and conditionals unless they contain text blocks. This practice makes the template easier to read and maintain. Don’t use .tt for include files Use a different file name extension such as .ttinclude for include files. Use .tt only for files that you want to be processed either as run-time or design-time text templates. In some cases, Visual Studio recognizes .tt files and automatically sets their properties for processing. Start each template as a fixed prototype. Write an example of the code or text that you want to generate, and make sure that it is correct. Then change its extension to .tt and incrementally insert code that modifies the content by reading the model. Consider using typed models. Although you can create an XML or database schema for your models, it might be useful to create a domain specific language (DSL). A DSL has the advantage that it generates a class to represent each node in the schema, and properties to represent the attributes. This means that you can program in terms of the business model. For example: Team Members: <# foreach (Person p in team.Members) { #> <#= p.Name #> <# } #> Consider using diagrams for your models. Many models are most effectively presented and managed simply as text tables, especially if they are very large. However, for some kinds of business requirements, it is important to clarify complex sets of relationships and work flows, and diagrams are the best suited medium. An advantage of a diagram is that it is easy to discuss with users and other stakeholders. By generating code from a model at the level of business requirements, you make your code more flexible when the requirements change. UML class and activity diagrams can often be adapted for these purposes. You can also design your own type of diagram as a domain-specific language (DSL). Code can be generated from both UML and DSLs. For more information, see Analyzing and Modeling Architecture and Analyzing and Modeling Architecture. See Also Design-Time Code Generation by using T4 Text Templates Run-Time Text Generation with T4 Text Templates
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2015/modeling/guidelines-for-writing-t4-text-templates?view=vs-2015
2022-08-08T08:37:42
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
Crate drm_fourcc[−][src] Expand description DrmFourcc is an enum representing every pixel format supported by DRM (as of kernel version 5.10.0). A fourcc is four bytes of ascii representing some data format. This enum contains every fourcc representing a pixel format supported by DRM, the Linux Direct Rendering Manager. The names of pixel formats generally provide clues as to how they work, for more information you may find this guide helpful. To get the bytes of the fourcc representing the format, cast to u32. assert_eq!(DrmFourcc::Xrgb8888 as u32, 875713112); To get the string form of the fourcc, use ToString::to_string. assert_eq!(DrmFourcc::Xrgb8888.to_string(), "XR24"); We also provide a type for representing a fourcc/modifier pair let format = DrmFormat { code: DrmFourcc::Xrgb8888, modifier: DrmModifier::Linear, }; The enums are autogenerated from the canonical list in the Linux source code. Features serde- Derive Serialize/Deserialize where it makes sense build_bindings- Re-generate autogenerated code. Useful if you need varients added in a more recent kernel version. Structs Wraps some u32 that isn’t a DRM fourcc we recognize Wraps some u64 that isn’t a DRM modifier we recognize Wraps some u8 that isn’t a DRM vendor we recognize
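As a cross-check of the as u32 cast shown above: a DRM fourcc is just the four ASCII bytes packed little-endian into a 32-bit value. A short Python sketch (not part of the crate) reproduces the number quoted for Xrgb8888:

```python
# Pack a 4-character fourcc into its u32 value, least significant byte first.
def fourcc(code: str) -> int:
    assert len(code) == 4
    b = code.encode("ascii")
    return b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24)

print(fourcc("XR24"))                        # 875713112, matching DrmFourcc::Xrgb8888 as u32
print(fourcc("XR24").to_bytes(4, "little"))  # b'XR24' round-trips back to the string form
```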
https://docs.rs/drm-fourcc/2.2.0/drm_fourcc/
2022-08-08T07:44:05
CC-MAIN-2022-33
1659882570767.11
[]
docs.rs
VFIO virtual device¶ Device types supported: - KVM_DEV_TYPE_VFIO Only one VFIO instance may be created per VM. The created device tracks VFIO groups in use by the VM and features of those groups important to the correctness and acceleration of the VM. As groups are enabled and disabled for use by the VM, KVM should be updated about their presence. When registered with KVM, a reference to the VFIO-group is held by KVM. - Groups: KVM_DEV_VFIO_GROUP - KVM_DEV_VFIO_GROUP attributes: - KVM_DEV_VFIO_GROUP_ADD: Add a VFIO group to VFIO-KVM device tracking kvm_device_attr.addr points to an int32_t file descriptor for the VFIO group. - KVM_DEV_VFIO_GROUP_DEL: Remove a VFIO group from VFIO-KVM device tracking kvm_device_attr.addr points to an int32_t file descriptor for the VFIO group. - KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE: attaches a guest visible TCE table allocated by sPAPR KVM. kvm_device_attr.addr points to a struct: struct kvm_vfio_spapr_tce { __s32 groupfd; __s32 tablefd; }; where: @groupfd is a file descriptor for a VFIO group; @tablefd is a file descriptor for a TCE table allocated via KVM_CREATE_SPAPR_TCE.
https://docs.kernel.org/virt/kvm/devices/vfio.html
2022-08-08T07:53:41
CC-MAIN-2022-33
1659882570767.11
[]
docs.kernel.org
4.6. Required support libraries Open MPI requires the following support libraries with the minimum listed versions: Since these support libraries are fundamental to Open MPI’s operation, they are directly incorporated into Open MPI’s configure, build, and installation process. More on this below. 4.6.1. Library dependencies These support libraries have dependencies upon each other: Open MPI required support library dependency graph. The higher-level boxes depend on the lower-level boxes. Specifically: Open MPI depends on PRRTE, PMIx, Hwloc, and Libevent (i.e., everything). PRRTE depends on PMIx, Hwloc, and Libevent (i.e., everything except Open MPI). PMIx depends on Hwloc and Libevent. Hwloc does not depend on anything. Libevent does not depend on anything. At run time, it is critical that the run-time linker loads exactly one copy of each of these libraries. Note The required support libraries can have other dependencies, but for simplicity and relevance to building Open MPI, those other dependencies are not discussed here. 4.6.2. Potential problems Problems can (will) arise if multiple different copies of the above shared libraries are loaded into a single process. For example, consider if: Loading the Open MPI shared library causes the loading of Libevent shared library vA.B.C. But then the subsequent loading of the PMIx shared library causes the loading of Libevent shared library vX.Y.Z. Since there are now two different versions of the Libevent shared library loaded into the same process (yes, this can happen!), unpredictable behavior can (will) occur. Many variations on this same basic erroneous scenario are possible. All of them are bad, and can be extremely difficult to diagnose. 4.6.3. Avoiding the problems A simple way to avoid these problems is to configure your system such that it has exactly one copy of each of the required support libraries. Important If possible, use your OS / environment’s package manager to install as many of these support libraries — including their development headers — as possible before invoking Open MPI’s configure script. Not all package managers provide all of the required support libraries. But even if your package manager installs — for example — only Libevent and Hwloc, that somewhat simplifies the final Open MPI configuration, and therefore avoids some potentially erroneous configurations. 4.6.4. How configure finds the required libraries In an attempt to strike a balance between end-user convenience and flexibility, Open MPI bundles these four required support libraries in its official distribution tarball. Generally, if Open MPI cannot find a required support library, it will automatically configure, build, install, and use its bundled version as part of the main Open MPI configure, build, and installation process. Put differently: Open MPI’s configure script will examine the build machine and see if it can find each of the required support header files and libraries. If it cannot find them, it will attempt to fall back and use the corresponding bundled support library instead. Important Note, however, that configure is smart enough to understand the dependencies between the required support libraries. Specifically: If configure finds the development headers and libraries for a given support library already installed on the system, then it will ignore both the corresponding bundled support library, and it will also ignore all bundled support libraries that are below it in the dependency graph shown above. 4.6.4.1. 
Build example 1 configure finds the PRRTE development headers and libraries in /usr/local. This will cause the following to occur: configurewill ignore the PRRTE library that is bundled in the Open MPI source tree and will use the PRRTE that is already installed in /usr/local. configurewill also ignore the bundled PMIx, Hwloc, and Libevent libraries in the Open MPI source tree. If configureis unable to find header files and libraries for PMIx, Hwloc, and Libevent elsewhere on the build machine (i.e., assumedly the same PMIx, Hwloc, and Libevent than the PRRTE in /usr/localis using), this is an error: configurewill abort, and therefore refuse to build Open MPI. 4.6.4.2. Build example 2 configure does not find PRRTE on the build machine, but does find PMIx development headers and libraries in /opt/local. This will cause the following to occur: configurewill set up to build the PRRTE library that is bundled in the Open MPI source tree. configurewill ignore the PMIx library that is bundled in the Open MPI source tree and will use the PMIx that is already installed in /opt/local. configurewill also ignore the bundled Hwloc and Libevent libraries in the Open MPI source tree. If configureis unable to find header files and libraries for Hwloc and Libevent elsewhere on the build machine (i.e., assumedly the same Hwloc and Libevent than the PMIx in /opt/localis using), this is an error: configurewill abort, and therefore refuse to build Open MPI. 4.6.4.3. Build example 3 configure only finds the development headers and libraries for Libevent on the build machine. This will cause the following to occur: configurewill set up to build the PRRTE, PMIx, and Hwloc libraries that are bundled in the Open MPI source tree. configurewill ignore the Libevent library that is bundled in the Open MPI source tree and will use the Libevent that is already installed. 4.6.5. Overriding configure behavior If configure’s default searching behavior is not sufficient for your environment, you can use command line options to override its default behavior. For example, if PMIx and/or PRRTE are installed such that the default header file and linker search paths will not find them, you can provide command line options telling Open MPI’s configure where to search. Here’s an example configure invocation where PMIx and PRRTE have both been installed to /opt/open-mpi-stuff: ./configure --prefix=$HOME/openmpi-install \ --with-pmix=/opt/open-mpi-stuff \ --with-prrte=/opt/open-mpi-stuff ... As another example, if you do not have root-level privileges to use the OS / environment package manager, and if you have a simple MPI application (e.g., that has no external library dependencies), you may wish to configure Open MPI something like this: ./configure --prefix=$HOME/openmpi-install \ --with-libevent=internal --with-hwloc=internal \ --with-pmix=internal --with-prrte=internal ... The internal keywords force configure to use all four bundled versions of the required libraries. Danger Be very, very careful when overriding configure’s default search behavior for these libraries. Remember the critical requirement: that Open MPI infrastructure and applications load exactly one copy of each support library. For simplicity, it may be desirable to ensure to use exactly the support libraries that Open MPI was compiled and built against. For example, using the Open MPI installed from the sample configure line (above), you may want to prefix your run-time linker search path (e.g., LD_LIBRARY_PATH on Linux) with $HOME/openmpi-install/lib. 
This will ensure that linker finds the four support libraries from your Open MPI installation tree, even if other copies of the same support libraries are present elsewhere on your system. 4.6.6. (Strong) Advice for packagers If you are an Open MPI packager, we strongly suggest that your Open MPI package should not include Hwloc, Libevent, PMIx, or PRRTE. Instead, it should depend on independently-built versions of these packages. You may wish to configure Open MPI with something like the following: ./configure --with-libevent=external --with-hwloc=external \ --with-pmix=external --with-prrte=external ... The external keywords will force configure to ignore all the bundled libraries and only look for external versions of these support libraries. This also has the benefit of causing configure to fail if it cannot find the required support libraries outside of the Open MPI source tree — a good sanity check to ensure that your package is correctly relying on the independently-built and installed versions. See this section for more information about the required support library --with-FOO command line options.
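One practical way to act on the "exactly one copy of each support library" requirement is to inspect what the run-time linker resolves for a given binary. The sketch below is a rough, Linux-only diagnostic idea and not an Open MPI tool; it shells out to ldd, groups the resolved libraries by the support-library prefixes named above, and warns if more than one distinct copy shows up. The binary name and the parsing of ldd output are assumptions.

```python
# Rough diagnostic sketch: flag duplicate copies of the required support
# libraries in a binary's resolved dependencies. Linux-only, simplistic
# parsing, and it will not see libraries pulled in later via dlopen().
import subprocess
from collections import defaultdict

SUPPORT_LIBS = ("libevent", "libhwloc", "libpmix", "libprrte")

def loaded_copies(binary: str):
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    found = defaultdict(set)
    for line in out.splitlines():
        parts = line.strip().split("=>")
        name = parts[0].strip()
        path = parts[1].split("(")[0].strip() if len(parts) > 1 else ""
        for lib in SUPPORT_LIBS:
            if name.startswith(lib):
                found[lib].add(path or name)
    return found

if __name__ == "__main__":
    for lib, paths in loaded_copies("ring_c").items():   # hypothetical MPI binary name
        status = "OK" if len(paths) == 1 else "WARNING: more than one copy resolved"
        print(f"{lib}: {sorted(paths)} -> {status}")
```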
https://docs.open-mpi.org/en/main/installing-open-mpi/required-support-libraries.html
2022-08-08T07:09:46
CC-MAIN-2022-33
1659882570767.11
[array(['../_images/required-support-libraries-dependency-graph.png', '../_images/required-support-libraries-dependency-graph.png'], dtype=object) ]
docs.open-mpi.org
Sci-Fi Atmosphere Volume One is an ever-growing sound library and contains atmospheric audio loops made explicitly for Unreal Engine artists. Sci-Fi Atmosphere Volume One is an ever-growing sound library. It currently contains more than 100 very dark and atmospheric audio loops. By purchasing this sound library, you will be provided with the project file, which contains different platforms based on the categories of the sound library. There you'll not only be able to experience the audio in an "in-game" environment, but also have the option to create a favourites list to help you find your preferred sounds for your art. You can also switch between the mono and the stereo sound. Previews of the sounds can be found here. Feel free to join my Discord server if you have any questions or suggestions for future updates. Technical Details Access to more than 100 custom-made audio loops. (Version 1.0.6) All sounds come twice, as mono and stereo. Sample rate: 48 kHz, bit depth: 24-bit. All sounds loop seamlessly. Minutes of audio provided: ~90 minutes. (Version 1.0.6)
https://docs.unrealengine.com/marketplace/ko/product/sci-fi-atmosphere-volume-one?lang=ko
2022-08-08T08:16:21
CC-MAIN-2022-33
1659882570767.11
[]
docs.unrealengine.com
Administration. Administrator responsibilities Lido for Solana is implemented as a program called Solido, that runs on the Solana blockchain. Programs on Solana have an upgrade authority: an address that can replace the program with a newer version. This upgrade authority has a lot of power, especially for a program like Solido that manages user’s funds. After all, the upgrade authority could deploy a new program that withdraws all staked SOL into an address of their choice. Therefore, it is essential that the upgrade authority is trustworthy. It is possible on Solana to disable upgrades for a program. In that case nobody will ever be able to change it, so there is no party to trust — you only need to trust the code itself. This is a double-edged sword: if the code contains a critical bug, then nobody can fix it. This makes disabling upgrades dangerous, potentially more risky than trusting an upgrade authority. Especially for early versions of a program, we need a way to upgrade. Aside from the program code itself, the Solido program has parameters, whose values must be set by somebody: - How much fees does it take, and how are those split up among the treasury, the developer, and validators? - Which validators are part of the validator set? In the program, we refer to the address that can sign parameter changes as the manager. The role of the administrator, is to act as the manager for parameter changes, and to act as the upgrade authority for program changes. Multisig administration Different administration methods exists, each with different advantages and disadvantages. A single person could act as the administrator. This has very low overhead, and the administrator can move quickly when there is a need to deploy a critical bugfix. However, it also places a high degree of trust in a single person. On the opposide a multisig, a program that executes administrative tasks after m out of n members have approved. For m greater than one, no single party can unilaterally execute administrative tasks, but we only need to coordinate with m parties to get something done, not with a majority of LDO holders. Multisig details For Lido for Solana, we use the Serum Multisig program, and we require approval from 4 out of 7 members. The members are: The addresses of the multisig members are listed on the deployments page. The multisig instance is used both as the upgrade authority of the Solido program, and as the manager of the Solido instance. For initial testing on testnet, Bonafida participated as one of the seven multisig members. For the mainnet deployment, ChainLayer has taken their place. During the initial mainnet deployment, Solana Foundation participated as one of the seven members. They were succeeded by Mercurial after the v1.0.0 launch. Aside from approving parameter changes to onboard validators, the multisig members also verify that the deployed Solido program can be reproduced, to ensure that the on-chain program was built from the publicly available source code, and contains no back doors. Multisig origin The 4-out-of-7 multisig was established as follows: - Chorus One reached out to all participants, and verified their identities on Telegram and GitHub. - Participants shared their public keys on GitHub. - Chorus one deployed the Serum Multisig program, and created an instance that has the 7 public keys as owners. The upgrade authority of the multisig program was set to the multisig instance itself. 
- Participants verified that they could reproduce the program, and that the list of public keys matched the keys shared earlier on GitHub.
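The 4-out-of-7 rule itself is easy to model. The toy sketch below only illustrates the threshold logic described above; the member names are placeholders, and the real Serum Multisig is an on-chain Solana program rather than anything resembling this Python code.

```python
# Toy model of an m-out-of-n approval rule (here 4 out of 7).
MEMBERS = {
    "member1", "member2", "member3", "member4",
    "member5", "member6", "member7",
}                      # placeholder names, not the real participants
THRESHOLD = 4          # m in "m out of n"

def can_execute(approvals: set) -> bool:
    valid = approvals & MEMBERS          # ignore signatures from non-members
    return len(valid) >= THRESHOLD

print(can_execute({"member1", "member2", "member3"}))               # False: only 3 approvals
print(can_execute({"member1", "member2", "member3", "member5"}))    # True: threshold reached
```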
https://docs.solana.lido.fi/administration/
2022-08-08T07:45:32
CC-MAIN-2022-33
1659882570767.11
[]
docs.solana.lido.fi
Table of Contents Product Index Every hardworking man has Grit… If your character is made of true Grit and ready to rumble, or caring and ready to set off to a hard day's work, he deserves an Outfit to match his humble power. The dForce Grit Outfit for Genesis 8 Males is comprised of Shirt, Pants, Gloves, and Shoes. dForce Grit comes with tons of supported shapes and Widen morphs to fit Genesis 8 Male in almost all poses. Get the dForce Grit Outfit for your next Sci-fi or Fantasy render! Note: the product is made for dForce and Genesis 8 Male but works well in most poses as conforming.
http://docs.daz3d.com/doku.php/public/read_me/index/72839/start
2022-08-08T07:47:57
CC-MAIN-2022-33
1659882570767.11
[]
docs.daz3d.com
Terraform test suite How to run the Terraform test suite. We regularly run the test suite of the Terraform AWS provider against LocalStack to test the compatibility of LocalStack with Terraform. To that end, we have a dedicated repository localstack/localstack-terraform-test, where you can also find instructions on how to run the tests. Last modified October 8, 2021: overhaul of developer guide (435e4ca5)
https://docs.localstack.cloud/developer-guide/terraform-tests/
2022-08-08T07:20:52
CC-MAIN-2022-33
1659882570767.11
[]
docs.localstack.cloud
Octave code. The number of return arguments, their size, and their class depend on the expression entered.
https://docs.octave.org/interpreter/Terminal-Input.html
2022-08-08T07:30:51
CC-MAIN-2022-33
1659882570767.11
[]
docs.octave.org
Notice: Please keep in mind that Shopify only allows you to customize these basic options for your store; there is nothing we can do beyond that, because the Checkout page is managed by Shopify. So if you want to customize it more deeply, such as adding sections or features to the Checkout page, please contact the Shopify Help Center for help; you cannot customize it from the theme.
https://docs.the4.co/kalles-4/theme-settings/checkout
2022-08-08T06:34:12
CC-MAIN-2022-33
1659882570767.11
[]
docs.the4.co
✽ Submissions are open! Want to submit your project? We'd love to hear about what you've built! Check out our Community Guidelines to learn more about how to get in touch with the team. We're proud and humbled to list publicly available tools and services that have been created by the Tomorrow.io developer community. They are split up into four main categories: Some of these projects, marked with official, are maintained as part of the Tomorrow.io API GitHub organization by our very own team. Contributors, feedback and requests are always more than welcome.
https://docs.tomorrow.io/reference/community-projects
2022-08-08T08:07:07
CC-MAIN-2022-33
1659882570767.11
[]
docs.tomorrow.io
The friction coefficient used when an object is lying on a surface. Must be greater than or equal to zero. Natural materials will usually have a friction coefficient between 0 (no friction at all, like slippery ice) and 1 (full friction, like rubber). Values larger than 1 are possible, and may be realistic for sticky materials.
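For a general physics illustration of what the coefficient means (this is not Unity API code): the maximum static friction force is the coefficient multiplied by the normal force pressing the object onto the surface, so 0 gives no resistance at all and values above 1 resist with more force than the object's own weight.

```python
# General static friction illustration: F_max = coefficient * normal force.
def max_static_friction(static_coeff: float, normal_force: float) -> float:
    assert static_coeff >= 0, "must be greater than or equal to zero"
    return static_coeff * normal_force

mass, g = 10.0, 9.81                  # a 10 kg object resting on a flat surface
normal = mass * g
for coeff in (0.0, 0.5, 1.0, 1.5):    # icy, moderate, rubber-like, sticky
    print(coeff, round(max_static_friction(coeff, normal), 1), "N to start sliding")
```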
https://docs.unity3d.com/kr/2021.2/ScriptReference/PhysicMaterial-staticFriction.html
2022-08-08T08:14:48
CC-MAIN-2022-33
1659882570767.11
[]
docs.unity3d.com
Unix to Windows API dictionary Wen-ming on my team worked with folks from InteropSystems on a porting dictionary to help ease the pain of porting source code from Unix to Windows. They took the most frequently used Unix calls and provided a piece of sample code demonstrating the equivalent function on Windows (if it exists). Please go to and check it out. The team will certainly appreciate your comments and feedback on the site. Feel free to use the discussion forum on the dictionary website, share your own experiences, and help expand the dictionary.
https://docs.microsoft.com/en-us/archive/blogs/volkerw/unix-to-windows-api-dictionary
2022-08-08T07:16:27
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
onos-e2-sm O-RAN defines the E2 Service Models in the form of “ASN.1” specifications. Each Service Model defines 5 top-level objects: RAN Function Description (from the E2 Node, describes the supported actions and triggers) Event Trigger Definition (contained in a RICSubscriptionRequest, defines the conditions on which the E2 Node should send a RICIndication) Action Definition (contained in a RICSubscriptionRequest, defines the actions on an E2 Node) Indication Header (contained in a RICIndication, describes general parameters of the source E2 Node) Indication Message (contained in a RICIndication, describes the specific parameters requested) Implementation in SD-RAN The onos-e2-sm project provides, for each of these Service Models: A Protobuf translation of these ASN.1 specifications, e.g. e2sm_kpm_ies.proto Go code mapping between Protobuf and ASN.1 PER encoding The implementation can be accessed as either: a Go module, e.g. go get github.com/onosproject/onos-e2-sm/servicemodels/[email protected] (preferred for xApps that only need to access the Proto definitions) or as a Go plugin, e.g. e2sm_kpm.so.1.0.0 (allowing a loose coupling with onos-e2t and ran-simulator) Since dynamically loaded modules in Go must be compiled against the code of the target they will plug in to, there are 2 versions of the plugin Docker file produced: onosproject/service-model-docker-e2sm_kpm-1.0.0 and onosproject/service-model-ransim-e2sm_kpm-1.0.0 Third-party vendors will be able to build their own Service Models and load them into onos-e2t using the plugin method, and will be able to access the translated Protobuf in the corresponding xApps Key Performance Metrics (E2SM_KPM) This is the first E2 Service Model to be handled by SD-RAN - it is for extracting statistics from the E2 Node. The currently supported version is E2SM KPM v2.0.3. E2SM KPM v1 is partially implemented and no longer supported. There is also an implementation of the KPMv2 SM with the Go-based APER library (which produces APER bytes out of Protobuf). Native Interface (E2SM_NI) While the Proto definitions have been created for this Service Model, the Go mapping code has not been implemented in SD-RAN yet. RAN Control (E2SM_RC_PRE) Pre-standard E2 Service Model with PCI and Neighbor relation table information from E2 Nodes. There is also an implementation of the RC-PRE SM with the Go-based APER library. Mobile HandOver (E2SM_MHO) E2 Service Model for handling the Mobile HandOver use case. There is also an implementation of the MHO SM with the Go-based APER library. RAN Slicing (E2SM_RSM) E2 Service Model for handling the RAN Slicing use case. It was implemented with the Go-based APER library. Development Service Models are created from the ASN.1 models stored at: From these: Protobuf is generated with asn1c -B (requires the specially modified version of the asn1c tool - asn1c) Go code is generated from this Protobuf as an interface/reusable layer, e.g. e2sm_kpm_ie.pb.go C code is generated by the version of the asn1c tool from the O-RAN Software Community with asn1c -fcompound-names -fincludes-quoted -fno-include-deps -findirect-choice -gen-PER -no-gen-OER -D., e.g. E2SM-KPM-IndicationHeader.h Then glue code is generated by hand (at first) using CGO (wrapping the C code in Go), e.g. E2SM-KPM-IndicationHeader.go It’s also possible to use protoc-gen-cgo, a protoc plugin that prints CGo code out of Protobuf. Some hand tweaks still need to be made. To generate the C code with the O-RAN Software Community version of the asn1c tool, it must be installed on your system with sudo make install.
This is because it takes the skeleton files from /usr/local/share/asn1c regardless of where it is run from. The E2AP (E2 Application Protocol) is not a Service Model, and so is kept completely inside the onos-e2t. How to create your own SM? Here you can find a tutorial on how to create your own SM.
https://docs.sd-ran.org/master/onos-e2-sm/README.html
2022-08-08T08:03:06
CC-MAIN-2022-33
1659882570767.11
[]
docs.sd-ran.org
Deploying desktops on virtual machines that are managed by vCenter Server provides all the storage efficiencies that were previously available only for virtualized servers. Using instant clones or View Composer linked clones as desktop machines increases the storage savings because all virtual machines in a pool share a virtual disk with a base image.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon.virtual.desktops.doc/GUID-5E1CED3D-3E99-4511-B735-958F4057C8AF.html
2018-07-15T23:37:45
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
When you use the HPP for your storage devices, set the latency sensitive threshold for the device, so that I/O can avoid the I/O scheduler. By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might create internal queuing, which is not efficient with the high-speed storage devices. You can configure the latency sensitive threshold and enable the direct submission mechanism that helps I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from PSA through the HPP to the device driver. For the direct submission to work properly, the observed average I/O latency must be lower than the latency threshold you specify. If the I/O latency exceeds the latency threshold, the system stops the direct submission and temporarily reverts to using the I/O scheduler. The direct submission is resumed when the average I/O latency drops below the latency threshold again. Procedure - Set the latency sensitive threshold for the device by running the following command: esxcli storage core device latencythreshold set --device=device name --latency-sensitive-threshold=value in milliseconds - Verify that the latency threshold is set: esxcli storage core device latencythreshold list Device Latency Sensitive Threshold -------------------- --------------------------- naa.55cd2e404c1728aa 0 milliseconds naa.500056b34036cdfd 0 milliseconds naa.55cd2e404c172bd6 50 milliseconds - Monitor the status of the latency sensitive threshold. Check VMkernel logs for the following entries: Latency Sensitive Gatekeeper turned on for device device. Threshold of XX msec is larger than max completion time of YYY msec Latency Sensitive Gatekeeper turned off for device device. Threshold of XX msec is exceeded by command completed in YYY msec
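The switching behaviour described above (direct submission while the observed average latency stays below the threshold, temporary fallback to the I/O scheduler when it is exceeded) can be modelled with a short sketch. This is a conceptual illustration only, not ESXi or PSA code, and the sampled latencies are made up.

```python
# Conceptual model of the latency-sensitive gatekeeper behaviour described
# in the text: direct submission toggles off when the average latency
# exceeds the threshold and resumes once it drops below again.
class LatencyGatekeeper:
    def __init__(self, threshold_ms: float):
        self.threshold_ms = threshold_ms
        self.direct_submission = threshold_ms > 0    # 0 ms means the feature is not set

    def observe(self, avg_latency_ms: float) -> str:
        if self.threshold_ms <= 0:
            return "via I/O scheduler (no latency threshold set)"
        if avg_latency_ms > self.threshold_ms:
            self.direct_submission = False           # revert to the I/O scheduler
        elif avg_latency_ms < self.threshold_ms:
            self.direct_submission = True            # resume direct submission
        return "direct to driver" if self.direct_submission else "via I/O scheduler"

gate = LatencyGatekeeper(threshold_ms=50)
for avg in (10, 30, 80, 70, 40, 20):                 # invented average latencies in ms
    print(avg, "ms ->", gate.observe(avg))
```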
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-CD579C5C-74DC-44AB-B404-907C9D385122.html
2018-07-15T23:34:58
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Using the OSPF screen in the vCloud Director tenant portal, you can configure the Open Shortest Path First (OSPF) routing protocol for the dynamic routing capabilities of your advanced edge gateway. A common application of OSPF on an edge gateway in a vCloud Director environment is to exchange routing information between edge gateways in vCloud Director. About this task The NSX edge gateway supports OSPF, an interior gateway protocol that routes IP packets only within a single routing domain. As described in the NSX Administration Guide, configuring OSPF on an NSX edge gateway enables the edge gateway to learn and advertise routes. The edge gateway uses OSPF to gather link state information from available edge gateways and construct a topology map of the network. The topology determines the routing table presented to the Internet layer, which makes routing decisions based on the destination IP address found in IP packets. As a result, . Specify Default Routing Configurations for. Procedure - Launch the tenant portal using the following steps. - Log in to the vCloud Director Web console and navigate to the edge gateway. - Right-click the name of the edge gateway and click Edge Gateway Services in the context menu. The tenant portal opens in a new browser tab and displays the Edge Gateway screen for that edge gateway. - In the tenant portal, navigate to . - If OSPF is not currently enabled, use the OSPF Enabled toggle to enable it. - Configure the OSPF settings according to your organization's needs. At this point, you can click Save changes or continue with configuring area definitions and interface mappings. - Add an OSPF area definition to the on-screen table by clicking the + icon, specifying details for the mapping in the dialog box, and then clicking Keep.Note: By default, the system configures a not-so-stubby area (NSSA) with area ID of 51, and this area is automatically displayed in the area definitions table on the OSPF screen. You can modify or delete this NSSA area if it does not meet your organization's needs. - Click Save changes, so that the newly configured area definitions are available for selection when you add interface mappings. - Add an interface mapping to the on-screen table by clicking the + icon, specifying details for the mapping in the dialog box, and then clicking Keep. These mappings map the edge gateway's interfaces to the areas. - In the dialog box, select the interface you want to map to an area definition. The interface specifies the external network that both edge gateways are connected to. - Select the area ID for the area to map to the selected interface. - (Optional) Change the OSPF settings from the default values to customize them for this interface mapping. When configuring a new mapping, the default values for these settings are displayed. In most cases, it is recommended to retain the default settings. If you do change the settings, make sure that the OSPF peers use the same settings. - Click Keep. - Click Save changes in the OSPF screen. What to do next Configure OSPF on the other edge gateways that you want to exchange routing information with. Add a firewall rule that allows traffic between the OSPF-enabled edge gateways. See Add an Edge Gateway Firewall Rule Using the Tenant Portal for information. Make sure that the route redistribution and firewall configuration allow the correct routes to be advertised. See Configure Route Redistribution Using the Tenant Portal.
https://docs.vmware.com/en/vCloud-Director/8.20/com.vmware.vcloud.tenantportal.doc/GUID-238A6AFB-9004-4AED-8015-FEB2B274C367.html
2018-07-15T23:35:00
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
The views module is used to create and manage SQL schemas and views. The main topics in this chapter are: The following are the definitions for the terms used in this guide: You must have the tde-admin and any-uri roles to create template views and the view-admin role to create range views. Schemas and views are the main SQL data-modeling components used to represent content stored in a MarkLogic Server database to SQL clients. A view is a virtual read-only table that represents data stored in a MarkLogic Server database. Each column in a view is based on an index in the content database, as described in Example Template View. User access to each view is controlled by a set of permissions, as described in Template View Security. There are two types of views: In most situations, you will want to create a template view. Though a range view may be preferable to a template view in some situations, such as for a database already configured with range indexes, they are supported mostly for backwards compatibility with previous versions of MarkLogic. For this reason, most of the dicussion in this guide will be on the use of template views. For details on range views, see Creating Range Views. A schema is a naming context for a set of views and user access to each schema can be controlled with a different set of permissions. Each view in a schema must have a unique name. However, you can have multiple views of the same name in different schemas. For example, you can have three views, named ‘Songs,' each in a different schema with different protection settings. Each view has a scope that defines the documents from which it reads the column data. The view scope constrains the view to documents located in a particular directory (template views only), or to documents in a particular collection. The figure below shows a schema called ‘main' that contains two views, each with a different view scope. The view 'Songs' is constrained to documents that are in the collection and the view 'Names' is constrained to documents that are located in the /my/directory/ directory. As described above, schemas and views are stored as documents in the schema database associated with the content database for which they are defined. The default schema database is named ‘Schemas.' If multiple content databases share a single schema database, each content database will have access to all of the views in the schema database. For example, in the figure below, you have two content databases, Database A and Database B, that both make use of the Schemas database. In this example, you create a single schema, named ‘main,' that contains two views, View1 and View2, on Database A. You then create two views, View3 and View4, on Database B and place them into the ‘main' schema. In this situation, both Database A and Database B will each have access to all four views in the ‘main' schema. A more 'relational' configuration is to assign a separate schema database to each content database. In the figure below, Database A and Database B each have a separate schema database, SchemaA and SchemaB, respectively. In this example, you create a ‘main' schema for each content database, each of which contains the views to be used for its respective content database. The tde-admin and any-uri roles are required in order to insert a template document into the schema database. The tde-view role is required to access a template view. 
Access to views can be further restricted by setting additional permissions on the template document that defines the view. Since the same view can be declared in multiple templates loaded with different permissions, the access to views should be controlled at the column level., there are two views, as illustrated below. John has P1 Permissions, so he can see Columns C1 and C2. Chris has both P1 and P2 Permissions, so he can see Columns C1, C2, and C3. Mary has P2 Permissions, so she can see Columns C1 and C3. For details on how to set document permissions, see Protecting Documents in the Security Guide. The MarkLogic SQL engine does not support documents that make use of element-level security. Any document containing protected elements will be skipped by the indexer. This section provides an example document and a template view used to extract data from the document and present it in the form of a view. Consider a document of the following form: <book> <title subject="oceanography">Sea Creatures</title> <pubyear>2011</pubyear> <keyword>science</keyword> <author> <name>Jane Smith</name> <university>Wossamotta U</university> </author> <body> <name type="cephalopod">Squid</name> Fascinating squid facts... <name type="scombridae">Tuna</name> Fascinating tuna facts... <name type="echinoderm">Starfish</name> Fascinating starfish facts... </body> </book> The following template extracts each element and presents it as a column in a view, named ‘book' in the ‘main' schema. <template xmlns=""> <context>/book</context> <rows> <row> <schema-name>main</schema-name> <view-name>book</view-name> <columns> <column> <name>title</name> <scalar-type>string</scalar-type> <val>title</val> </column> <column> <name>pubyear</name> <scalar-type>date</scalar-type> <val>pubyear</val> </column> <column> <name>keyword</name> <scalar-type>string</scalar-type> <val>keyword</val> </column> <column> <name>author</name> <scalar-type>string</scalar-type> <val>author/name</val> </column> <column> <name>university</name> <scalar-type>string</scalar-type> <val>author/university</val> </column> <column> <name>cephalopod</name> <scalar-type>string</scalar-type> <val>body/name[@type="cephalopod"]</val> </column> <column> <name>scombridae</name> <scalar-type>string</scalar-type> <val>body/name[@type="scombridae"]</val> </column> <column> <name>echinoderm</name> <scalar-type>string</scalar-type> <val>body/name[@type="echinoderm"]</val> </column> </columns> </row> </rows> </template>
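The column-visibility rule from the John/Chris/Mary example above can be expressed as a small sketch: a user sees a column if they hold at least one of the permissions under which a template declaring that column was loaded. The permission-to-column mapping below is inferred from the example outcomes and is illustrative only, not MarkLogic code.

```python
# Which columns a user can see, given the P1/P2 example in the text.
COLUMN_PERMISSIONS = {
    "C1": {"P1", "P2"},   # declared in templates loaded with either permission
    "C2": {"P1"},
    "C3": {"P2"},
}

def visible_columns(user_permissions: set) -> list:
    return sorted(c for c, perms in COLUMN_PERMISSIONS.items()
                  if user_permissions & perms)

print("John ", visible_columns({"P1"}))          # ['C1', 'C2']
print("Chris", visible_columns({"P1", "P2"}))    # ['C1', 'C2', 'C3']
print("Mary ", visible_columns({"P2"}))          # ['C1', 'C3']
```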
http://docs.marklogic.com/guide/sql/intro
2018-07-15T22:45:46
CC-MAIN-2018-30
1531676589022.38
[]
docs.marklogic.com
public class GroovyInternalPosixParser extends Parser
Provides an implementation of the flatten method.
Fields inherited from class org.apache.commons.cli.Parser: cmd
Methods inherited from class org.apache.commons.cli.Parser: checkRequiredOptions, getOptions, getRequiredOptions, parse, parse, parse, parse, processArgs, processOption, processProperties, setOptions
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public GroovyInternalPosixParser()
protected String[] flatten(Options options, String[] arguments, boolean stopAtNonOption)
Implements Parser's abstract flatten method.
http://docs.groovy-lang.org/latest/html/api/org/apache/commons/cli/GroovyInternalPosixParser.html
2017-10-17T03:53:43
CC-MAIN-2017-43
1508187820700.4
[]
docs.groovy-lang.org
This topic describes how to service a DIRQL interrupt. For information about servicing a passive-level interrupt, see Supporting Passive Level Interrupts. Servicing an interrupt consists of two, and sometimes three, steps: Saving volatile information (such as register contents) quickly, in an interrupt service routine that runs at IRQL = DIRQL. Processing the saved volatile information in a deferred procedure call (DPC) that runs at IRQL = DISPATCH_LEVEL. Performing additional work at IRQL = PASSIVE_LEVEL, if necessary. When a device generates a hardware interrupt, the framework calls the driver's interrupt service routine (ISR), which framework-based drivers implement as an EvtInterruptIsr callback function. The EvtInterruptIsr callback function, which runs at the device's DIRQL, must quickly save interrupt information, such as register contents, that will be lost if another interrupt occurs. Typically, the EvtInterruptIsr callback function schedules a deferred procedure call (DPC) to process the saved information later at a lower IRQL (DISPATCH_LEVEL). Framework-based drivers implement DPC routines as EvtInterruptDpc or EvtDpcFunc callback functions. Most drivers use a single EvtInterruptDpc callback function for each type of interrupt. To schedule execution of an EvtInterruptDpc callback function, a driver must call WdfInterruptQueueDpcForIsr from within the EvtInterruptIsr callback function. If your driver creates multiple framework queue objects for each device, you might consider using a separate DPC object and EvtDpcFunc callback function for each queue. To schedule execution of an EvtDpcFunc callback function, the driver must first create one or more DPC objects by calling WdfDpcCreate, typically in the driver's EvtDriverDeviceAdd callback function. Then the driver's EvtInterruptIsr callback function can call WdfDpcEnqueue. Drivers typically complete I/O requests in their EvtInterruptDpc or EvtDpcFunc callback functions. Sometimes a driver must perform some interrupt-servicing operations at IRQL = PASSIVE_LEVEL. In such cases the driver's EvtInterruptDpc or EvtDpcFunc callback function, executing at IRQL = DISPATCH_LEVEL, can schedule execution of one or more framework work items, which run at IRQL = PASSIVE_LEVEL. For an example of a driver that uses work items while servicing device interrupts, see the PCIDRV sample driver.
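The staged pattern described here (save volatile state quickly in the ISR, do the real work later in a DPC) can be modelled in a language-neutral way. The sketch below is purely conceptual: real WDF ISRs and DPCs are C callbacks scheduled by IRQL, not Python threads, and the register snapshot contents are invented.

```python
# Conceptual model only: a fast handler snapshots state and queues it; a
# deferred routine drains the queue and does the heavier processing later.
import queue
import threading

saved_state = queue.Queue()

def isr(register_snapshot: dict):
    """Runs 'at high priority': just save and schedule, no heavy work."""
    saved_state.put(register_snapshot)           # analogous to queueing a DPC

def dpc_worker():
    """Runs later 'at lower priority': drain and process saved snapshots."""
    while True:
        snapshot = saved_state.get()
        if snapshot is None:
            break
        print("processing snapshot:", snapshot)  # e.g. complete I/O requests here

worker = threading.Thread(target=dpc_worker)
worker.start()
for i in range(3):
    isr({"status_reg": 0x80 | i})                # hypothetical register contents
saved_state.put(None)                            # signal the deferred worker to stop
worker.join()
```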
https://docs.microsoft.com/en-us/windows-hardware/drivers/wdf/servicing-an-interrupt
2017-10-17T05:07:13
CC-MAIN-2017-43
1508187820700.4
[]
docs.microsoft.com
Many users of Spring Batch may encounter requirements that are outside the scope of Spring Batch, yet may be efficiently and concisely implemented using Spring Integration. Conversely, Spring Batch users may encounter Spring Batch requirements and need a way to efficiently integrate both frameworks. In this context several patterns and use-cases emerge and Spring Batch Integration will address those requirements. The line between Spring Batch and Spring Integration is not always clear, but there are guidelines that one can follow. Principally, these are: think about granularity, and apply common patterns. Some of those common patterns are described in this reference manual section. Adding messaging to a batch process enables automation of operations, and also separation and strategizing of key concerns. For example a message might trigger a job to execute, and then the sending of the message can be exposed in a variety of ways. Or when a job completes or fails that might trigger a message to be sent, and the consumers of those messages might have operational concerns that have nothing to do with the application itself. Messaging can also be embedded in a job, for example reading or writing items for processing via channels. Remote partitioning and remote chunking provide methods to distribute workloads over an number of workers. Some key concepts that we will cover are: Launching Batch Jobs through Messages Providing Feedback with Informational Messages Externalizing Batch Process Execution Since Spring Batch Integration 1.3, dedicated XML Namespace support was added, with the aim to provide an easier configuration experience. In order to activate the namespace, add the following namespace declarations to your Spring XML Application Context file: <beans xmlns="" xmlns: ... </beans> A fully configured Spring XML Application Context file for Spring Batch Integration may look like the following: <beans xmlns="" xmlns: ... </beans> Appending version numbers to the referenced XSD file is also allowed but, as a version-less declaration will always use the latest schema, we generally don't recommend appending the version number to the XSD name. Adding a version number, for instance, would create possibly issues when updating the Spring Batch Integration dependencies as they may require more recent versions of the XML schema. When starting batch jobs using the core Spring Batch API you basically have 2 options: Command line via the CommandLineJobRunner Programatically via either JobOperator.start() or JobLauncher.run(). For example, you may want to use the CommandLineJobRunner when invoking Batch Jobs using a shell script. Alternatively, you may use the JobOperator directly, for example when using Spring Batch as part of a web application. However, what about more complex use-cases? Maybe you need to poll a remote (S)FTP server to retrieve the data for the Batch Job. Or your application has to support multiple different data sources simultaneously. For example, you may receive data files not only via the web, but also FTP etc. Maybe additional transformation of the input files is needed before invoking Spring Batch. Therefore, it would be much more powerful to execute the batch job using Spring Integration and its numerous adapters. For example, you can use a File Inbound Channel Adapter to monitor a directory in the file-system and start the Batch Job as soon as the input file arrives. 
Additionally you can create Spring Integration flows that use multiple different adapters to easily ingest data for your Batch Jobs from multiple sources simultaneously using configuration only. Implementing all these scenarios with Spring Integration is easy as it allow for an decoupled event-driven execution of the JobLauncher. Spring Batch Integration provides the JobLaunchingMessageHandler class that you can use to launch batch jobs. The input for the JobLaunchingMessageHandler is provided by a Spring Integration message, which payload is of type JobLaunchRequest. This class is a wrapper around the Job that needs to be launched as well as the JobParameters necessary to launch the Batch job. The following image illustrates the typical Spring Integration message flow in order to start a Batch job. The EIP (Enterprise IntegrationPatterns) website provides a full overview of messaging icons and their descriptions. package io.spring.sbi; import org.springframework.batch.core.Job; import org.springframework.batch.core.JobParametersBuilder; import org.springframework.batch.integration.launch.JobLaunchRequest; import org.springframework.integration.annotation.Transformer; import org.springframework.messaging.Message; import java.io.File; public class FileMessageToJobRequest { private Job job; private String fileParameterName; public void setFileParameterName(String fileParameterName) { this.fileParameterName = fileParameterName; } public void setJob(Job job) { this.job = job; } @Transformer public JobLaunchRequest toRequest(Message<File> message) { JobParametersBuilder jobParametersBuilder = new JobParametersBuilder(); jobParametersBuilder.addString(fileParameterName, message.getPayload().getAbsolutePath()); return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters()); } } When a Batch Job is being executed, a JobExecution instance is returned. This instance can be used to determine the status of an execution. If a JobExecution was able to be created successfully, it will always be returned, regardless of whether or not the actual execution was successful. The exact behavior on how the JobExecution instance is returned depends on the provided TaskExecutor. If a synchronous (single-threaded) TaskExecutor implementation is used, the JobExecution response is only returned after the job completes. When using an asynchronous TaskExecutor, the JobExecution instance is returned immediately. Users can then take the id of JobExecution instance ( JobExecution.getJobId()) and query the JobRepository for the job's updated status using the JobExplorer. For more information, please refer to the Spring Batch reference documentation on Querying the Repository. The following configuration will create a file inbound-channel-adapter to listen for CSV files in the provided directory, hand them off to our transformer ( FileMessageToJobRequest), launch the job via the Job Launching Gateway then simply log the output of the JobExecution via the logging-channel-adapter. 
<int:channel <int:channel <int:channel <int-file:inbound-channel-adapter <int:poller </int-file:inbound-channel-adapter> <int:transformer <bean class="io.spring.sbi.FileMessageToJobRequest"> <property name="job" ref="personJob"/> <property name="fileParameterName" value="input.file.name"/> </bean> </int:transformer> <batch-int:job-launching-gateway <int:logging-channel-adapter Now that we are polling for files and launching jobs, we need to configure, for example, our Spring Batch ItemReader to use the file that was found, represented by the job parameter "input.file.name": <bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step"> <property name="resource" value="#{jobParameters['input.file.name']}"/> ... </bean> The main points of interest here are injecting the value of #{jobParameters['input.file.name']} as the Resource property value and setting the ItemReader bean to be of Step scope to take advantage of the late binding support, which allows access to the jobParameters variable. id Identifies the underlying Spring bean definition, which is an instance of either: EventDrivenConsumer PollingConsumer The exact implementation depends on whether the component's input channel is a: SubscribableChannel or PollableChannel auto-startup Boolean flag to indicate that the endpoint should start automatically on startup. The default is true. request-channel The input MessageChannel of this endpoint. reply-channel Message Channel to which the resulting JobExecution payload will be sent. reply-timeout The attribute will default, if not specified, to -1, meaning that by default, the Gateway will wait indefinitely. The value is specified in milliseconds. job-launcher Pass in a custom JobLauncher bean reference. This attribute is optional. If not specified the adapter will re-use the instance that is registered under the id jobLauncher. If no default instance exists an exception is thrown. order Specifies the order for invocation when this endpoint is connected as a subscriber to a SubscribableChannel. When this Gateway is receiving messages from a PollableChannel, you must either provide a global default Poller or provide a Poller sub-element to the Job Launching Gateway: <batch-int:job-launching-gateway <int:poller </batch-int:job-launching-gateway> As Spring Batch jobs can run for long periods of time, providing progress information is often critical. For example, stakeholders may want to be notified if some or all parts of a Batch Job fail. Spring Batch provides support for this information being gathered through: Active polling or Event-driven, using listeners. When starting a Spring Batch job asynchronously, e.g. by using the Job Launching Gateway, a JobExecution instance is returned. Thus, JobExecution.getJobId() can be used to continuously poll for status updates by retrieving updated instances of the JobExecution from the JobRepository using the JobExplorer. However, this is considered sub-optimal and an event-driven approach should be preferred. Therefore, Spring Batch provides listeners such as: StepListener ChunkListener JobExecutionListener In the following example, a Spring Batch job was configured with a StepExecutionListener. Thus, Spring Integration will receive and process any before-step or after-step events. For example, the received StepExecution can be inspected using a Router.
Based on the results of that inspection, various things can occur, for example routing a message to a Mail Outbound Channel Adapter, so that an email notification can be sent out based on some condition. Below is an example of how a listener is configured to send a message to a Gateway for StepExecution events and log its output to a logging-channel-adapter: First create the notifications integration beans: <int:channel <int:gateway <int:logging-channel-adapter Then modify your job to add a step level listener: <job id="importPayments"> <step id="step1"> <tasklet ../> <chunk ../> <listeners> <listener ref="notificationExecutionsListener"/> </listeners> </tasklet> ... </step> </job> Asynchronous Processors help you to scale the processing of items. In the asynchronous processor use-case, an AsyncItemProcessor serves as a dispatcher, executing the ItemProcessor's logic for an item on a new thread. The Future is passed to the AsyncItemWriter to be written once the processor completes. Therefore, you can increase performance by using asynchronous item processing, basically allowing you to implement fork-join scenarios. The AsyncItemWriter will gather the results and write back the chunk as soon as all the results become available. Configuration of both the AsyncItemProcessor and AsyncItemWriter is simple; first the AsyncItemProcessor: <bean id="processor" class="org.springframework.batch.integration.async.AsyncItemProcessor"> <property name="delegate"> <bean class="your.ItemProcessor"/> </property> <property name="taskExecutor"> <bean class="org.springframework.core.task.SimpleAsyncTaskExecutor"/> </property> </bean> The property " delegate" is actually a reference to your ItemProcessor bean and the " taskExecutor" property is a reference to the TaskExecutor of your choice. Then we configure the AsyncItemWriter: <bean id="itemWriter" class="org.springframework.batch.integration.async.AsyncItemWriter"> <property name="delegate"> <bean id="itemWriter" class="your.ItemWriter"/> </property> </bean> Again, the property " delegate" is actually a reference to your ItemWriter bean. The integration approaches discussed so far suggest use-cases where Spring Integration wraps Spring Batch like an outer shell. However, Spring Batch can also use Spring Integration internally. Using this approach, Spring Batch users can delegate the processing of items or even chunks to outside processes. This allows you to offload complex processing. Spring Batch Integration provides dedicated support for: Remote Chunking Remote Partitioning Taking things one step further, one can also externalize the chunk processing using the ChunkMessageChannelItemWriter, which is provided by Spring Batch Integration and which will send items out and collect the result. Once sent, Spring Batch will continue the process of reading and grouping items, without waiting for the results. Rather, it is the responsibility of the ChunkMessageChannelItemWriter to gather the results and integrate them back into the Spring Batch process. Using Spring Integration you have full control over the concurrency of your processes, for instance by using a QueueChannel instead of a DirectChannel. Furthermore, by relying on Spring Integration's rich collection of Channel Adapters (e.g. JMS or AMQP), you can distribute chunks of a Batch job to external systems for processing.
A simple job with a step to be remotely chunked would have a configuration similar to the following: <job id="personJob"> <step id="step1"> <tasklet> <chunk reader="itemReader" writer="itemWriter" commit- </tasklet> ... </step> </job> The ItemReader reference would point to the bean you would like to use for reading data on the master. The ItemWriter reference points to a special ItemWriter " ChunkMessageChannelItemWriter" as described above. The processor (if any) is left off the master configuration as it is configured on the slave. The following configuration provides a basic master setup. It's advised to check any additional component properties such as throttle limits and so on when implementing your use case. <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="tcp://localhost:61616"/> </bean> <int-jms:outbound-channel-adapter <bean id="messagingTemplate" class="org.springframework.integration.core.MessagingTemplate"> <property name="defaultChannel" ref="requests"/> <property name="receiveTimeout" value="2000"/> </bean> <bean id="itemWriter" class="org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter" scope="step"> <property name="messagingOperations" ref="messagingTemplate"/> <property name="replyChannel" ref="replies"/> </bean> <bean id="chunkHandler" class="org.springframework.batch.integration.chunk.RemoteChunkHandlerFactoryBean"> <property name="chunkWriter" ref="itemWriter"/> <property name="step" ref="step1"/> </bean> <int:channel <int:queue/> </int:channel> <int-jms:message-driven-channel-adapter This configuration provides us with a number of beans. We configure our messaging middleware using ActiveMQ and inbound/outbound JMS adapters provided by Spring Integration. As shown, our itemWriter bean which is referenced by our job step utilizes the ChunkMessageChannelItemWriter for writing chunks over the configured middleware. Now lets move on to the slave configuration: <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="tcp://localhost:61616"/> </bean> <int:channel <int:channel <int-jms:message-driven-channel-adapter <int-jms:outbound-channel-adapter </int-jms:outbound-channel-adapter> <int:service-activator <bean id="chunkProcessorChunkHandler" class="org.springframework.batch.integration.chunk.ChunkProcessorChunkHandler"> <property name="chunkProcessor"> <bean class="org.springframework.batch.core.step.item.SimpleChunkProcessor"> <property name="itemWriter"> <bean class="io.spring.sbi.PersonItemWriter"/> </property> <property name="itemProcessor"> <bean class="io.spring.sbi.PersonItemProcessor"/> </property> </bean> </property> </bean> Most of these configuration items should look familiar from the master configuration. Slaves do not need access to things like the Spring Batch JobRepository nor access to the actual job configuration file. The main bean of interest is the " chunkProcessorChunkHandler". The chunkProcessor property of ChunkProcessorChunkHandler takes a configured SimpleChunkProcessor which is where you would provide a reference to your ItemWriter and optionally your ItemProcessor that will run on the slave when it receives chunks from the master. For more information, please also consult the Spring Batch manual, specifically the chapter on Remote Chunking. Remote Partitioning, on the other hand, is useful when the problem is not the processing of items, but the associated I/O represents the bottleneck. 
Using Remote Partitioning, work can be farmed out to slaves that execute complete Spring Batch steps. Thus, each slave has its own ItemReader, ItemProcessor and ItemWriter. For this purpose, Spring Batch Integration provides the MessageChannelPartitionHandler. This implementation of the PartitionHandler interface uses MessageChannel instances to send instructions to remote workers and receive their responses. This provides a nice abstraction from the transports (E.g. JMS or AMQP) being used to communicate with the remote workers. The reference manual section Remote Partitioning provides an overview of the concepts and components needed to configure Remote Partitioning and shows an example of using the default TaskExecutorPartitionHandler to partition in separate local threads of execution. For Remote Partitioning to multiple JVM's, two additional components are required: Remoting fabric or grid environment A PartitionHandler implementation that supports the desired remoting fabric or grid environment Similar to Remote Chunking JMS can be used as the "remoting fabric" and the PartitionHandler implementation to be used as described above is the MessageChannelPartitionHandler. The example shown below assumes an existing partitioned job and focuses on the MessageChannelPartitionHandler and JMS configuration: <bean id="partitionHandler" class="org.springframework.batch.integration.partition.MessageChannelPartitionHandler"> <property name="stepName" value="step1"/> <property name="gridSize" value="3"/> <property name="replyChannel" ref="outbound-replies"/> <property name="messagingOperations"> <bean class="org.springframework.integration.core.MessagingTemplate"> <property name="defaultChannel" ref="outbound-requests"/> <property name="receiveTimeout" value="100000"/> </bean> </property> </bean> <int:channel <int-jms:outbound-channel-adapter <int:channel <int-jms:message-driven-channel-adapter <bean id="stepExecutionRequestHandler" class="org.springframework.batch.integration.partition.StepExecutionRequestHandler"> <property name="jobExplorer" ref="jobExplorer"/> <property name="stepLocator" ref="stepLocator"/> </bean> <int:service-activator <int:channel <int-jms:outbound-channel-adapter <int:channel <int-jms:message-driven-channel-adapter <int:aggregator <int:channel <int:queue/> </int:channel> <bean id="stepLocator" class="org.springframework.batch.integration.partition.BeanFactoryStepLocator" /> Also ensure the partition handler attribute maps to the partitionHandler bean: <job id="personJob"> <step id="step1.master"> <partition partitioner="partitioner" handler="partitionHandler"/> ... </step> </job>
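As a rough illustration of the status-polling approach mentioned earlier in this section (this is only a sketch, not taken from the Spring Batch reference: it assumes a jobExplorer bean is available and that jobExecution was returned by an asynchronous launch; interrupt and error handling are omitted):

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.explore.JobExplorer;

// Query the JobRepository through the JobExplorer until the
// asynchronously launched execution reaches a final status.
Long executionId = jobExecution.getId();
JobExecution updated = jobExplorer.getJobExecution(executionId);
while (updated.isRunning()) {
    Thread.sleep(1000); // simple fixed-interval poll
    updated = jobExplorer.getJobExecution(executionId);
}
if (updated.getStatus() == BatchStatus.COMPLETED) {
    // react to successful completion, e.g. send a notification
}

As noted above, an event-driven listener is usually preferable to this kind of polling loop.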
https://docs.spring.io/spring-batch/trunk/reference/html/springBatchIntegration.html
2017-10-17T04:17:53
CC-MAIN-2017-43
1508187820700.4
[]
docs.spring.io
Looking for pre-made Drupal, WordPress and Backdrop apps? If you aren’t interested in making your own apps, check out the documentation on… - Our Pantheon plugin - Allows you to pull Drupal, WordPress and Backdrop sites from Pantheon - Our PHP plugin - Allows you to spin up brand-new Drupal, WordPress, and Backdrop sites. Getting Started Now that you’ve successfully installed Kalabox you can start creating your own apps. Kalabox apps are completely isolated development environments. At a high level they contain the following things: - Metadata about the containers and services your application needs to run. - Metadata about the tooling your application needs for development. - High level configuration such as name, service exposure, file sharing, etc. - Your application’s codebase. Apps are mutually exclusive This architecture means that everything you need to run and develop your app is contained within the app itself. That means you can blow away app1 without it having any impact on the containers and tools you are using on app2. Kalabox Apps Kalabox apps can be quite simple or massively complex. For example, a Kalabox app can be a static HTML site, or can mimic and integrate with hosting providers like Pantheon. A Kalabox app at it’s smallest requires only two files: - A kalabox.ymlfile that contains the high level configuration for your app. - A kalabox-compose.ymlfile that contains the services your app needs to run. Kalabox uses Docker Compose The kalabox-compose file is simply a Docker Compose file. Let’s look at a few examples. Example 1: Static HTML site You can find the code for this example over here. Let’s clone the repo and then start up the app. git clone && cd kalabox-app-examples/html1 kbox start Now that you’ve started up the app you should have… - A static HTML site running the latest nginxand accessible at - A webroot with the default nginxindex.html inside your app in the codedirectory. File sharing Everything in your apps code directory should be synced to the web server. Try editing index.html and refreshing your site to see all the magix. Now let examine your app’s directory structure and files. Directory structure . |-- kalabox.yml |-- kalabox-compose.yml Files kalabox.yml Let’s examine the basic config options here: name: html1.example type: example version: 0.13.0-alpha.1 pluginconfig: sharing: share: 'web:/usr/share/nginx/html' services: web: - port: 80/tcp default: true - name - Tells Kalabox the name of the app - type - Tells Kalabox the type of the app - version - Tells Kalabox the version of the app - pluginconfig - Tells Kalabox how this apps sharing and services plugins should be configured. Read more about that here. kalabox-compose.yml This is a simple docker-compose file that tells Kalabox to spin up a container called examplehtml1_web_1, which should be built using the latest official image of nginx and whose port 80 should be exposed to the outside world so we can communicate with it. It also sets the containers hostname based on the environmental variable $KALABOX_APP_HOSTNAME. Read more about that in the tip below. web: image: nginx:latest hostname: $KALABOX_APP_HOSTNAME ports: - "80" PRO TIP: Level up your kalabox-compose.yml You can use variables found in kbox env when constructing your kalabox-compose.yml such as $KALABOX_APP_HOSTNAME. This can give you a lot of power and flexibility when crafting your app.
http://docs.kalabox.io/en/v2.1/users/started/
2017-10-17T04:07:55
CC-MAIN-2017-43
1508187820700.4
[]
docs.kalabox.io
As you work in the Drawing or Camera view, some layers may be in the way or may only be used as references. You can hide these layers to make your work area clearer and less cluttered. You can show and hide layers in the Timeline view in several different ways. To show or hide all layers in the Timeline view: To show or hide individual layers in the Timeline view: When you disable a layer in the Timeline view, the corresponding column is hidden in the Xsheet view. To disable all layers but the selected one: In the Timeline view, you can hide or show certain types of layers such as Group and Effect. To show and hide layer types in the Timeline view: The Show Manager dialog box opens. In the Xsheet view, you have the choice of hiding or showing certain types of columns such as Annotation and Functions. To show and hide column types in the Xsheet view: The Column Types dialog box opens.
https://docs.toonboom.com/help/harmony-10.3/Content/HAR/Stage/007_Timing/053_H3_Showing_and_Hiding_Layer.html
2017-10-17T03:55:37
CC-MAIN-2017-43
1508187820700.4
[]
docs.toonboom.com
Get metrics from your Windows applications/servers with Windows Management Instrumentation (WMI) in real time to If you are only collecting standard metrics from Microsoft Windows and other packaged applications, there are no installation steps. If you need to define new metrics to collect from your application, then you have a few options: To learn more about using System.Diagnostics, refer to the MSDN documentation here. After adding your metric you should be able to find it in WMI. To browse the WMI namespaces you may find this tool useful: WMI Explorer. You can find the same information with Powershell here. If you assign the new metric a category of My_New_Metric, the WMI path will be \\<ComputerName>\ROOT\CIMV2:Win32_PerfFormattedData_My_New_Metric If the metric isn't showing up in WMI, try running winmgmt /resyncperf to force the computer to reregister the performance libraries with WMI. Edit the Wmi Check configuration. init_config: instances: - class: Win32_OperatingSystem metrics: - [NumberOfProcesses, system.proc.count, gauge] - [NumberOfUsers, system.users.count, gauge] - class: Win32_PerfFormattedData_PerfProc_Process metrics: - [ThreadCount, proc.threads.count, gauge] - [VirtualBytes, proc.mem.virtual, gauge] - [PercentProcessorTime, proc.cpu_pct, gauge] tag_by: Name - class: Win32_PerfFormattedData_PerfProc_Process metrics: - [IOReadBytesPerSec, proc.io.bytes_read, gauge] tag_by: Name tag_queries: - [IDProcess, Win32_Process, Handle, CommandLine] The metrics definitions include three components: This feature is available starting with version 5.3 of the agent Each WMI query has 2 required options, class and metrics, and six optional options: host, namespace, filters, provider, tag_by, constant_tags and tag_queries. class is the name of the WMI class, for example Win32_OperatingSystem or Win32_PerfFormattedData_PerfProc_Process. You can find many of the standard class names on the MSDN docs. The Win32_FormattedData_* classes provide many useful performance counters by default. metrics is a list of metrics you want to capture, with each item in the list being a set of [WMI property name, metric name, metric type]. The property name is something like NumberOfUsers or ThreadCount. The standard properties are also available on the MSDN docs for each class. The metric name is the name you want to show up in Stackstate. The metric type is from the standard choices for all agent checks, such as gauge, rate, histogram or counter. host is the optional target of the WMI query, localhost is assumed by default. If you set this option, make sure that Remote Management is enabled on the target host; see here for more information. namespace is the optional WMI namespace to connect to (defaults to cimv2). filters is a list of filters on the WMI query you may want. For example, for a process-based WMI class you may want metrics for only certain processes running on your machine, so you could add a filter for each process name. You can also use the '%' character as a wildcard. provider is the optional WMI provider (defaults to 32 on Stackstate Agent 32-bit or 64). It is used to request WMI data from the non-default provider. Available options are: 32 or 64. See MSDN for more information. tag_by optionally lets you tag each metric with a property from the WMI class you're using. This is only useful when you will have multiple values for your WMI query. The examples below show how you can tag your process metrics with the process name (giving a tag of "name:app_name"). 
constant_tags optionally lets you tag each metric with a set of fixed values. tag_queries optionally lets you specify a list of queries, to tag metrics with a target class property. Each item in the list is a set of [link source property, target class, link target class property, target property] where: 'link source property' contains the link value 'target class' is the class to link to 'link target class property' is the target class property to link to 'target property' contains the value to tag with It translates to a WMI query: SELECT 'target property' FROM 'target class' WHERE 'link target class property' = 'link source property' To validate your installation and configuration, click the Agent Status menu from the Logs and Status button. The output should contain a section similar to the following:
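To illustrate how the optional keys described above combine, here is a hypothetical instance mixing filters, tag_by and constant_tags (class, metric and tag names are examples only, not a recommended production configuration):

instances:
  - class: Win32_PerfFormattedData_PerfProc_Process
    metrics:
      - [PercentProcessorTime, proc.cpu_pct, gauge]
    filters:
      # only collect processes whose name starts with "myapp"
      - Name: myapp%
    tag_by: Name
    constant_tags:
      - role:app_server
      - env:staging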
http://docs.stackstate.com/integrations/wmi_check/
2017-10-17T03:52:25
CC-MAIN-2017-43
1508187820700.4
[]
docs.stackstate.com
Applies To: Windows Server 2012 So that the organizational partners in your Active Directory Federation Services (AD FS) deployment can collaborate successfully, you must first make sure that your corporate network infrastructure is configured to support AD FS requirements for accounts, name resolution, and certificates. AD FS has the following types of requirements: Tip You can find additional AD FS resource links at the AD FS Content Map page on the Microsoft TechNet Wiki. This page is managed by members of the AD FS Community and is monitored on a regular basis by the AD FS Product Team. Hardware requirements The following minimum and recommended hardware requirements apply to the federation server and federation server proxy computers. Software requirements AD FS relies on server functionality that is built into the Windows Server® 2012 operating system. Note The Federation Service and Federation Service Proxy role services cannot coexist on the same computer. Certificate requirements Certificates play the most critical role in securing communications between federation servers, federation server proxies, claims-aware applications, and Web clients. The requirements for certificates vary, depending on whether you are setting up a federation server or federation server proxy computer, as described in this section. Federation server certificates Federation servers require the certificates in the following table. Caution Certificates that are used for token-signing and token-decrypting are critical to the stability of the Federation Service. Because a loss or unplanned removal of any certificates that are configured for this purpose can disrupt service, you should back up any certificates that are configured for this purpose. For more information about the certificates that federation servers use, see Certificate Requirements for Federation Servers. Federation server proxy certificates Federation server proxies require the certificates in the following table. For more information about the certificates that federation server proxies use, see Certificate Requirements for Federation Server Proxies. Browser requirements. Note AD FS supports both the 32bit and 64bit versions of all the browsers showing in the above table. Cookies AD FS creates session-based and persistent cookies that must be stored on client computers to provide sign-in, sign-out, single sign-on (SSO), and other functionality. Therefore, the client browser must be configured to accept cookies. Cookies that are used for authentication are always Secure Hypertext Transfer Protocol (HTTPS) session cookies that are written for the originating server. If the client browser is not configured to allow these cookies, AD FS cannot function correctly. Persistent cookies are used to preserve user selection of the claims provider. You can disable them by using a configuration setting in the configuration file for the AD FS sign-in pages. Support for TLS/SSL is required for security reasons. Network requirements Configuring the following network services appropriately is critical for successful deployment of AD FS in your organization. TCP/IP network connectivity For AD FS to function, TCP/IP network connectivity must exist between the client; a domain controller; and the computers that host the Federation Service, the Federation Service Proxy (when it is used), and the AD FS Web Agent. DNS. 
Attribute store requirements AD FS requires at least one attribute store to be used for authenticating users and extracting security claims for those users. For a list of attribute stores that AD FS supports, see The Role of Attribute Stores in the AD FS Design Guide. Note AD FS automatically creates an Active Directory attribute store, by default. Attribute store requirements depend on whether your organization is acting as the account partner (hosting the federated users) or the resource partner (hosting the federated application). AD DS. Important Because AD FS requires the installation of Internet Information Services (IIS), we recommend that you not install the AD FS software on a domain controller in a production environment for security purposes. However, this configuration is supported by Microsoft Customer Service Support. Schema requirements AD FS does not require schema changes or functional-level modifications to AD DS. Functional-level requirements Most AD FS features do not require AD DS functional-level modifications to operate successfully. However, Windows Server 2008 domain functional level or higher is required for client certificate authentication to operate successfully if the certificate is explicitly mapped to a user's account in AD DS. Service account requirements. LDAP When you work with other Lightweight Directory Access Protocol (LDAP)-based attribute stores, you must connect to an LDAP server that supports Windows Integrated authentication. The LDAP connection string must also be written in the format of an LDAP URL, as described in RFC 2255. SQL Server For AD FS to operate successfully, computers that host the Structured Query Language (SQL) Server attribute store must be running either Microsoft SQL Server 2005 or SQL Server 2008. When you work with SQL-based attribute stores, you also must configure a connection string. Custom attribute stores You can develop custom attribute stores to enable advanced scenarios. The policy language that is built into AD FS can reference custom attribute stores so that any of the following scenarios can be enhanced: Creating claims for a locally authenticated user Supplementing claims for an externally authenticated user Authorizing a user to obtain a token Authorizing a service to obtain a token on behavior of a user Federation servers can communicate with and protect federation applications, such as claims-aware applications. Authentication requirements. Smart card the user account in AD DS by either of the following methods: The certificate subject name corresponds to the LDAP distinguished name of a user account in AD DS. The certificate subject altname extension has the user principal name (UPN) of a user account in AD DS. To support certain authentication strength requirements in some scenarios, it is also possible to configure AD FS to create a claim that indicates how the user was authenticated. A relying party can then use this claim to make an authorization decision. See Also AD FS Design Guide in Windows Server 2012
https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/design/appendix-a--reviewing-ad-fs-requirements
2017-10-17T05:01:40
CC-MAIN-2017-43
1508187820700.4
[]
docs.microsoft.com
Using Security Groups Each Amazon EC2 instance has one or more associated security groups that govern the instance's network traffic, much like a firewall. A security group has one or more rules, each of which specifies a particular category of allowed traffic. A rule specifies the following: The type of allowed traffic, such as SSH or HTTP The traffic's protocol, such as TCP or UDP The IP address range that the traffic can originate from The traffic's allowed port range Security groups have two types of rules: Inbound rules govern inbound network traffic. For example, application server instances commonly have an inbound rule that allows inbound HTTP traffic from any IP address to port 80, and another inbound rule that allows inbound SSH traffic to port 22 from specified set of IP addresses. Outbound rules govern outbound network traffic. A common practice is to use the default setting, which allows any outbound traffic. For more information about security groups, see Amazon EC2 Security Groups. The first time you create a stack in a region, AWS OpsWorks Stacks creates a built-in security group for each layer with an appropriate set of rules. All of the groups have default outbound rules, which allow all outbound traffic. In general, the inbound rules allow the following: Inbound TCP, UDP, and ICMP traffic from the appropriate AWS OpsWorks Stacks layers Inbound TCP traffic on port 22 (SSH login) Warning The default security group configuration opens SSH (port 22) to any network location (0.0.0.0/0.) This allows all IP addresses to access your instance by using SSH. For production environments, you must use a configuration that only allows SSH access from a specific IP address or range of addresses. Either update the default security groups immediately after they are created, or use custom security groups instead. For web server layers, all inbound TCP, and UDP traffic to ports 80 (HTTP) and 443 (HTTPS) Note The built-in AWS-OpsWorks-RDP-Server security group is assigned to all Windows instances to allow RDP access. However, by default, it does not have any rules. If you are running a Windows stack and want to use RDP to access instances, you must add an inbound rule that allows RDP access. For more information, see Logging In with RDP. To see the details for each group, go to the Amazon EC2 console, select Security Groups in the navigation pane, and select the appropriate layer's security group. For example, AWS-OpsWorks-Default-Server is the default built-in security group for all stacks, and AWS-OpsWorks-WebApp is the default built-in security group for the Chef 12 sample stack. Note If you accidentally delete an AWS OpsWorks Stacks security group, the preferred way to recreate it. If you want to recreate the security group manually, it must be an exact duplicate of the original, including the group name's capitalization. Additionally, AWS OpsWorks Stacks will attempt to recreate all built-in security groups if any of the following occur: You make any changes to the stack's settings page in the AWS OpsWorks Stacks console. You start one of the stack's instances. You create a new stack. You can use either of the following approaches for specifying security groups. You use the Use OpsWorks security groups setting to specify your preference when you create a stack. Yes (default setting) – AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer. 
You can fine-tune a layer's built-in security group by adding a custom security group with your preferred settings. However, when Amazon EC2 evaluates multiple security groups, it uses the least restrictive rules, so you cannot use this approach to specify more restrictive rules than the built-in group. No – AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate security groups and associate at least one with each layer that you create. Use this approach to specify more restrictive rules than the built-in groups. Note that you can still manually associate a built-in security group with a layer if you prefer; custom security groups are required only for those layers that need custom settings. Important If you use the built-in security groups, you cannot create more restrictive rules by manually modifying the group's settings. Each time you create a stack, AWS OpsWorks Stacks overwrites the built-in security groups' configurations,.
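As a sketch of the custom-security-group approach recommended above (the group name and CIDR below are placeholders; for a VPC you would pass --vpc-id and reference the group by ID instead of by name), SSH access could be restricted with the AWS CLI along these lines:

# create a custom group that only allows SSH from a trusted address range
aws ec2 create-security-group \
    --group-name MyRestrictedSSH \
    --description "SSH access from the office network only"

aws ec2 authorize-security-group-ingress \
    --group-name MyRestrictedSSH \
    --protocol tcp --port 22 \
    --cidr 203.0.113.0/24

The resulting group can then be associated with the relevant layers, in addition to or instead of the built-in groups, depending on the Use OpsWorks security groups setting.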
http://docs.aws.amazon.com/opsworks/latest/userguide/best-practices-groups.html
2017-10-17T04:24:27
CC-MAIN-2017-43
1508187820700.4
[]
docs.aws.amazon.com
Messages. To create a new Message, click the Create Message button in the Messages menu. Enter a Message Title and Message Summary in the pop-up modal, and click Create Message to create the Message. Click the Publish Date field to display a calendar and select the date and time (all times are listed in UTC) for when the Message should be published. You cannot currently set a time that is in the past. If you click the Publish ASAP button, the current time and date will be automatically entered into the Publish Date field. To revert a scheduled Message to a draft, click the Draft button. This will remove the Publish Date.
https://docs.lootlocker.io/how-to/messages/set-up-messages
2022-05-16T22:30:44
CC-MAIN-2022-21
1652662512249.16
[]
docs.lootlocker.io
#time Syntax #time(hour as number, minute as number, second as number) as time About Creates a time value from numbers representing the hour, minute, and (fractional) second. Raises an error if these conditions are not true: - 0 ≤ hour ≤ 24 - 0 ≤ minute ≤ 59 - 0 ≤ second < 60 - if hour is 24, then minute and second must be 0
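For example (illustrative values only):

#time(14, 30, 0)   // 2:30:00 PM
#time(24, 0, 0)    // allowed: hour may be 24 only when minute and second are 0
#time(9, 75, 0)    // raises an error because minute must be between 0 and 59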
https://docs.microsoft.com/en-us/powerquery-m/sharptime
2022-05-16T23:29:39
CC-MAIN-2022-21
1652662512249.16
[]
docs.microsoft.com
How VuFi Works As we wait for banks and other financial services providers to integrate digital assets, VuFi brings cryptocurrency adoption one step closer to the mainstream. We achieve this by transforming the crypto markets into a more desirable, affordable and accessible financial avenue. VuFi is designed to introduce more stability to the highly volatile world of digital assets by tracking the dollar inflation rate as the target price. Backed with collateral tokens, it provides crypto traders with a safe avenue for investing. Moreover, the VuFi stable coin is completely decentralized and works within a token-governed protocol. It does not sway according to the whims and fancies of any person or group of persons, as is the case with centralized stable coins. The VuFi stable coin is different from other stable coins such as USDT and USDC. Those coins are centralized and operated by individuals or organizations. VuFi follows Bitcoin's governance model where the development team remains anonymous to remain free from regulatory pressures and promote fair play. Thus, the protocol is designed to be token-governed. It is entirely controlled by the token holders who have voting rights. There is absolutely nothing that can happen on this platform without a vote, be it changes to the smart contract or fund allocations. In addition to its completely decentralized structure, there are a few other protocol-specific traits that make the VuFi stable coin unique. VuFi builds on the DNA of cryptocurrencies and their underlying blockchain technology. This innovation-driven project is backed by a strong technical infrastructure, which delivers key improvements, and a proven development strategy that creates real intrinsic value. VuFi's differentiators No Pre-mining VuFi is one of the few projects that does not pre-mine the coin, which means there is a fair launch. In other words, there are no perks for any group, be it the founders, contributors, employees, or initial traders. The concept of pre-mining is a pre-listing technique used by private companies. Usually, companies about to launch their Initial Public Offering give away stocks to the management, founding team, or to a class of employees either for free or at a discounted price. This concept was aped by many stable coin developers, who used the pre-mined coins to reward themselves and others in the founding team. As large pre-mines dilute the outstanding stock of the tokens, VuFi developers consider it best avoided. Flexible Redemption of Tokens Even the best stable coins offer only a limited period for the redemption of tokens, which is never a good deal for those who provide stability to the blockchain. Therefore, VuFi does not set an expiry time frame for token redemption. This could change in the future if the token owners vote for it. Inflation resistant VuFi's price target is designed to keep the holder protected from inflation and maintain token purchasing power. Smart contracts track the dollar Consumer Price Index, which measures the cost of the market basket at the current time; this makes VuFi a unique stablecoin. NFT lending protocol VuFi is a unique way of borrowing and lending NFTs.
Automated protocol The VuFi algorithm automates price stabilization; minters who provide liquidity are given the opportunity to receive rewards from either side of the price target without paying gas fees. Multi chains The ultimate goal is to be on ERC20, Polygon, Avalanche, and Solana.
https://docs.vufi.finance/whitepaper-vufi-v1/how-vufi-works
2022-05-16T22:10:23
CC-MAIN-2022-21
1652662512249.16
[]
docs.vufi.finance
Improved adaptive model accuracy Valid from Pega Version 7.1.7 Enhancements have been made in the grouping scores method that is used to create the adaptive model outcome profile (also known as the classifier). Because scores form the basis for calculating propensities, these changes make models more accurate. The enhanced grouping scores method is available with any new model generated in Pega 7.1.7 and does not require any changes to existing adaptive model rules or adaptive model components. For more information on adaptive models, see About Adaptive Model rules. In-memory Adaptive Analytics Manager process Valid from Pega Version 7.1.7 The new internal Adaptive Analytics Manager (ADM) process running in PRPC makes it possible to use adaptive analytics without the availability of the external ADM server. Although not a replacement for the external ADM server, it streamlines the design and development of applications making use of adaptive analytics. For more information on the configuration that allows your application to use the internal ADM process, see the Infrastructure Services landing page.
https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=releases_capability%3A9031&f%5B1%5D=releases_capability%3A9076&f%5B2%5D=releases_note_type%3A983&f%5B3%5D=releases_version%3A7096&f%5B4%5D=releases_version%3A7116&f%5B5%5D=releases_version%3A7136
2022-05-16T21:19:11
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
Set up adaptive response actions in Adaptive response actions allow you to gather information or take other action in response to the results of a correlation search or the details of a notable event. includes several adaptive response actions. See Included adaptive response actions with . Install and deploy add-ons Platform, see Using the alert actions manager in the Splunk Cloud Platform. Run an adaptive response action from Incident Review See Run an adaptive response action on the Incident Review dashboard. imported by . See Install and deploy add-ons App for PCI Compliance: 5.0.0
https://docs.splunk.com/Documentation/PCI/5.0.0/User/SetupAdaptiveResponse
2022-05-16T21:30:06
CC-MAIN-2022-21
1652662512249.16
[]
docs.splunk.com
UserPro is powered by a solid and powerful yet very easy API and functions that can help you do a lot of stuff just by writing little code. All available API functions will be introduced in this guide. Please read the introduction if you are new to the UserPro API. You should have at least minimal PHP knowledge to use and apply these functions in production mode. Read the introduction first before exploring the functions. The functions will be listed in alphabetical order. Introduction to UserPro API In order to use any of the API functions, you must define a global at the top of your PHP file. This global will tell UserPro that you can access all functions and features of the API. Here is what you need to add to access the API: <?php global $userp ?>
https://docs.userproplugin.com/knowledge-base/33/
2022-05-16T21:06:02
CC-MAIN-2022-21
1652662512249.16
[]
docs.userproplugin.com
Planewaves This page gives hints on how to perform numerically precise calculations with planewaves or projector-augmented waves and pseudopotentials with the ABINIT package. Introduction The numerical precision of the calculations depends on many settings, among which the definition of a basis set is likely the most important. With planewaves, there is one single parameter, ecut, that governs the completeness of the basis set. The wavefunction, density, and potentials are represented in both reciprocal space (plane waves) and real space, on a homogeneous grid of points. The transformation from reciprocal space to real space and vice-versa is made thanks to the Fast Fourier Transform (FFT) algorithm. With norm-conserving pseudopotentials, ecut is also the main parameter defining the real-space FFT grid. In PAW, the sampling for such quantities is governed by a more independent variable, pawecutdg. More precise tuning might be done by using boxcutmin and ngfft. Avoiding discontinuity issues when changing the size of the planewave basis set is made possible thanks to ecutsm. The accuracy variable enables tuning the accuracy of a calculation by setting automatically up to seventeen variables. Many more parameters govern a PAW computation than a norm-conserving pseudopotential calculation. They are described in a specific page, topic_PAW. For the settings related to wavelets, see topic_Wavelets. Related Input Variables compulsory: basic: useful: expert: - mqgrid Maximum number of Q-space GRID points for pseudopotentials - nc_xccc_gspace Norm-Conserving pseudopotentials - use XC Core-Correction in G-SPACE internal: - %mgfft Maximum of nGFFT - %mgfftdg Maximum of nGFFT for the Double Grid - %mpw Maximum number of Plane Waves - %nfft Number of FFT points - %nfftdg Number of FFT points for the Double Grid See also the first tutorial on the projector-augmented wave technique.
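As a minimal sketch of how these variables appear in an input file (the numbers below are placeholders rather than converged values; suitable cutoffs must be determined by a convergence study for each system and pseudopotential set):

ecut      20      # plane-wave kinetic energy cutoff, in Hartree
pawecutdg 40      # cutoff for the PAW double grid (PAW calculations only)
ecutsm    0.5     # smearing of the cutoff, useful when the cell size varies
boxcutmin 2.0     # minimum ratio between the FFT box and the cutoff sphere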
https://docs.abinit.org/topics/Planewaves/
2022-05-16T21:55:18
CC-MAIN-2022-21
1652662512249.16
[]
docs.abinit.org
Text Analytics Request Options Constructor Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Initializes a new instance of the TextAnalyticsRequestOptions class which allows callers to specify details about how the operation is run. For example, set model version, whether to include statistics, and more. public TextAnalyticsRequestOptions (); Public Sub New ()
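A possible usage sketch (the property names reflect the publicly documented options of this class; verify them against the current API reference before relying on them):

// configure per-request behaviour for a batch operation
var options = new TextAnalyticsRequestOptions
{
    IncludeStatistics = true,   // return document and batch statistics
    ModelVersion = "latest"     // select the model version to use
};
// the options instance is then passed to a batch method,
// e.g. client.DetectLanguageBatch(documents, options: options);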
https://docs.azure.cn/en-us/dotnet/api/azure.ai.textanalytics.textanalyticsrequestoptions.-ctor?view=azure-dotnet
2022-05-16T22:22:53
CC-MAIN-2022-21
1652662512249.16
[]
docs.azure.cn
Routing Service Bus Topic Endpoint Properties. Subscription Id Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets or sets the subscription identifier of the service bus topic endpoint. [Newtonsoft.Json.JsonProperty(PropertyName="subscriptionId")] public string SubscriptionId { get; set; } member this.SubscriptionId : string with get, set Public Property SubscriptionId As String Property Value - System.String - Attributes - Newtonsoft.Json.JsonPropertyAttribute
https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.iothub.models.routingservicebustopicendpointproperties.subscriptionid?view=azure-dotnet
2022-05-16T21:35:17
CC-MAIN-2022-21
1652662512249.16
[]
docs.azure.cn
perspective correction Automatically correct for converging lines, a form of perspective distortion. The underlying mechanism is inspired by Markus Hebel’s ShiftN program., are transformed into converging lines that meet at some vantage point within or outside of the image frame. This module is able to correct converging lines by warping the image in such a way that the lines in question become parallel to the image frame. Corrections can be applied in a vertical and horizontal direction, either separately or in combination. In order to perform automatic correction the module first analyzes the image for suitable structural features consisting of line segments. Based on these line segments a fitting procedure is initiated which determines the best values for the module’s parameters. 🔗analysing the structure of an image Click the icon to analyze the image for structural elements – darktable will automatically detect and evaluate line elements. Only lines that form a set of vertical or horizontal converging lines are used for the subsequent processing steps. The line segments are displayed as overlays on the image canvas, with the type of line identified by color: - green - Vertical converging lines - red - Vertical lines that do not converge - blue - Horizontal converging lines - yellow - Horizontal lines that do not converge - gray - Other lines that are not of interest to this module Lines marked in red or yellow are regarded as outliers and are not taken into account during the automatic fitting step. This outlier elimination involves a statistical process using random sampling which means that each time you press the “get structure” button the color pattern of the lines will look slightly different. You can manually change the status of line segments: Left-Click on a line to select it (turn the color to green or blue) and Right-click to deselect it (turn the color to red or yellow). If you keep the mouse button pressed, you can use a sweeping action to select/deselect multiple lines in a row. The size of the select/deselect brush can be changed with the mouse wheel. Hold down the Shift key and keep the left or right mouse button pressed while dragging to select or deselect all lines in the chosen rectangular area. Click on one of the “automatic fit” icons (see below) to initiate an optimization process, which finds the best suited module parameters based on the detected structure. The image and the overlaid lines are then displayed with perspective corrections applied. 🔗module controls Once the initial image analysis is complete, the following controls can be used to perform the perspective corrections. - rotation - Control the rotation of the image around its center to correct for a skewed horizon. - lens shift (horizontal) - Correct converging horizontal lines (i.e. to make the blue lines parallel). - lens shift (vertical) - Correct converging vertical lines (i.e. to make the green lines parallel). In some cases you can obtain a more natural looking image if you correct vertical distortions to an 80 ~ 90% level rather than to the maximum extent. To do this, reduce the correction slider after having performed the automatic correction. - shear - Shear the image along one of its diagonals. This is required when correcting vertical and horizontal perspective distortions simultaneously. - guides - When activated, a grid is overlaid on the image to help you judge the quality of the correction. 
- automatic cropping - When activated, this feature crops the image to remove any black areas at the edges caused by the distortion correction. You can either crop to the “largest area”, or to the largest rectangle that maintains the original aspect ratio (“original format”). In the latter case you can manually adjust the automatic cropping result by clicking in the clip region and moving it around. The size of the region is modified automatically to exclude any black areas. - lens model - This parameter controls the lens focal length, camera crop factor and aspect ratio that used by the correction algorithm. If set to “generic” a lens focal length of 28mm on a 35mm full-frame camera is assumed. If set to “specific”, the focal length and crop factor can be set manually using the sliders provided. - focal length - If the lens model is set to “specific”, set the lens focal length. The default value is taken from the image’s Exif data, and can be overridden by adjusting the slider manually. - crop factor - If the lens model is set to “specific”, set the camera crop factor. You will normally need to set this value manually. - aspect ratio - If the lens model is set to “specific”, this parameter allows for a free manual adjustment of the image’s aspect ratio. This is useful for “unsqueezing” images taken with an anamorphic lens (which changes the ratio of image height to width). - automatic fit - Click on one of the automatic fit icons to set the distortion correction sliders automatically based on the edge detection analysis. You can choose to automatically apply just the vertical corrections , just the horizontal corrections , or both together . Ctrl+click on any of the icons to apply a rotation without the lens shift. Shift+click on any of the icons to apply the lens shift without any rotation. - get structure - Click on the icon to (re-)analyze the image for suitable line segments. Shift+click to apply a contrast enhancement step before performing further analysis. Ctrl+click to apply an edge enhancement step before performing further analysis. Both variations can be used alone or in combination if the default analysis is not able to detect a sufficient number of lines. Click on the icon to discard any structural information collected during any previous structural analysis. Click on the icon to show or hide the line segments identified by any previous structural analysis. 🔗examples Here is an image with a skewed horizon and converging lines caused by directing the camera upwards: Here is the image after having corrected for vertical and horizontal perspective distortions. Note the framing adjustment made by the automatic cropping feature and the still-visible overlay of structural lines:
https://docs.darktable.org/usermanual/3.6/en/module-reference/processing-modules/perspective-correction/
2022-05-16T22:30:48
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
After a deployment to a git target, Gearset supports the ability to create a Pull Request (PR) in your git repository from within the Gearset application. From the Deployment successful page you can create a PR without leaving Gearset. By clicking the Create pull request... link you get the following options: Pull request name This will become the title of your PR, so make sure to name it something meaningful for your team! Select target branch This is the branch to merge into. Pull request description If your git provider is GitHub, GitHub Enterprise, Azure DevOps Git, GitLab, Bitbucket or Bitbucket Server you can enter a description for the PR. Draft pull request If your git provider is GitHub or GitHub Enterprise, you can create a draft PR instead of a standard PR - see GitHub's documentation for more details. Skip CI validations When selected, this will prevent any 'validate pull request' child jobs (see blog for more info on this feature) from running for that particular PR. You may want to skip them if the validations take a while to run. By selecting this setting, you don't have to wait for the validations to complete, which therefore may speed up the PR merge process. Special tip: you can also create any PR with the words [NoValidation] (with the square brackets included) in the PR name and it will have the same impact, regardless of how you create it. Include items deployed When selected, this will list the items deployed in the PR in source control. Include Jira ticket(s) links When selected, Gearset will create links to the Jira tickets in the PR comment. Which git providers are supported? GitHub GitHub Enterprise Bitbucket Bitbucket Server Azure DevOps Git GitLab GitLab (Self-managed) Aws CodeCommit
https://docs.gearset.com/en/articles/3954372-create-pull-requests-from-deployment-summary
2022-05-16T22:53:27
CC-MAIN-2022-21
1652662512249.16
[]
docs.gearset.com
GroupDocs.Viewer for .NET 22.1 Release Notes This page contains release notes for GroupDocs.Viewer for .NET 22.1 Major Features There are 11 features, improvements, and bug-fixes in this release, most notable are: - Added addional method compression support for PSD - Make readable exception for old versions of XLS file - Temp files overflow when opening a file from stream for PSD AI files Full List of Issues Covering all Changes in this Release Public API Changes GroupDocs.Viewer namespace Following obsolete members were removed: public Viewer(Func<Stream> getFileStream); public Viewer(Func<Stream> getFileStream, Func<LoadOptions> getLoadOptions); public Viewer(Func<Stream> getFileStream, ViewerSettings settings); public Viewer(Func<Stream> getFileStream, Func<LoadOptions> getLoadOptions, ViewerSettings settings);
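For callers migrating off the removed Func<Stream> overloads, the remaining constructors that accept a file path or a Stream directly can be used instead. A rough sketch (the file name and view options are illustrative, not prescriptive):

// previously: new Viewer(() => File.OpenRead("sample.docx"))
using (Viewer viewer = new Viewer(File.OpenRead("sample.docx")))
{
    viewer.View(HtmlViewOptions.ForEmbeddedResources("page_{0}.html"));
}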
https://docs.groupdocs.com/viewer/net/groupdocs-viewer-for-net-22-1-release-notes/
2022-05-16T20:57:56
CC-MAIN-2022-21
1652662512249.16
[]
docs.groupdocs.com
Escrow contracts. For more information please refer to the "Escrow" section of the whitepaper. Proposal Each escrow contract starts with the buyer proposal. Once it's sent the deposit amount will be locked for a Time until response period. If during the period seller accepts the terms, Escrow contract will be activated. To initiate the process navigate to wallet Contracts tab and choose New Purchase. Proposal details are the following. - Description - title or description for contract subject - Seller - wallet address of merchant or seller - Amount - payment amount for goods or services - Your deposit - sum of collateral and payment amount - Seller deposit - collateral from seller required by buyer - Comment - additional information like order ID, delivery address, etc. - Fee - transaction fee amount - Time until response - proposal expiration time - Payment ID - transaction payment identifier provided by seller Confirmation When the seller accepts the proposal a special multi signature transaction will be sent to the blockchain. Then after 10 confirmations a new contract will be started. The seller can now fulfil contract terms like shipping the item to the buyer. The buyers contract window will get three options to continue with: Cancel and return deposits, Terminate and burn deposits and Complete and release deposits. Cancel and return deposits The buyer can send a cancellation offer to return both deposits and close the contract. The seller can accept or ignore this offer within a given response time. This option is useful when deal is mutually canceled. Terminate and burn deposits When parties cannot find mutual agreement on any occasions one can decide to burn the deposits completely and close the contract. In that case deposits will not be returned ever. Complete and release deposits If buyer is satisfied with the delivery or a provided service the contract can be closed. Releasing deposits will return both parties collaterals. Updated over 2 years ago
https://docs.zano.org/docs/escrow
2022-05-16T20:51:31
CC-MAIN-2022-21
1652662512249.16
[]
docs.zano.org
GPU support in Abinit IMPORTANT: GPU support is currently highly EXPERIMENTAL and should be used by experienced developers only. If you encounter any problem, please report it to Yann Pouillon before doing anything else. GPU-related parameters GPU support is activated by the --enable-gpu option of configure. Another option of importance is the --with-gpu-flavor one, which selects the kind of GPU support that will be activated. A convenience option, codename --with-gpu-prefix, is also provided, in order to set automatically all relevant parameters whenever possible. A few other options are available as well, mainly for fine-tuning of the build parameters and testing purposes. Full descriptions of all these options can be found in the ~abinit/doc/build/config-template.ac9 file. Do not hesitate to ask questions on the ABINIT forum. In addition, the permitted GPU-related preprocessing options are: - HAVE_GPU : generic use; - HAVE_GPU_SERIAL : serial GPU support; - HAVE_GPU_MPI : MPI-aware GPU support. Cuda support At present it is possible to ask for single- or double-precision Cuda support. The configure script will check that the Cuda libraries are properly working, but not whether double-precision is actually supported by your version of Cuda (this might be added in the future). All calls to Cuda routines should be carefully embedded within '#if defined HAVE_GPU_CUDA … #else … #endif' preprocessing blocks. When a feature does require Cuda and will not work without it, the corresponding '#else' part should display an error and cause Abinit to abort. The permitted Cuda-related preprocessing options are: - HAVE_GPU_CUDA : generic use; - HAVE_GPU_CUDA_SP : single-precision calculations; - HAVE_GPU_CUDA_DP : double-precision calculations. All high-level routines directly accessing Cuda features have to be put in ~abinit/src/52_manage_cuda/, and low-level ones in ~abinit/shared/common/src/17_gpu_toolbox/. All files belonging to nVidia must not be distributed with Abinit. Please discuss with Yann Pouillon if you need them inside the Abinit source tree during the build. In any case, all Cuda-related developments should be done in good coordination with: - Marc Torrent - Yann Pouillon Cuda version To take advantage of the multiple FFT in Cuda (FFT in batch), ABINIT has to be compiled with a Cuda version >= 3.0. It is possible to build with previous versions (> 2.1 tested) but you will have to make some changes. The Cuda implementation supports devices with capability (revision) > 1.0. Magma support The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current "Multicore+GPU" systems. It is recommended to take advantage of MAGMA when using ABINIT with Cuda. Magma is not distributed within the ABINIT package; it has to be installed beforehand. To activate MAGMA support during the build process, use --with-linalg-flavor="…+magma" at configure level. OpenCL support OpenCL support is currently under discussion. More info will come once decisions have been taken. S_GPU support The S_GPU library provides higher performance and better load balancing when each GPU of a hybrid computer is shared by several processes, e.g. MPI tasks. It will be supported in Abinit in the future, from its version 2. See for details.
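A hypothetical configure invocation combining the options mentioned above might look as follows (flavor names and installation paths differ between Abinit versions, so check ~abinit/doc/build/config-template.ac9 for the exact spellings supported by your tree):

./configure \
  --enable-gpu \
  --with-gpu-flavor="cuda-double" \
  --with-gpu-prefix="/usr/local/cuda" \
  --with-linalg-flavor="netlib+magma"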
https://docs.abinit.org/INSTALL_gpu/
2022-05-16T22:08:11
CC-MAIN-2022-21
1652662512249.16
[]
docs.abinit.org
import, rate & tag images When you want to edit some new images in darktable, the first step is to add them to the darktable library with the lighttable import module. This will create entries for the imported images in darktable’s library database so that it can keep track of the changes you make. There are three main methods for importing images: - add images to the library - You can add images to the library without copying them or moving them within your file system using the “add to library” button in the import module. When adding images, darktable will read the image’s internal metadata and any accompanying XMP sidecar file. If an image has already been added to the database, it will be ignored (though any updates to the sidecar file will be loaded). The location of each image is recorded in the library database, but darktable will not copy or move the files anywhere. - copy & import - This is the same as the above option but it also allows you to first take a copy of those images to a new location (following the file naming pattern defined in preferences > import), before loading the copied images into the library. - copy & import from camera - To import images from a camera, first connect the camera to your system with a USB cable. If your system tries to automatically mount the camera’s files, you should abort the mount operation, otherwise the camera cannot be accessed from within darktable. If you don’t see your camera listed in the import module, press the “scan for devices” button. Once your camera is detected the import module should offer the ability to copy & import images from the camera or tether your camera while shooting. As with the “copy & import” button, darktable will physically copy files imported from the camera into a specified directory following the file naming pattern defined in preferences > import. Once images have been added to the library, they will appear in the lighttable view. By default, new images will all be given a one-star rating. There are many different ways to manage a set of newly imported photos, such as giving them tags and adjusting their ratings. Please refer to the digital asset management section for more information. One example workflow might be: - Set the lighttable view to show photos with exactly a 1 star rating. - Perform a quick first-level screening of your photos. If any photos are badly out-of-focus or otherwise unwanted, reject them with the R key, or give them a 0-star rating. If a photo looks reasonable and should pass to the next phase, press 2 to give it a 2 star rating. Any photos that no longer have a 1 star rating will automatically disappear from view. Continue in this manner until you have completed the first level of assessment. - Set the lighttable view to show only photos with exactly a 2 star rating. Go through your previously-selected photos more carefully, and decide whether to promote them to a 3 star rating, or put them back down to a 1 star or rejected rating. - Spend some time performing a quick edit on your 3 star photos, to see if they are worth keeping. If you are happy with the results, you can create a tag for the photo, and promote it to a 4 or even 5 star rating. - Go through your 4 and 5 star photos, perform any final edits on them, print them out, publish on your portfolio site, etc. and bask in the copious amounts of critical acclaim you will receive! - If space is at a premium you might want to consider permanently deleting your rejected or 0-star images. 
Select these images in the lighttable and use the ’trash’ option in the selected images module. You should probably only do this on photos you are certain you will never need again.
https://docs.darktable.org/usermanual/3.6/en/overview/workflow/import-rate-tag/
2022-05-16T22:01:31
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
- Before you can import a project, you need to export the data first. See Exporting a project and its data for how you can export a project through the UI. - Imports from a newer version of GitLab are not supported. The importing GitLab version must be greater than or equal to the exporting GitLab version. - Imports fail unless the import and export GitLab instances are compatible as described in the Version history. - Exports are generated in your configured shared_path, a temporary shared directory, and are then moved to your configured uploads_directory. Every 24 hours, a specific worker deletes these export files. - Group members are exported as project members, as long as the user has maintainer or administrator access to the group where the exported project lives. - Project members with owner access are imported as maintainers. - Imported users can be mapped by their primary email on self-managed instances, if an administrative user (not an owner) does the import. Otherwise, a supplementary comment is left to mention the original author, and the MRs, notes, or issues are owned by the importer. - For project migration imports performed over GitLab.com Groups, preserving author information is possible through a professional services engagement. - If an imported project contains merge requests originating from forks, then new branches associated with such merge requests are created within the project during the import/export. Thus, the number of branches in the exported project could be bigger than in the original project. - Deploy keys allowed to push to protected branches are not exported. Therefore, you need to recreate this association by first enabling these deploy keys in your imported project and then updating your protected branches accordingly. Version history 14.0+ In GitLab 14.0, the JSON format is no longer supported for project and group exports. To allow for a transitional period, you can still import any JSON exports. The new format for imports and exports is NDJSON. The following items are exported: - Project and wiki repositories - Project uploads - Project configuration, excluding integrations - Issues with comments, merge requests with diffs and comments, labels, milestones, snippets, time tracking, and other project entities - Design Management files and data - LFS objects - Issue boards - Pipelines history - Push Rules - Awards The following items are not exported: - Build traces and artifacts - Container registry images - CI/CD variables - Webhooks - Any encrypted tokens - Merge Request Approvers (see also the import_export.yml file) Exporting a project and its data Full project export functionality is limited to project maintainers and owners. You can configure such functionality through project settings. To export a project and its data, follow the export steps in the project settings; a download link appears shortly after the export is generated. If the Internal visibility level is restricted, all imported projects are given the visibility of Private. The maximum import file size defaults to 0 (unlimited). As an administrator, you can modify the maximum import file size. To do so, use the max_import_size option in the Application settings API or the Admin Area UI (see the example below). The default was modified from 50MB to 0 in GitLab 13.8. By default, users are rate limited for import and export requests. Please note that GitLab.com may have different settings from the defaults.
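For instance, to raise the maximum import file size through the Application settings API mentioned above, a request along these lines could be used (illustrative only: the host, token, and size value in MB are placeholders, and the call requires administrator rights):

```shell
curl --request PUT \
     --header "PRIVATE-TOKEN: <your-admin-access-token>" \
     "https://gitlab.example.com/api/v4/application/settings?max_import_size=200"
```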
https://docs.gitlab.com/13.12/ee/user/project/settings/import_export.html
2022-05-16T20:53:26
CC-MAIN-2022-21
1652662512249.16
[]
docs.gitlab.com
As a historical note, in versions prior to Spring 3.1 the id attribute was typed as an xsd:ID, which constrained possible characters. As of 3.1, it is defined as an xsd:string. Note that bean id uniqueness is still enforced by the container, though no longer by XML parsers. Additional aliases for a bean can be specified in the name attribute, separated by a comma ( ,), semicolon ( ;), or white space. When you need to ask a container for an actual FactoryBean instance itself, instead of the bean it produces, preface the bean's id with the ampersand symbol ( &) when calling the getBean() method of the ApplicationContext. So for a given FactoryBean with an id of myBean, invoking getBean("myBean") on the container returns the product of the FactoryBean; whereas, invoking getBean("&myBean") returns the FactoryBean instance itself (see the example below).
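A minimal sketch of this distinction (the method and variable names are illustrative; it assumes an ApplicationContext that defines a FactoryBean under the id myBean):

```java
import org.springframework.context.ApplicationContext;

public class FactoryBeanLookup {

    static void demonstrate(ApplicationContext ctx) {
        // Returns the object that the FactoryBean produces
        Object product = ctx.getBean("myBean");

        // Returns the FactoryBean instance itself, thanks to the '&' prefix
        Object factory = ctx.getBean("&myBean");

        System.out.println("product: " + product.getClass().getName());
        System.out.println("factory: " + factory.getClass().getName());
    }
}
```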
https://docs.spring.io/spring-framework/docs/3.2.6.RELEASE/spring-framework-reference/html/beans.html
2022-05-16T23:16:40
CC-MAIN-2022-21
1652662512249.16
[]
docs.spring.io
SPM Intermezzo: Toolboxes¶ Overview¶ Although SPM comes with an extensive library for analyzing fMRI data, it doesn’t have some of the tools you will need for more advanced analyses, such as the region of interest analyses covered in the next chapter. To meet this need, programmers and researchers have created SPM extensions (also known as toolboxes) which enable the user to do specific analyses. For example, most fMRI researchers use some kind of atlas, or partitioning of the brain into distinct functional or anatomical regions. fMRI data can then be extracted from a specified partition, or region of interest, and then statistics are performed to determine whether there is a significant effect in that region. To create these anatomical regions of interest, we will download and install the WFU Pickatlas toolbox, a popular atlas and region of interest generator. To download this toolbox, click on this link, and then click the Download button - an arrow pointing downwards, just underneath the selection menu. If you do not already have an account with the NITRC (Neuroimaging Tools and Research Collaboratory, a repository for code and toolboxes), you will need to create one. Agree to the terms, and proceed with the download. When the toolbox has been downloaded, unzip it, and type the following code: movefile ~/Downloads/WFU_PickAtlas_3.0.5b/* ~/spm12/toolbox This will move all of the needed folders into the SPM12 toolbox folder, where they will be read each time you open SPM. Now open SPM, click on the Toolbox menu, and you should see the wfupickatlas toolbox as an option. Installing Marsbar¶ Marsbar is another popular SPM toolbox, used primarily for ROI analysis. To download it, click on this link and click the Download button. After the package has been downloaded, unzip it. From the Matlab terminal, navigate to the spm12 toolbox directory and create a directory called marsbar: cd ~/spm12/toolbox mkdir marsbar And then move the required files from the marsbar download into the marsbar folder: movefile ~/Downloads/marsbar-0.44/* ~/spm12/toolbox/marsbar The next time you open SPM, you should see “marsbar” as an option when you click on the Toolbox menu.
https://andysbrainbook.readthedocs.io/en/latest/SPM/SPM_Short_Course/SPM_Intermezzo_Toolboxes.html
2022-05-16T22:25:29
CC-MAIN-2022-21
1652662512249.16
[]
andysbrainbook.readthedocs.io
SchedulerStringId Enum Lists the values of localizable strings. Namespace: DevExpress.XtraScheduler.Localization Assembly: DevExpress.XtraScheduler.v19.1.Core.dll Declaration Related API Members The following properties accept/return SchedulerStringId values: Remarks This example illustrates the use of a SchedulerLocalizer class descendant to display the names of the SchedulerStringId enumeration members instead of the localized strings (see the sketch below). This approach enables you to change most strings of the Scheduler UI as your needs dictate. Note that labels on the appointment forms cannot be localized with the Localizer class. Use custom forms or satellite resources. For more information, review the Localization topic. Note: a complete sample project is available. To activate the custom localizers: DevExpress.XtraScheduler.Localization.SchedulerLocalizer.Active = new MySchedulerLocalizer(); DevExpress.XtraEditors.Controls.Localizer.Active = new MyDateNavigatorLocalizer();
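A minimal sketch of such a descendant (this is not the shipped sample project; it assumes the standard localizer override point, GetLocalizedString, and simply echoes the enum member name):

```csharp
using DevExpress.XtraScheduler.Localization;

public class MySchedulerLocalizer : SchedulerLocalizer {
    // Return the enum member name instead of the localized string, which makes it
    // easy to see which SchedulerStringId value drives each Scheduler UI caption.
    public override string GetLocalizedString(SchedulerStringId id) {
        return id.ToString();
    }
}
```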
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.Localization.SchedulerStringId?v=19.1
2022-05-16T23:23:44
CC-MAIN-2022-21
1652662512249.16
[]
docs.devexpress.com
Where to Find Information¶ Printer Installation, Maintenance & Service Manuals¶ The printer Installation, Maintenance & Service Manuals tell you how to install and maintain the printer. They also include a basic troubleshooting guide. The Printer Installation, Maintenance & Service Manuals can be found here. Applicator Manuals¶ There are separate manuals for each applicator. Most applicators have separate manuals for G1 and G2 software. The applicator manual tells you how to install, configure and maintain the applicator. In general, the manuals also contain a simple troubleshooting guide. The applicator manuals can be found here. Trello¶ The Trello Troubleshooting and Documentation board contains tips and tricks for printers and applicators. Contact the Evolabel support team for access. Finding a Part Number¶ The printer maintenance manual and the applicator manual list the most common spare parts for each device. The Price List (without prices) includes the part number (PN) for sellable spare parts. Contact Evolabel Orders if you have consulted the printer/applicator manuals and the Price List and still cannot find the part you need.
https://docs.evolabel.com/info/finding_information.html
2022-05-16T21:56:07
CC-MAIN-2022-21
1652662512249.16
[]
docs.evolabel.com
Implementing ClaimsXten The PegaClaimsCXT application is provided in a separate jar, "PegaClaimsCXT.jar", in the installation file. It needs to be installed to enable the ClaimsXten functionality. The PegaClaimsCXT application is built on the PegaClaims application. Once it is installed, the following configurations need to be set up to enable the integration. Application system settings are found under Settings -> Application Settings -> ClaimsXten in App Studio. Dynamic system settings are found under Records -> Sys Admin -> Dynamic System Settings in Dev Studio. The application version is PegaClaimsCXT 1. This is the application that is configured and certified with ClaimsXten™ version 6.2. Future versions of ClaimsXten that are certified will have an incremented application version number. ClaimsXten Web service The SCE integrates with ClaimsXten using a web service; for the purposes of testing, a simulated service can be invoked instead. If the DSS setting IsCXTServiceEnabled is set to true, the web service is used and the appropriate application settings need to be configured. If this setting is false, a simulated response is returned. The web service uses two data transforms for the integration, which can be extended as needed. - EncodeCXTRequest - encodes a Base64 request to ClaimsXten - DecodeCXTResponse - decodes a Base64 response from ClaimsXten If the simulated response is required, the CXTSimulatedResponses decision table can be configured to send a response to be handled. The SCE test claim name is used to trigger the response XML structure.
https://docs.pega.com/pega-smart-claims-engine-user-guide/86/implementing-claimsxten
2022-05-16T23:15:51
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
You can change a column's data type in one of the following ways: Change through column menus You can change the data type for individual columns through the column menus. Change through Transform Builder You can change the data type for a single column, or change multiple columns to a single data type, through the Transform Builder. You can use a transformation like the following, which changes the FirstName and Address columns to String data type. For more information, see Valid Data Type Strings. Lock Data Type You can lock a column's data type through the Transform Builder. When a column's data type is locked, the data type is no longer automatically checked and updated. Via Transform Builder - In the Search panel, enter lock column type. - From the Columns drop-down, select any one of the following options: - Multiple: Select one or more columns from the drop-down list. - Range: Specify a start column and an ending column. All columns between them, inclusive, are selected. - All: Select all columns in the dataset. - Advanced: Specify the columns using a comma-separated list. You can combine multiple and range options under Advanced. - Specify the other parameters. - To add the step to your recipe, click Add. Example - lock a column's data type This transformation locks the column's data type: Example - lock the data types for all columns This transformation locks the data types for all columns: Unlock Data Type You can unlock a column's data type by following any one of these methods: Via Transform Builder In the Transform Builder, you can select the unlock to the current type option to apply the unlock feature to one or more columns. This transformation unlocks the column's data type: Via column menus You can unlock the data type for individual columns through the column menus: - To the left of the column name, you can click the icon and select Automatically update. The selected column is unlocked.
https://docs.trifacta.com/pages/diffpages.action?originalId=187024689&pageId=135013694
2022-05-16T23:09:40
CC-MAIN-2022-21
1652662512249.16
[]
docs.trifacta.com
This manual gives you a walk-through on how to use the Molecular Dynamics Plugin. The Molecular Dynamics Plugin calculates the trajectory of a molecular system by integrating the equations of motion given by Newton's laws. The generated trajectory can be visualized as a continuous animation or as a sequence of discrete frames. Fig. 1 Trajectory of an MD run, shown as a sequence of frames The following options can be set in the Molecular Dynamics Options panel: Display: display mode. Animation: the trajectory is displayed as an animation. Frames: trajectory frames are displayed individually (see above). Force field: force field used for the calculation. Integrator: integrator type used for solving the equations of motion. Fig. 2 Molecular Dynamics Options panel
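As a generic illustration of what "integrating the equations of motion" means (this is not the plugin's implementation, just a textbook velocity-Verlet step with illustrative array shapes):

```python
import numpy as np

def velocity_verlet_step(x, v, forces, masses, dt):
    """Advance positions x and velocities v by one time step dt.

    `forces(x)` returns the force on each particle at positions x;
    `masses` is an array of per-particle masses.
    """
    a = forces(x) / masses[:, None]          # current accelerations (Newton's 2nd law)
    x_new = x + v * dt + 0.5 * a * dt**2     # update positions
    a_new = forces(x_new) / masses[:, None]  # accelerations at the new positions
    v_new = v + 0.5 * (a + a_new) * dt       # update velocities with the averaged acceleration
    return x_new, v_new
```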
https://docs.chemaxon.com/display/lts-europium/molecular-dynamics-plugin.md
2022-05-16T21:44:31
CC-MAIN-2022-21
1652662512249.16
[]
docs.chemaxon.com
Gearset can deploy Salesforce metadata and Vlocity data packs as part of a single deployment, helping you roll out a feature that makes changes to both metadata and data packs. To start with, you'll need to customize the metadata filter to include both your Salesforce metadata changes and Vlocity data packs. In this case, we're comparing Salesforce custom objects and Vlocity Data Raptors. Gearset shows the dependencies between data packs and metadata so you can select everything you need for the deployment. Once you've selected your changes, click NEXT. Gearset's problem analyzers will check for anything that could cause your deployment to fail, and suggest items to add or remove. Although you can see dependencies between metadata and data packs in the comparison grid, Gearset will not suggest missing data pack → metadata dependencies during problem analysis. Salesforce metadata changes and Vlocity data pack changes will appear in different tabs in the deployment summary. You can add a deployment friendly name and associate Jira tickets in the same way as any other deployment. You can validate the deployment to check that the Salesforce metadata changes will deploy successfully. Validating the deployment won’t check if the Vlocity data packs can be deployed successfully, because there’s no way to check without actually doing the deployment. Once you start the deployment, Gearset will deploy the Salesforce metadata first, moving on to deploy the Vlocity data packs once the metadata deployment is complete. If the metadata deployment fails, Gearset won’t attempt the Vlocity deployment. This table shows what happens for different deployment outcomes. Once the deployment completes, if the target is a Salesforce org, Gearset will start activating any data packs you’ve deployed.
https://docs.gearset.com/en/articles/5821967-deploying-salesforce-metadata-and-vlocity-data-packs-together
2022-05-16T22:52:25
CC-MAIN-2022-21
1652662512249.16
[]
docs.gearset.com
Testing Table of Contents - Introduction - Unit testing - Integration test - Integration test with Springboot Runner with Junit5 - Integration test with Springboot Runner WITHOUT Junit5 - Integration test with Standalone Runner # Introduction This section explains the different levels of testing (unit and integration tests) that can (and should) be applied to a Mongock migration, and the tools Mongock provides for it. # Unit testing Unit tests are a good starting point to ensure the correctness of a migration. With this mechanism the ChangeUnits can be validated in isolation, covering all the changeUnit's methods: Execution, RollbackExecution, BeforeExecution and RollbackBeforeExecution. Mongock doesn't provide any specific tool for this, but we illustrate how to do it in our example project. # Integration test Once you are confident that the ChangeUnits are tested in isolation, you can increase the testing robustness by adding integration tests. The intent with integration tests is to test the entire migration suite with your application context and validate the expected database results. This is a more complex level of testing, as it requires simulating the application context and involves the integration of the different components within the application. But it's probably the most important level of testing to ensure the correctness of the migration. To see an example, please see our example project. # Integration test with Springboot Runner with Junit5 Mongock provides some useful classes that make testing easier. In summary, you need to create your test class extending MongockSpringbootJUnit5IntegrationTestBase, which provides the following: - BeforeEach method (automatically called): resets Mongock to allow re-utilization (not recommended in production) and builds the runner - AfterEach method (automatically called): cleans both Mongock repositories (lock and migration) - Dependency injections: ensures the required dependencies (Mongock builder, connectionDriver, etc.) are injected - executeMongock() method: performs the Mongock migration - @TestPropertySource(properties = {"mongock.runner-type=NONE"}): prevents Mongock from injecting (and automatically executing) the Mongock runner bean. This is important to allow multiple Mongock runner executions. Please follow these steps... # 1. Import the mongock-springboot-junit5 dependency to your project Assuming you have already imported mongock-springboot into your project, you only need to add <dependency><groupId>io.mongock</groupId><artifactId>mongock-springboot-junit5</artifactId><scope>test</scope></dependency> # 2. Add the additional dependencies to your project You probably need Springboot starter test, JUnit5, Testcontainers... # 3. Database initialization Although there are multiple ways of doing this, we present what we think provides a good balance between ease and flexibility. # 4. Create the test class extending the MongockSpringbootJUnit5IntegrationTestBase This class, in addition to extending MongockSpringbootJUnit5IntegrationTestBase, should also bring in the database initialization and the application environment (an illustrative sketch is shown at the end of this page). # Integration test with Springboot Runner WITHOUT Junit5 In this case Mongock provides pretty much the same as in the JUnit5 case, with the exception of the before and after methods, which forces you to make these calls explicitly.
Based on the previous scenario, the relevant modifications are: - Import the dependency mongock-springboot-test instead of mongock-springboot-junit5 - Extend from MongockSpringbootIntegrationTestBase instead of MongockSpringbootJUnit5IntegrationTestBase - Explicitly call the methods super.mongockBeforeEach() and super.mongockAfterEach() # Integration test with Standalone Runner The standalone runner provides more control over the process, allowing you to implement integration tests without the need for additional support. You need to take the following into account: - The Mongock runner cannot be executed multiple times, so you need to build a new runner instance and execute it in every test execution. - The ConnectionDriver cannot be reused, meaning you need to create a new ConnectionDriver instance in every test execution and provide it to the Mongock builder. - If you are sharing the same database across multiple tests, make sure you clean the database if you want to start fresh in each test case.
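An illustrative sketch of the JUnit5 integration-test pattern described above (the import path of the Mongock base class, the collection name, and the use of MongoTemplate are assumptions; adapt them to the classes shipped in mongock-springboot-junit5 and to your own schema):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.mongodb.core.MongoTemplate;

// import ...MongockSpringbootJUnit5IntegrationTestBase;  // exact package taken from the mongock-springboot-junit5 artifact

@SpringBootTest
class MyMigrationIntegrationTest extends MongockSpringbootJUnit5IntegrationTestBase {

    @Autowired
    private MongoTemplate mongoTemplate;  // used here only to assert the post-migration state

    @Test
    void migrationCreatesExpectedCollection() {
        // Provided by the base class: builds and runs the Mongock runner
        executeMongock();

        // Hypothetical assertion: the migration is expected to create a "clients" collection
        assertTrue(mongoTemplate.collectionExists("clients"));
    }
}
```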
https://docs.mongock.io/v5/testing/index.html
2022-05-16T21:14:17
CC-MAIN-2022-21
1652662512249.16
[]
docs.mongock.io
The current IME composition string being typed by the user. In some languages such as Chinese, Japanese or Korean, text is input by typing multiple keys to generate one or multiple characters. These characters are visually composed on the screen as the user types. When using Unity's built-in GUI system for text input, Unity will take care of displaying the composition string as the user types. If you want to implement your own GUI, however, you need to take care of displaying the string at the current cursor position (see the sketch below). The composition string is only updated when IME compositing is used. See Input.imeCompositionMode for more info. See Also: Input.imeCompositionMode, Input.compositionCursorPos.
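A minimal sketch of displaying the composition string in a custom IMGUI (the cursor position below is illustrative; a real GUI would compute it from its own text-field layout):

```csharp
using UnityEngine;

public class ImeCompositionDisplay : MonoBehaviour
{
    void Start()
    {
        // Make sure IME composition is forwarded to the application
        Input.imeCompositionMode = IMECompositionMode.On;
    }

    void OnGUI()
    {
        // Where our (hypothetical) custom text field currently places its cursor
        Vector2 cursor = new Vector2(100f, 100f);

        // Tell the OS where the IME candidate window should appear
        Input.compositionCursorPos = cursor;

        // Draw the in-progress composition string at the cursor position
        GUI.Label(new Rect(cursor.x, cursor.y, 300f, 30f), Input.compositionString);
    }
}
```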
https://docs.unity3d.com/ru/2019.2/ScriptReference/Input-compositionString.html
2022-05-16T22:22:42
CC-MAIN-2022-21
1652662512249.16
[]
docs.unity3d.com
Medtronic Pumps¶ >>>> As of version 2.5, the Medtronic pump driver is part of AndroidAPS (master). While this is the case, the Medtronic driver should still be considered beta software. Please install it only if you are an experienced user. At the moment we are still fighting with the double bolus issue (we get 2 boluses in treatments, which throws off the IOB calculation; if you experience this bug, please enable Double Bolus Logging in the Medtronic configuration and provide your logs). This should be fixed with an upcoming release. <<<< Works only with older Medtronic pumps (details see below). Does not work with Medtronic 640G or 670G. If you started using the Medtronic driver, please add yourself to this list. This is just so that we can see which phones are good and which are not so good (or bad) for this driver. There is one column called "BT restart". This is to check if your phone supports BT enable/disable, which can be used when the pump is no longer able to connect (that happens from time to time). If you notice any other problem, please write that in the Comments column. Hardware and software requirements¶ - Phone: The Medtronic driver should work with any phone supporting BLE. IMPORTANT: While the driver works correctly on all phones, enabling/disabling Bluetooth doesn't (this is required when you lose connection to the RileyLink and the system can't recover automatically - happens from time to time). So you need to get a device with Android 6.0 - 8.1; in the worst-case scenario you can install LineageOS 15.1 (15.1 or lower is required) on your phone. We are looking into the problem with Android 9, but so far we haven't found a resolution (it seems to work on some models and not on others, and also works only sometimes on some models). - RileyLink/Gnarl: For communication with the pump you need a device that converts BT commands from the phone into RF commands that the pump understands. The device that does this is called the RileyLink (you can get it here: getrileylink.org). You need a stable version of the device, which means firmware 0.9 for older models (older versions might not work correctly) or 2.2 for newer models (there are options to upgrade available on the RL site). If you are feeling adventurous you can also try Gnarl (here), which is a sort of RileyLink clone. - Pump: The driver works only with the following models and firmware versions: - 512/712 - 515/715 - 522/722 - 523/723 (firmware 2.4A or lower) - 554/754 EU release (firmware 2.6A or lower) - 554/754 Canada release (firmware 2.7A or lower) Configuration of the pump¶ - Enable remote mode on the pump (Utilities -> Remote Options, select Yes, and on the next screen do Add ID and add a dummy ID (111111 or something)). You need at least one ID on that Remote IDs list. This option might look different on different pump models. This step is important, because when it is set, the pump will listen more often for remote communication. - Set Max Basal on your pump to your "max basal entry in your STD profile" * 4 (if you want to have 400% TBR as max). This number must be under 35 (as you can see in the pump). - Set Max Bolus on your pump (max is 25). - Set the profile to STD. This will be the only profile we will use. You can also disable. - Set the TBR type to Absolute (not Percent).
Configuration of Phone/AndroidAPS¶ - Do not pair the RileyLink with your phone. If you paired your RileyLink, then AndroidAPS won't be able to find it in the configuration. - Disable Auto-rotate on your phone (on some devices Auto-rotate restarts BT sessions, which is not something we would want). - You can configure the pump in AndroidAPS in two ways: - Use of the Wizard (on a new install) - Directly in the Config tab (cog icon on the Medtronic driver) If you do a new install you will be taken directly into the wizard. Sometimes, if your BT connection is not working fully (unable to connect to the pump), you might not be able to complete the configuration. In such a case select the virtual pump, and after the wizard is finished you can go with option 2, which will bypass pump detection. You need to set the following items: (see picture above) - Pump Serial Number: You can find it on the back side, entry SN. You need only the number; your serial is 6 digits. - Pump Type: Which pump type you have (i.e. 522). - Pump Frequency: Two versions of the Medtronic pump were made, using different frequencies (if you are not sure what frequency your pump uses, look at the FAQ): - for US & Canada, the frequency used is 916 MHz - for Worldwide, the frequency used is 868 MHz - Max Bolus on Pump (U) (in an hour): This needs to be set to the same value as on the pump. It limits how much insulin you can bolus. If you go over this, the bolus won't be set and an error will be returned. The max that can be used is 25; please set the correct value for yourself here so that you don't overdose. - Max Basal on Pump (U/h): This needs to be set to the same value as on the pump. It limits how much basal you can get in an hour. So for example, if you want to have the max TBR set to 500% and the highest of your basal patterns is 1.5 U, then you would need to set Max Basal to at least 7.5. If this setting is wrong (for example, if one of your basal patterns went over this value), the pump would return an error. - Delay before Bolus is started (s): This is the delay before the bolus is sent to the pump, so that if you change your mind you can cancel it. Canceling a bolus while it is running is not supported by the pump (if you want to stop a running bolus, you have to suspend the pump and then resume). - Medtronic Encoding: This setting determines whether the 4b6b encoding that Medtronic devices use will be done in AndroidAPS or on the RileyLink. If you have a RileyLink with 2.x firmware, the default value will be to use hardware encoding (= done by the RileyLink); if you have 0.x firmware this setting will be ignored. - Battery Type (Power View): If you want to see the battery power of your pump, you need to select the type of battery you use (currently supported: Lithium or Alkaline); this will in turn change the display to show calculated percent and volts. - RileyLink Configuration: This will find your RileyLink/GNARL device. MEDTRONIC (MDT) Tab¶ On the pump tab you can see several lines showing the pump's (and connection's) current status. - RileyLink Status: Shows the status of the RileyLink connection. The phone should be connected to the RileyLink all the time. - Pump Status: Status of the pump connection. This can have several values, but mostly we will see the sleep icon (when the pump connection is not active); when a command is being executed, we might see "Waking Up", which is AAPS trying to make a connection to your pump, or a description of any command that might be running on the pump (e.g. Get Time, Set TBR, etc.). - Battery: Shows the battery status depending on your configuration. This can be a simple icon showing if the battery is empty or full (red if the battery is getting critical, under 20%), or percent and voltage. - Last connection: Time when the last connection to the pump was successful. - Last Bolus: When the last bolus was given. - Base Basal Rate: This is the base basal rate that runs on the pump at this hour. - Temp basal: Temp basal that is running, or empty.
- Reservoir: How much insulin is in the reservoir (updated at least every hour). - Errors: Error string if there is a problem (mostly shows if there is an error in the configuration). On the lower end we have 3 buttons: - Refresh is for refreshing the state. This should be used only after the connection was not present for a long time, as this action will reset the data about the pump (retrieve history, get/set time, get profile, get battery status, etc.). - Pump History: Shows the pump history (see below). - RL Stats: Shows RL stats (see below). Pump History¶ Pump history is retrieved every 5 minutes and stored locally. We keep history only for the last 24 hours, so older entries are removed when new ones are added. This is a simple way to see the pump history (some entries from the pump might not be displayed, because they are not relevant - for example configuration of functions that are not used by AndroidAPS). RL Status (RileyLink Status)¶ The dialog has two tabs: - Settings: Shows settings about the RileyLink: Configured Address, Connected Device, Connection Status, Connection Error and RileyLink Firmware versions. Device Type is always Medtronic Pump, Model would be your model, Serial number is the configured serial number, Pump Frequency shows which frequency you use, Last Frequency is the last frequency used. - History: Shows the communication history; items with RileyLink show state changes for the RileyLink, and Medtronic items show which commands were sent to the pump. Actions¶ When the Medtronic driver is selected, 3 possible actions can be added to the Actions tab: - Wake and Tune Up - If you see that your AndroidAPS hasn't contacted your pump in a while (it should contact it every 5 minutes), you can force a Tune Up. This will try to contact your pump by searching all sub-frequencies on which the pump can be contacted. If it finds one, it will set it as your default frequency. - Reset RileyLink Config - If you reset your RileyLink/GNARL, you need to use this action so that the device can be reconfigured (frequency set, frequency type set, encoding configured). - Clear Bolus Block - When you start a bolus, we set a Bolus Block, which prevents any commands from being issued to the pump. If you suspend your pump and resume (to cancel a bolus), you can then remove that block. The option is only there when a bolus is running... Important notes¶ OpenAPS users¶ When you start using AndroidAPS, the primary controller is AndroidAPS and all commands should go through it. Sending boluses should go through AAPS and not be done on the pump. We have code in place that will detect any command done on the pump, but if you can, you should avoid it (I think we fixed all the problems with pump history and AAPS history synchronization, but small issues may still arise, especially if you use the "setup" in a way it was not intended to be used). Since I started using AndroidAPS with my pump, I haven't touched the pump, except when I have to change the reservoir, and this is the way that AndroidAPS should be used. Logging¶ Since the Medtronic driver is very new, you need to enable logging, so that we can debug and fix problems if they should arise. Click on the icon in the upper left corner, select Maintenance and then Log Settings. The options Pump, PumpComm and PumpBTComm need to be checked. RileyLink/GNARL¶ When you restart the RileyLink or GNARL, you need to either do a new TuneUp (action "Wake and Tune Up") or resend the communication parameters (action "Reset RileyLink Config"), or else communication will fail. Manual use of pump¶ You should avoid manually doing treatments on your pump.
All commands (bolus, TBR) should go through AndroidAPS, but if it happens that you do manual commands, do NOT run commands with a frequency of less than 3 minutes (so if you do 2 boluses, for whatever reason, the second should be started at least 3 minutes after the first one). Timezone changes and DST (Daylight Saving Time) or Traveling with Medtronic Pump and AndroidAPS¶ The important thing to remember is that you should never disable the loop when you are traveling (unless your CGMS can't do offline mode). AAPS will automatically detect timezone changes and will send a command to the pump to change its time when the time on the phone is changed. Now if you travel east and your TZ changes by adding hours (e.g. from GMT+0 to GMT+2), the pump history won't have a problem and you don't have to worry... but if you travel west and your TZ changes by removing hours (GMT+2 to GMT-0), then synchronization might be a little iffy. In clear text, that means that for the next x hours you will have to be careful, because your IOB might be a little weird. We are aware of this problem, and we are already looking into a possible solution (see), but for now, keep that info in mind when traveling. Can I see the power of RileyLink/GNARL?¶ No. At the moment none of these devices supports this and they probably won't even in the future. Is GNARL a full replacement for RileyLink?¶ Yes. The author of GNARL added all functions used by the Medtronic driver. All Medtronic communication is supported (at the time of writing, June 2019). GNARL can't be used for Omnipod communication. The downside of GNARL is that you have to build it yourself, and you have to have a compatible version of the hardware. Note from the author: Please note that the GNARL software is still experimental and lightly tested, and should not be considered as safe to use as a RileyLink. Where can I get RileyLink or GNARL?¶ As mentioned before, you can get the devices here: - RileyLink - You can get the device here - getrileylink.org. - GNARL - You can get info here, but the device needs to be ordered elsewhere (github.com/ecc1/gnarl). What to do if I lose connection to RileyLink and/or the pump?¶ - Run the "Wake Up and Tune" action; this will try to find the right frequency to communicate with the pump. - Disable Bluetooth, wait 10 s and enable it again. This will force reconnecting to the RileyLink. - Reset the RileyLink; after you do that, do not forget to run the "Reset RileyLink Config" action. - Try 3 and 2 together. - Reset the RileyLink and reset the phone. How to determine what frequency my pump uses¶ If you turn your pump around, in the first line on the right side you will see a special 3-letter code. The first two letters determine the frequency type and the last one determines the color. Here are the possible values for the frequency: - NA - North America (in frequency selection you need to select "US & Canada (916 MHz)") - CA - Canada (in frequency selection you need to select "US & Canada (916 MHz)") - WW - Worldwide (in frequency selection you need to select "Worldwide (868 MHz)")
https://androidaps.readthedocs.io/en/latest/CROWDIN/ko/Configuration/MedtronicPump.html
2020-01-17T16:10:15
CC-MAIN-2020-05
1579250589861.0
[array(['../../../_images/Medtronic016.png', 'MDT Settings'], dtype=object) array(['../../../_images/Medtronic026.png', 'MDT Tab'], dtype=object) array(['../../../_images/Medtronic036.png', 'Pump History Dialog'], dtype=object) array(['../../../_images/Medtronic046.png', 'RileyLink Status - Settings'], dtype=object) array(['../../../_images/Medtronic056.png', 'RileyLink Status - History'], dtype=object) array(['../../../_images/Medtronic066.png', 'Pump Model'], dtype=object)]
androidaps.readthedocs.io
Start up the BrachioGraph¶ Create a BrachioGraph instance¶ Power up the Raspberry Pi. Run: sudo pigpiod cd BrachioGraph python3 And then, using the inner_arm and outer_arm length measurements (in cm) that you noted earlier: from brachiograph import BrachioGraph bg = BrachioGraph(inner_arm=<inner_arm>, outer_arm=<outer_arm>) The system will create a BrachioGraph instance and initialise itself, adjusting the motors so that the pen will be at a nominal: - x = -inner_arm - y = outer_arm And this will correspond to: - the upper arm at -90 degrees, 1500µS pulse-width - the lower arm at 90 degrees to it, 1500µS pulse-width - the lifting motor in the pen up position, 1700µS pulse-width Check the movement¶ We must make sure that the arms move in the direction we expect. Run: bg.set_angles(angle_1=-90, angle_2=90) This shouldn't do anything; the arms should already be at those angles. Now try changing the values (one at a time) in five-degree increments, e.g.: bg.set_angles(angle_1=-95, angle_2=90) # should move the inner arm 5 degrees anti-clockwise Increasing the values should move the arms clockwise; decreasing them should move them anti-clockwise. To avoid violent movement, don't move them more than five or ten degrees at a time. The movements may be reversed, because different motors, or the same motor mounted differently, can produce a reversed movement for the same input. In this case you need to incorporate that into your BrachioGraph definition, by explicitly providing servo_1_degree_ms (default: -10) and servo_2_degree_ms (default: 10) values. For example, if the outer arm's movement were reversed, you'd need to initialise the plotter with: bg = BrachioGraph(inner_arm=<inner_arm>, outer_arm=<outer_arm>, servo_2_degree_ms=-10) Attach the arms¶ Attach the arms in the configuration shown, or as close as possible. Of course the arms may be a few degrees off the perpendicular, but don't worry about that now. Attach the horn to the lifting motor. You need the pen to be just clear of the paper in the up position. The lifting movement can cause unwanted movement of the pen, so you need to minimise that. You can experiment with: bg.pen.rpi.set_servo_pulsewidth(18, <value>) to find a good pair of up/down values. Then you can include them in your initialisation of the BrachioGraph, by supplying pw_up and pw_down (see the combined example below). Take the BrachioGraph for a drive¶ bg.drive_xy() Controls: - 0: exit - a: increase x position 1cm - s: decrease x position 1cm - A: increase x position .1cm - S: decrease x position .1cm - k: increase y position 1cm - l: decrease y position 1cm - K: increase y position .1cm - L: decrease y position .1cm Use this to discover the bounds of the box the BrachioGraph can draw. Take a note of the bounds - the box described by [<minimum x>, <minimum y>, <maximum x>, <maximum y>]. Reinitialise your plotter with these values: bg = BrachioGraph(inner_arm=<inner_arm>, outer_arm=<outer_arm>, bounds=[<minimum x>, <minimum y>, <maximum x>, <maximum y>]) Test it¶ Draw a box, using the bounds: bg.box() and a test pattern: bg.test_pattern() If the lines are reasonably straight and the box is reasonably square, try plotting a file: bg.plot_file("test_file.json")
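Putting the pieces together, an initialisation that uses everything discovered so far might look like this (all numeric values below are illustrative; substitute your own measurements, bounds and pulse-widths):

```python
from brachiograph import BrachioGraph

bg = BrachioGraph(
    inner_arm=8,              # your measured arm lengths, in cm
    outer_arm=8,
    bounds=[-8, 4, 6, 13],    # the drawing box you noted with bg.drive_xy()
    servo_2_degree_ms=-10,    # only if the outer arm's movement was reversed
    pw_up=1700,               # pen-lift pulse-widths found by experimenting
    pw_down=1300,
)

bg.box()            # draw the bounding box
bg.test_pattern()   # then the test pattern
```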
https://brachiograph.readthedocs.io/en/latest/get-started/drive.html
2020-01-17T17:36:33
CC-MAIN-2020-05
1579250589861.0
[array(['../_images/starting-position.jpg', "'Starting position'"], dtype=object) array(['../_images/lifting-mechanism.jpg', "'Pen-lifting mechanism'"], dtype=object) ]
brachiograph.readthedocs.io
Brekeke Contact Center Suite Wiki 4. Add-ons This section describes additional software, features, and peripherals that enhance the functionality of the Brekeke Contact Center Suite (CCS). To view a topic of your interest, please click on the topics listed in the left column.
https://docs.brekeke.com/ccs/add-ons-2
2020-01-17T15:23:42
CC-MAIN-2020-05
1579250589861.0
[]
docs.brekeke.com
Setting up LOGO!¶ Requirements - Siemens Edition or Ultimate Edition - Siemens LOGO! PLC (0BA7 or 0BA8) - LOGO!Soft Comfort Sample Project This tutorial gives you step-by-step instructions on how to use a Siemens LOGO! PLC to control FACTORY I/O. Setting up communication between PC and PLC¶ Connect the PLC to the network. Create a new diagram in LOGO!Soft Comfort. In the LOGO! settings dialog, change the Hardware type to match your PLC. Note that only 0BA7 and 0BA8 are supported. Now move to the Online settings tab. Press the refresh button to get a list of accessible LOGO! devices. Select the desired device and press Connect. You can now assign a different IP address to the PLC if necessary. Press OK to exit the settings. Open the Ethernet connections configuration. It can be found under the Tools menu. Fill in the IP Address, Subnet, and Gateway fields. Right-click on Ethernet Connections and press Add server connection. Double-click on the newly created connection. Edit the connection as shown in the figure: Check both Connect with an Operator Panel and Accept all connection requests. Set the Remote Properties TSAP value to 02.00. Finally, transfer the new configuration to the PLC. Select Tools > Transfer > PC -> LOGO!. Connecting FACTORY I/O to the PLC¶ In FACTORY I/O click on FILE > Driver Configuration to open the Driver Window. Select Siemens LOGO! on the driver drop-down list. Open the driver Configuration Panel by clicking on CONFIGURATION.
https://docs.factoryio.com/tutorials/siemens/setting-up-logo/
2020-01-17T16:32:57
CC-MAIN-2020-05
1579250589861.0
[]
docs.factoryio.com
Switching to 32 bits. 64-bit systems use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because small keys are allocated a full 64 bits, resulting in wasted unused bits. Advantages Switching from a 64-bit to a 32-bit machine can substantially reduce the cost of the machine used and can optimize the usage of memory. Trade Offs For the 32-bit Redis variant, any key name larger than 32 bits requires the key to span multiple bytes, thereby increasing the memory usage. When to Avoid Switching to 32 bits If your data size is expected to grow beyond 3 GB, you should avoid switching.
https://docs.redislabs.com/latest/ri/memory-optimizations/switch-to-32-bits/
2020-01-17T17:21:05
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
User Login Lockout for Security Compliance To help reduce the risk of brute force attacks on Redis Enterprise Software (RS), RS includes user login restrictions. You can customize the restrictions to align with the security policy of your organization. Every failed login is shown in the logs. User Login Lockout The parameters for the user login lockout are: - Login Lockout Threshold - The number of failed login attempts allowed before the user account is locked. (Default: 5) - Login Lockout Counter Reset - The amount of time during which failed login attempts are counted. (Default: 15 minutes) - Login Lockout Duration - The amount of time that the user account is locked after excessive failed login attempts. (Default: 30 minutes) By default, after 5 failed login attempts within 15 minutes, the user account is locked for 30 minutes. You can view the user login restrictions for your cluster with: rladmin info cluster | grep login_lockout Customizing the User Lockout Parameters You can customize the user lockout parameters with rladmin. Changing the Login Lockout Threshold You can set the login lockout threshold with the command: rladmin tune cluster login_lockout_threshold <login_lockout_threshold> If you set the lockout threshold to 0, the account is not locked out after failed login attempts, and the cluster settings show: login_lockout_threshold: disabled For example, to set the lockout threshold to 10 failed login attempts: rladmin tune cluster login_lockout_threshold 10 Changing the Login Lockout Counter Reset You can set the login lockout reset time in seconds with the command: rladmin tune cluster login_lockout_counter_reset_after <login_lockout_counter_reset_after> For example, to set the lockout reset to 1 hour: rladmin tune cluster login_lockout_counter_reset_after 3600 Changing the Login Lockout Duration You can set the login lockout duration in seconds with the command: rladmin tune cluster login_lockout_duration <login_lockout_duration> If you set the lockout duration to 0, the account must be manually unlocked by an administrator, and the cluster settings show: login_lockout_duration: admin-release For example, to set the lockout duration to 1 hour: rladmin tune cluster login_lockout_duration 3600 Unlocking Locked User Accounts Before the lockout duration ends, an administrator can change the user password in order to manually unlock the user account. To reset a user password from the CLI, run: rladmin cluster reset_password <username> You are asked to enter and confirm the new password.
https://docs.redislabs.com/latest/rs/administering/designing-production/security/login-lockout/
2020-01-17T17:25:28
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
Journey Platform (previously known as the Transact Platform). The Journey platform solves the following key architectural challenges in building omnichannel customer acquisition and onboarding systems (omnichannel here means a service delivery model that integrates different interaction points for customers, such as online, by phone or in store; onboarding covers the steps required to get a new customer integrated into a new program, which may vary business to business): The Journey platform supports all the key facets of the customer acquisition process by providing features to the front-end channels as functions, and by simplifying the omnichannel experience for the end consumer. It delivers the ability to maintain a loose coupling of the state, user data, and input actions of the consumer to the back-end system, which allows rapid deployment of customer initiatives without complex and costly changes to existing systems. The Journey platform acts as a PaaS (Platform as a Service, a category of cloud computing services that lets customers develop, run, and manage applications without the complexity of building and maintaining the underlying infrastructure), providing a service platform on which banks can deploy their web applications. Inside the platform, there are multiple services that run independently of one another to maintain the platform functionality, such as security, front-end services, management, and reporting and operational monitoring. These services can be independently scaled and adjusted to fit the needs of the customer's applications. The PaaS architecture allows rapid and simple deployment and scaling of the services needed to deliver complex user experience orchestrations. An example of a deposit account opening application built on the Journey platform is shown below: The Journey platform is most often deployed as a cloud-based system and configured as a private, secure environment for each bank. End users engage in the customer acquisition journey through the Journey platform's servers, and application data is exchanged to and from bank systems of record via API services. The Journey platform configuration in the Amazon AWS cloud provides for the highest levels of security, reliability, scalability and data integrity. Interacting with the Journey platform PaaS is made simple through the ability to make API calls from the front-end framework of choice, meaning that the complexity of building multiple platform services is reduced to a set of standard REST API calls. This allows the bank development team to dictate the terms of engagement with the platform, choosing either to leverage in-house skills to build experiences or to use our tools for building customer interaction workflows, saving time and expense compared with having to build the entire set of services from scratch. The Journey platform incorporates the following three products, also known as modules, to design, manage, and optimize the customer experience across channels without impacting back-end systems: Journey Maestro is the HTML5 design environment that allows business teams to build sophisticated and elegant web form applications (Maestro forms are fundamentally built using HTML5, which describes the structure of the form and the text displayed in it).
Journey Manager is the enterprise management system that hosts the web form applications and enables sophisticated customer interaction and 3rd party system integration between form transactions and back-end business processes. Journey Analytics captures customer behavior to drive detailed form transaction analytics on how to reduce friction and abandonment within the web form applications. It allows you to watch transaction progress and monitor completion rates, bounce rates, and submission volume for different devices and forms. You can test new ideas with A/B testing and instantly monitor the results. Then, when changes are needed, adjustments are data-driven, similar to a web content manager, with no coding changes required. This allows rapid deployment in a few clicks, with no need to wait on test and release cycles. Watch this video to learn more about the Journey platform.
https://docs.avoka.com/Platform/PlatformOverview.htm
2020-01-17T15:54:55
CC-MAIN-2020-05
1579250589861.0
[]
docs.avoka.com
This partnership simplifies sourcing SIP origination, termination, and other services for Brekeke-enabled telecom carriers. SAN MATEO, Calif., July 11, 2019 - Brekeke Software has successfully completed interoperability testing of the SIP trunking origination and termination services provided by VoIP Innovations (VI), a leading global communications provider. With this interoperability validation, users of services that run on Brekeke software will have immediate access to an abundant selection of DIDs across the USA, Canada, and more than 70 countries, toll-free numbers, and voice termination in over 200 countries worldwide. "We are excited to have VoIP Innovations join our network of technology partners. We see that VoIP Innovations' carrier services are among the highest quality in the industry, and they have everything telephony service providers need or want," said Shin Yamade, Brekeke's CEO. "Our software serves a wide range of services and industries, from SIP telecom service providers to contact centers. We are excited to add VoIP Innovations' extensive DIDs and termination services to our partners' options," added Yamade. "Brekeke has been a key provider of SIP (Session Initiation Protocol) software to the VoIP industry for many years and they have continued to innovate to enable service provider customers to operate in high-growth markets such as Contact Center as a Service and Unified Communications as a Service," said David Walsh, CEO of VoIP Innovations. "We are looking forward to expanding our cooperation with the Brekeke team to serve mutual customers around the globe." VoIP Innovations offers a suite of telecommunications services which include origination, termination, E911, Fraud Detection, Fraud Protection, and Hosted Billing. In addition to carrier wholesale services, VI offers Apidaze CPaaS capabilities for programmable voice, SMS, and WebRTC video. Apidaze makes it simple and fast for customers to build unique communications apps and services. The newest addition to the VoIP Innovations offering is the VI Showroom, a marketplace for CPaaS apps and services. The Showroom allows Channel Partners to resell white-labeled CPaaS apps and services, and Developers to monetize the communications services they build. A configuration sample for setting up a VoIP Innovations account with Brekeke PBX is available as a video. For more information, visit voipinnovations.com.
https://docs.brekeke.com/2019/07/11/brekeke-and-voip-innovations-team-up-to-enable-telephony-service-providers/
2020-01-17T15:33:51
CC-MAIN-2020-05
1579250589861.0
[array(['https://docs.brekeke.com/wp-content/uploads/2018/03/news_picture01.jpg', None], dtype=object) ]
docs.brekeke.com
Create an engagement file for the next period (roll forward) This feature is only available with CaseWare Working Papers. Learn more about Working Papers. When the period for the engagement is coming to a close, you'll want to create an engagement file for the next period. In Working Papers, the process of completing an engagement is known as a Year End Close. During this process, you can also create a new file for the next period; this is known as a Roll Forward. In a Roll Forward, the current file's closing balances are used to define the opening balances in the new file. Performing a year end close You can perform a year end close on a Working Papers file when all work for the engagement is complete. By doing this, you create a new Working Papers file and roll forward chosen balances and data to set up next year's file. To perform a year end close: Ensure that you have the Owner role or equivalent permissions for the Working Papers file. From the Cloud menu, select Working Papers. Select the Working Papers file for your engagement. In the details pane, select Open Sync Copy. In the Working Papers file, on the Ribbon, click the Engagement tab. In the Manage section, click Year End Close. In the Year End Close and Roll Forward dialog, enter a file name for next year's engagement file. Choose your options for the roll forward, and select any information to be included in next year's file. When you have made your selections, click OK. Working Papers will perform the year end close, and once the process is finished, it will automatically open the new file for next year. When you perform a year end close on a published file, you also create a placeholder for the file under the client entity. Cloud displays the placeholder, but the file cannot be opened until you publish your local copy. Publishing the new year's file to Cloud When you are ready to work on the new file, you can publish it to Cloud. To publish a Working Papers file: Open your engagement file in Working Papers. On the Ribbon, in the Cloud tab, click Publish. In the Publish to Server dialog, choose the client entity that the file belongs to. Click OK. You have successfully published your Working Papers file to Cloud. This creates a parent copy of the file on Cloud, and you can now share it with other staff members. The parent copy replaces the placeholder that was created when you first performed the year end close on last year's file.
https://docs.caseware.com/2019/webapps/30/es/Engagements/File-Preparation/Create-an-engagement-file-for-the-next-period-(roll-forward).htm
2020-01-17T15:25:51
CC-MAIN-2020-05
1579250589861.0
[array(['/documentation_files/2019/webapps/30/Content/en/Resources//CaseWare_Logos/workingpaperslogo.png', None], dtype=object) array(['/documentation_files/2019/webapps/30/Content/en/Resources//Images/roll-forward-6-630x272.png', 'The file icon is a Working Papers icon, but it is faded out, indicating that it is a placeholder.'], dtype=object) array(['/documentation_files/2019/webapps/30/Content/en/Resources//Images/publish-WP-file-rollforw2-630x272.png', "The local version of the Working Papers file has been published. Next year's file is now a parent copy on Cloud."], dtype=object) ]
docs.caseware.com
Features in Configuration Manager technical preview version 1904

Applies to: Configuration Manager (technical preview branch)

This article introduces the features that are available in the technical preview for Configuration Manager, version 1904.

Office 365 ProPlus upgrade readiness dashboard

To help you determine which devices are ready to upgrade to Office 365 ProPlus, there's a new readiness dashboard. It includes the Office 365 ProPlus upgrade readiness tile that released in Configuration Manager current branch version 1902. The following new tiles on this dashboard help you evaluate Office add-in and macro readiness:

- Add-in readiness
- Add-in support statements
- Top add-ins by count of version
- Number of devices that have macros
- Macro readiness

In the Configuration Manager console, go to the Software Library workspace, expand Office 365 Client Management, and select the Office 365 ProPlus Upgrade Readiness node. For more information on prerequisites and using this data, see Integration for Office 365 ProPlus readiness.

Configure dynamic update during feature updates

Use a new client setting to configure dynamic updates for Windows 10 feature updates. Dynamic update can install language packs, features on demand, drivers, and cumulative updates during Windows setup. This setting changes the setupconfig file used during feature update installation. (An illustrative look at that file follows the steps below.) For more information about Dynamic Update, see The benefits of Windows 10 Dynamic Update.

Try it out! Try to complete the tasks. Then send Feedback with your thoughts on the feature.

- Go to Administration > Overview > Client Settings.
- Double-click on either Default Client Settings or one of your custom client settings.
- Click on Software Updates.
- Change Enable Dynamic Update for feature updates to either Yes or No.
  - Not Configured - The default value. No changes are made to the setupconfig file.
  - Yes - Enable Dynamic Update.
  - No - Disable Dynamic Update.
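To make the effect of this client setting more concrete, here is a minimal sketch of how you might inspect the setupconfig file on a client after the policy applies. The file path is the commonly documented SetupConfig.ini location for Windows 10 feature update setup options, and the DynamicUpdate entry shown in the comment is an illustrative assumption of what the setting toggles; verify the actual path and contents in your own environment.

```powershell
# Hypothetical check of the setupconfig file that the client setting modifies.
# The path below is the commonly documented SetupConfig.ini location for
# feature updates; confirm it on your clients before relying on it.
$setupConfig = Join-Path $env:SystemDrive 'Users\Default\AppData\Local\Microsoft\Windows\WSUS\SetupConfig.ini'

if (Test-Path $setupConfig) {
    # Expect an INI section similar to the following (illustrative only):
    #   [SetupConfig]
    #   DynamicUpdate=Enable
    Get-Content -Path $setupConfig
}
else {
    Write-Output "No SetupConfig.ini found at $setupConfig"
}
```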
Community hub and GitHub

The IT Admin community has developed a wealth of knowledge over the years. Rather than reinventing Scripts and Reports from scratch, we've built a Configuration Manager Community Hub where IT admins can share with each other. By leveraging the work of others, you can save hours of effort.

Prerequisites

- A GitHub account
  - A GitHub account is only required to contribute and share content from the My Hub page.
  - If you don't wish to share, you can use contributions from others without having a GitHub account.
  - If you don't already have a GitHub account, you can create one before you join.
- The device running the Configuration Manager console used to access the hub needs the following:
  - Windows 10 build 17110 or higher
  - .NET Framework version 4.6 or higher
- To download reports, you'll need Full Administrator rights in Configuration Manager.
- To download reports, you need to turn on the option Use Configuration Manager-generated certificates for HTTP site systems at the site you're importing into. For more information, see enhanced HTTP. This prerequisite is also needed in the 1906 technical preview for updating hub objects.
  - Go to Administration > Site Configuration > Sites.
  - Select the site and choose Properties in the ribbon.
  - On the General tab, select the option to Use Configuration Manager-generated certificates for HTTP site systems.

Try it out! Try to complete the tasks. Then send Feedback with your thoughts on the feature.

Join the community hub to contribute content

- Go to the Hub node in the Community workspace.
- Click on My Hub and you'll be prompted to sign into GitHub. If you don't have an account, you'll be redirected to GitHub where you can create one.
- Once you've signed into GitHub, click the Join button to join the community hub.
- After joining, you'll see that your membership request is pending. Your account needs approval by the Configuration Manager Hub Content Curation team. Approvals are done once a day, so it may take up to one business day for your approval to be granted.
- Once you're granted access, you'll get an email from GitHub. Open the link in the email to accept the invitation.

Contribute content

Once you've accepted the invitation, you can contribute content.

- Go to Community > Hub > My Hub.
- Click Add an Item to open the contribution wizard.
- Specify the settings for the object:
  - Type: PowerShell script for Run Scripts use
  - Name: Name of your object
  - Description: The description of the object you're contributing.
- Click Next to submit the contribution.
- Once the contribution is complete, you'll see the GitHub pull request (PR) link. The link is also emailed to you. You can paste the link into a browser to view the PR. Your PR will go through the standard GitHub merge process.
- Click Close to exit the contribution wizard.
- Once the PR has been completed and merged, the new item will show up on the community hub home page for others to see. Currently, the audience is limited to other IT admins who are looking at the community hub in the technical preview builds.

Use the contributions of others

You don't need to sign into GitHub to use contributions others have made.

- Go to the Hub node in the Community workspace.

CMPivot standalone

You can now use CMPivot as a standalone app. Run it outside of the Configuration Manager console to view the real-time state of devices in your environment. This change enables you to use CMPivot on a device without first installing the console.

You can now share the power of CMPivot with other personas, such as helpdesk or security admins, who don't have the console installed on their computer. These other personas can use CMPivot to query Configuration Manager alongside the other tools that they traditionally use. By sharing this rich management data, you can work together to proactively solve business problems that cross roles.

Prerequisites

Set up the permissions needed to run CMPivot. For more information, see CMPivot.

Try it out! Try to complete the tasks. Then send Feedback with your thoughts on the feature.

You'll find the CMPivot app in the following path: <site install path>\tools\CMPivot\CMPivot.exe. You can run it from that path, or copy the entire CMPivot folder to another location.

Run CMPivot from the command line using the following parameters:

- -sms:Connection="<namespace>" (required): The connection path to the SMS Provider for the site. The format of the namespace is \\<ProviderServerFullName>\root\sms\site_<siteCode>. For example, \\prov01\root\sms\site_ABC.
- -sms:CollectionID="<CollectionID>" (required): The ID of the collection that the tool uses for queries. For example, ABC00014. To change the collection, close the tool, and restart it with a new collection ID.

The following command line is a complete example:

CMPivot.exe -SMS:Connection="\\prov01\root\sms\site_ABC" -SMS:CollectionID="ABC00014"

For more information on the benefits and use of CMPivot, see the CMPivot documentation.
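For teams that hand CMPivot standalone to helpdesk or security staff, it can help to wrap the documented command line in a small launcher script. The sketch below uses only the two parameters described above; the server name, site code, collection ID, and install path are placeholder assumptions you would replace with your own values.

```powershell
# Minimal launcher sketch for CMPivot standalone (all values are placeholders).
$providerServer = 'prov01.contoso.local'          # assumption: your SMS Provider server
$siteCode       = 'ABC'                           # assumption: your site code
$collectionId   = 'ABC00014'                      # assumption: the collection to query
$cmPivotPath    = 'C:\Tools\CMPivot\CMPivot.exe'  # copied from <site install path>\tools\CMPivot

# Build the SMS Provider namespace in the documented format.
$connection = "\\$providerServer\root\sms\site_$siteCode"

Start-Process -FilePath $cmPivotPath -ArgumentList @(
    "-SMS:Connection=`"$connection`"",
    "-SMS:CollectionID=`"$collectionId`""
)
```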
Software Center infrastructure improvements

This release includes the following infrastructure improvements to Software Center:

- Software Center now communicates with a management point for apps targeted to users as available. It doesn't use the application catalog anymore. This change makes it easier for you to remove the application catalog from the site.
- Previously, Software Center picked the first management point from the list of available servers. Starting in this release, it uses the same management point that the client uses. This change allows Software Center to use the same management point from the assigned primary site as the client.

Improved control over WSUS Maintenance

You now have more granular control over the WSUS maintenance tasks that Configuration Manager runs to maintain healthy software update points. In addition to declining expired updates in WSUS, Configuration Manager can now add non-clustered indexes to the WSUS database. The indexes improve the performance of the WSUS cleanup that Configuration Manager initiates. On each SUSDB used by Configuration Manager, indexes are added to the following tables:

- tbLocalizedPropertyForRevision
- tbRevisionSupersedesUpdate

Permissions

When the WSUS database is on a remote SQL server, the site server's computer account needs the following SQL permissions:

- Creating an index requires ALTER permission on the table or view. The site server's computer account must be a member of the sysadmin fixed server role, or of the db_ddladmin and db_owner fixed database roles. For more information about creating an index and the required permissions, see CREATE INDEX (Transact-SQL).
- The CONNECT SQL server permission must be granted to the site server's computer account. For more information, see GRANT Server Permissions (Transact-SQL).

Note: If the WSUS database is on a remote SQL server using a non-default port, the indexes might not be added. You can create a server alias using SQL Server Configuration Manager for this scenario. Once the alias is added and Configuration Manager can make a connection to the WSUS database, the indexes will be added.

Try it out! Try to complete the tasks. Then send Feedback with your thoughts on the feature.

- In the Configuration Manager console, navigate to Administration > Overview > Site Configuration > Sites.
- Select the site at the top of your Configuration Manager hierarchy.
- Click Configure Site Components in the Settings group, and then click Software Update Point to open Software Update Point Component Properties.
- In the WSUS Maintenance tab, select Add non-clustered indexes to the WSUS database.
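To make the index change easier to picture, here is a minimal sketch of the kind of non-clustered indexes involved, expressed as a manual T-SQL check run through PowerShell. Only the two table names come from the feature description above; the index names, key columns, database, and instance name are illustrative assumptions drawn from common WSUS maintenance guidance. Configuration Manager creates its indexes automatically once the option is enabled, so you would not normally need to run anything like this yourself.

```powershell
# Requires the SqlServer (or SQLPS) module for Invoke-Sqlcmd.
# All names below are assumptions for illustration; adjust for your environment.
$instance = 'WSUSSQL01'   # assumed SQL Server instance hosting the WSUS database
$database = 'SUSDB'

$query = @"
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'nclLocalizedPropertyID')
    CREATE NONCLUSTERED INDEX nclLocalizedPropertyID
        ON dbo.tbLocalizedPropertyForRevision (LocalizedPropertyID);

IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'nclSupersededUpdateID')
    CREATE NONCLUSTERED INDEX nclSupersededUpdateID
        ON dbo.tbRevisionSupersedesUpdate (SupersededUpdateID);
"@

Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query $query
```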
Pre-cache driver packages and OS images

Task sequence pre-cache now includes additional content types. Pre-cache content previously only applied to OS upgrade packages. Now you can use pre-caching to reduce bandwidth consumption of OS images and driver packages.

Try it out! Try to complete the tasks. Then send Feedback with your thoughts on the feature.

- Create OS images for specific architectures and languages. Specify the Architecture and Language on the Data Source tab of the package. To determine which OS image it downloads during pre-caching, the client evaluates the architecture and language values.
- Create driver packages for specific hardware models. Specify the Model on the General tab of the package. To determine which driver package it downloads during pre-caching, the client evaluates the model against the Win32_ComputerSystemProduct WMI property.
- Create a task sequence with the following steps:
  - More than one Apply OS Image step with conditions for the different languages and architectures.
  - More than one Apply Driver Package step with conditions for the different models.
  Tip: For an example of conditional steps with the Upgrade OS step, see Configure the pre-cache feature.
- Deploy the task sequence. For the pre-cache feature, configure the following settings:
  - On the General tab, select Pre-download content for this task sequence.
  - On the Deployment settings tab, configure the task sequence as Available.
  - On the Scheduling tab, choose the currently selected time for the setting Schedule when this deployment will be available. Because you choose the currently selected time, the available time is already in the past when a targeted client receives this policy, so the pre-cache download starts right away. If the client receives this policy but the available time is in the future, the client doesn't start pre-caching content until the available time occurs.
  - On the Distribution Points tab, configure the Deployment options settings. If the content isn't pre-cached before a user starts the installation, the client uses these settings.

For more information on pre-caching behavior and functionality, see Configure pre-cache content.

Improvements to OS deployment

This release includes the following improvements to OS deployment:

- Based on your UserVoice feedback, this release adds the following two PowerShell cmdlets to create and edit the Run Task Sequence step:
  - New-CMTSStepRunTaskSequence
  - Set-CMTSStepRunTaskSequence
- Based on your UserVoice feedback, there's a new task sequence variable, SMSTSRebootDelayNext. Use this new variable with the existing SMSTSRebootDelay variable. If you want any later reboots to happen with a different timeout than the first, set SMSTSRebootDelayNext to a different value in seconds. For example, you want to give users a 60-minute reboot notification at the start of a Windows 10 in-place upgrade task sequence. After that first long timeout, you want additional timeouts to only be 60 seconds. Set SMSTSRebootDelay to 3600 and SMSTSRebootDelayNext to 60. (A hedged scripting sketch of this example appears at the end of this article.)

Next steps

For more information about installing or updating the technical preview branch, see Technical preview.

For more information about the different branches of Configuration Manager, see Which branch of Configuration Manager should I use?

Feedback
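To illustrate the reboot-delay example referenced above, here is a minimal sketch of a script step that sets both variables from inside a running task sequence. The Microsoft.SMS.TSEnvironment COM object is the standard way scripts read and write task sequence variables; the values simply mirror the 60-minute/60-second example in the text, and whether you set these via a script or a Set Task Sequence Variable step is a design choice for your own sequences.

```powershell
# Runs inside a task sequence (for example, in a Run PowerShell Script step).
# Sets a long delay for the first reboot notification and a short delay for
# later ones, mirroring the 3600/60 example above.
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

$tsenv.Value('SMSTSRebootDelay')     = '3600'  # first reboot: 60-minute notification
$tsenv.Value('SMSTSRebootDelayNext') = '60'    # subsequent reboots: 60 seconds
```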
https://docs.microsoft.com/en-us/configmgr/core/get-started/2019/technical-preview-1904
2020-01-17T17:52:22
CC-MAIN-2020-05
1579250589861.0
[array(['media/4021125-o365-dashboard.png', 'Office 365 ProPlus upgrade readiness dashboard'], dtype=object)]
docs.microsoft.com