content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
Difference between revisions of "Joomla 1.6 Beginners" (From Joomla! Documentation; redirect page)

Revision as of 18:04, 8 May 2010 by Horus 68: new page containing {{incomplete}}, {{future|1.6}}, {{RightTOC}} and a "Get Ready to Install" section:

Getting started on your Joomla Web site is easy to do. Many hosting services offer a simple one-click installation, but if you'd rather be more hands-on, have more control, or are your own host, all you need is a Web server with PHP and MySQL. Most hosts provide these as part of their basic package of services.
* download a copy of Joomla
* install

Latest revision as of 21:13, 18 January 2014 by Tom Hutchison (redirect to overview; 6 intermediate revisions by 4 users not shown): the entire page was replaced with #REDIRECT [[Joomla 1.6 Overview]].
https://docs.joomla.org/index.php?title=Joomla_1.6_Beginners&diff=107069&oldid=27156
2016-02-06T07:36:03
CC-MAIN-2016-07
1454701146196.88
[]
docs.joomla.org
Car Paint has components for a paint layer with embedded metal flakes, a clear-coat layer, and a Lambertian dirt layer. Car Paint is available as both a mental ray material and shader; both have identical parameters, and support the following unique characteristics of real-world car paint: You can assign a map or shader to every parameter or “component” of the Car Paint material/shader. The small map buttons to the right of the color swatches and numeric controls open the Material/Map Browser , where you can select a map or shader for that component. These buttons are shortcuts: You can also use the corresponding buttons on the Maps rollout. If you have assigned a map or shader to one of these color components, the button displays the letter M. An uppercase M means that the corresponding map is assigned and active. A lowercase m means that the map is assigned but inactive (turned off). Click the color swatch to choose the ambient light component. Click the map button to assign an Ambient Color map. This button is a shortcut: You can also assign an Ambient Color map on the Maps rollout. Click the color swatch to choose the base diffuse color of the material. Click the map button to assign a Diffuse Color map. This button is a shortcut: You can also assign a Diffuse (Base) Color map on the Maps rollout. The falloff rate of the color towards the edge. Higher values make the edge region narrower; lower values make it wider. The useful range is 0.0 to approximately 10.0, where the value 0.0 turns the effect off. Default=1.0. Color shift due to view angle, shifting between a red base color and a blue edge color (atypical colors chosen for demonstration purposes) with varying Edge Bias values The falloff rate of the color towards the light. Higher values make the colored region facing the light smaller/narrower; lower values make it larger/wider. The useful range is 0.0 to approximately 10.0, where the value 0.0 turns the effect off. Default=8.0. Color shift due to view angle, shifting between a red base color and a green light facing color (atypical colors chosen for demonstration purposes) with varying Light Facing Color Bias values The amount of ray-traced reflection in the flakes, which allows glittery reflections of, for example, an HDRI environment. The default value of 0.0 turns the effect off. This effect should generally be very subtle; a value of 0.1 is often enough. The final intensity of reflections also depends on the Flake Color and Flake Weight values. The distance at which the influence of the flakes fades out. The default value of 0.0 disables fading. Any positive value causes the Flake Weight value to be modulated so that it reaches zero at this distance. Because flakes are relatively small, they can introduce rendering artifacts if their visual density becomes significantly smaller than a pixel. If the oversampling of the rendering is set high, small flakes can also potentially trigger massive oversampling and hence overlong rendering times needlessly, because the averaging caused by the oversampling will essentially cancel out the flake effect. If you experience these issues, use Flake Decay Distance to counteract them. Flakes at different distances with no flake decay. The farthest flakes might cause flicker in animations, or trigger unnecessary oversampling and long render times (rendered here with low oversampling for illustrative purposes). Using flake decay. The flake strength diminishes with distance. 
The same intentionally low oversampling as in the previous image has been used. Specular Reflections rollout Click the color swatch to change the color of the primary specular highlight. Click the map button to assign a Specular Color map. This button is a shortcut: You can also assign a Specular Color map on the Maps rollout. When on, enables a special mode on the primary specular highlight called glazing. By applying a threshold to the specular highlight, it makes the surface appear more polished and shiny. For a new sports car with a lot of wax, turn this on. For a beat-up car in the junkyard, turn it off. Default=on. Left to right: Flake specularity only; standard specularity; "glazed" mode enabled; "glazed" mode specularity with flakes Click the color swatch to change the color of the reflections in the clear-coat layer. By default this is white, which is the typical color. Click the map button to assign a Reflection map. This button is a shortcut: You can also assign a Reflection map on the Maps rollout. Dirty Layer (Lambertian) rollout Real cars are rarely clean. This shows the dirt layer (hand-painted dirt-placement map), including a bump map applied in the dirty regions. A simple Lambertian dirt layer covers the underlying paint and clear-coat layers. This rollout lets you assign a map or shader to any Car Paint parameter. You can also assign maps and shaders on the rollout where the parameter first appears. The principal value of this rollout is that it also lets you toggle a parameter's shader, using the checkbox, without removing the map. The button to the right of each main shader button is for shaders that can return multiple parameters. If a shader that returns multiple parameters is assigned to the component, the button's tooltip shows the parameter name.
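The Flake Decay Distance behaviour described above (a positive value modulates the Flake Weight so that it reaches zero at that distance) can be pictured with a small Python sketch. The linear ramp used here is an assumption; the page only states that the weight fades to zero at the given distance.

    def effective_flake_weight(flake_weight, distance, decay_distance):
        """Modulate the Flake Weight with viewing distance.

        A decay_distance of 0.0 disables fading (the documented default); any
        positive value fades the weight out so it reaches 0.0 at that distance.
        The linear falloff shape is an illustrative assumption.
        """
        if decay_distance <= 0.0:
            return flake_weight
        fade = max(0.0, 1.0 - distance / decay_distance)
        return flake_weight * fade

    # Flakes close to the camera keep their full weight; at or beyond the decay
    # distance they vanish, avoiding sub-pixel flicker and needless oversampling.
    print(effective_flake_weight(1.0, distance=2.0, decay_distance=10.0))   # 0.8
    print(effective_flake_weight(1.0, distance=12.0, decay_distance=10.0))  # 0.0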
http://docs.autodesk.com/3DSMAX/15/ENU/3ds-Max-Help/files/GUID-1CD21856-588A-4A05-AC0A-88489F5F9C84.htm
2016-02-06T07:05:01
CC-MAIN-2016-07
1454701146196.88
[]
docs.autodesk.com
See also: Differences between standard ECMA-367 and the Eiffel Software implementation

Once manifest strings:

    s := once "abc"
    io.put_string (once "Hello World!")

Once manifest strings are not created every time they are accessed. Instead, one instance is created at the first access and then reused for subsequent accesses. In a multithreaded application, one instance is created per thread.

Operator and bracket aliases:

    negate alias "-": like Current ...
    multiply alias "*" (other: like Current): like Current ...
    item alias "[]" (index: INTEGER): G ...

The first two declarations can be used similarly to the features prefix "-" and infix "*". The last one can be used to make feature calls with bracket expressions such as letter := letters [i]. Operator and bracket aliases can also be used in a rename subclause to change the alias name associated with a feature.

    item alias "[]" (index: INTEGER): G assign put ...
    put (value: G; index: INTEGER) ...

Given the declarations above, the following instructions become equivalent:

    x.put (x.item (i + 1), i)
    x.item (i) := x.item (i + 1)
    x [i] := x [i + 1]

    if true then [else(if) ...] end

i.e. code that has an empty compound for a primary condition which is the constant true.

    i := (i + i) |>> 1

Previously, this assigned -128 to i of type INTEGER_8 if the initial value of i was -128. Now this instruction assigns 0.
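One way to see why the INTEGER_8 result changes is a small Python sketch of 8-bit wraparound. The interpretation of why the values differ (whether the sum wraps to INTEGER_8 before or after the shift) is an assumption; only the -128 and 0 outcomes come from the release notes above.

    def wrap8(x):
        """Wrap an integer into the INTEGER_8 range [-128, 127] (two's complement)."""
        return ((x + 128) % 256) - 128

    i = -128

    # Old behaviour (as quoted): the sum is kept wide, shifted, then narrowed.
    old_result = wrap8((i + i) >> 1)   # (-256) >> 1 = -128  ->  -128

    # New behaviour (as quoted): the sum wraps to INTEGER_8 first, then shifts.
    new_result = wrap8(i + i) >> 1     # wrap8(-256) = 0     ->  0

    print(old_result, new_result)      # -128 0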
http://docs.eiffel.com/eiffelstudio/tools/eiffelstudio/reference/30_compiler/60_version_history/06_56_release/index.html
2008-05-15T05:35:42
crawl-001
crawl-001-007
[]
docs.eiffel.com
Release notes for EiffelStudio 5.6 related to the debugger

Graphical environment
- The stack, objects and evaluator tools now use a grid display instead of a simple tree view: a mixed multicolumn list and tree view that displays items only when they are shown, which makes the debugger much faster at displaying values.
- The evaluator tool, renamed the "watch tool", has more features than before, such as browsing object values directly in the tool, moving expressions up and down, dropping objects as themselves, and so on. The new watch tool can be used instead of the objects view to display the current object, dropped objects and, of course, expressions. You can close watch tools and create new ones.
- Each tool has its own hexadecimal/decimal format switch command.
- Most of the debugging tools are now inside a notebook, and are dockable outside this notebook.
- Various minor bug fixes and improvements regarding the interface.

Debugger engine
- Fixed a memory leak in the `estudio' process when conditional breakpoints are enabled.
- Improved speed of execution when conditional breakpoints are enabled (about 20 times faster).
- Fixed a bug where, after killing a debugged process, the debugger could not be launched anymore.
- Improved speed of STRING and SPECIAL manipulation in the debugger.
- Dotnet: fixed a memory/handle leak when debugging a dotnet system.
- Dotnet: fixed memory corruption while debugging a dotnet system.
- Dotnet: EiffelStudio no longer hangs (100% CPU) during dotnet debugging.
- Dotnet: made EiffelStudio much more stable while debugging a dotnet system.
- Dotnet: fixed various issues which occurred on killing the debugged application.
- Dotnet: improved support for dotnet v2.0.x debugging.
- Dotnet: fixed expression evaluation on pure dotnet objects.
- Evaluation: fixed a stale error status on expressions (sometimes, when an expression had an error, changing it to a valid expression did not reset the error status).
- Evaluation: improved error message reporting for expression evaluation.
- Evaluation: improved the stability and the validity of expression results.
- Breakpoint: fixed some issues related to setting and removing breakpoints, and to breakpoints remaining hidden.
- Breakpoint: the debugger now stops on a conditional breakpoint when the condition's result is True, but also if the expression is not supported or if it raises an exception.

Current restrictions and known issues: you will find a list here.
http://docs.eiffel.com/eiffelstudio/tools/eiffelstudio/reference/40_debugger/80_version_history/Eiffel56.html
2008-05-15T05:35:25
crawl-001
crawl-001-007
[]
docs.eiffel.com
Department of Public Health and Environment Appendix A Criteria for Control of Vapors from Gasoline Transfer to Storage Tanks I. Drop Tube Specifications. Submerged fill is specifically required. The drop tube must extend to within 15.24 cm (6 in.) of the tank bottom. II. Vapor Hose Return. Vapor return line and any manifold must be minimum 7.6 cm (3 in.) ID. All tanks must be provided with individual overfill protection. (Liquid must not be allowed in the vent line or vapor recovery line.) Disconnect on liquid line should assure that all liquid in the hose is drained into the storage tank. The requirements for overfill protection as specified may be waived for existing storage tanks when it is demonstrated to the satisfaction of the appropriate local Fire Marshal, and where applicable, the State Oil Inspection Office that the installation of overfill protection devices on existing tanks is physically not possible. III. Size of Vapor Line Connections. For separate vapor lines, nominal three inch (7.6 cm) or larger connections must be utilized at the storage tank and truck. However, short lengths of 2-inch (5.1 cm) vertical pipe no greater than 91.4 cm (3 ft.) long are permissible if the fuel delivery rate is less than 400 gallons per minute. Where concentric (coaxial) connections are utilized, a 45 cm2 (7 sq. in.) area for vapor return shall be provided. Four-inch concentric designs are acceptable only when using a venturi-shaped outer tube or where the normal drop rate of 1,700 liters per minute (450 gpm) is reduced by at least 25%. Six-inch (15.24 cm) risers should be installed in new stations with concentric connections. IV. Type of Liquid Fill Connection. Vapor tight caps are required for the liquid fill connection for all systems. A positive closure utilizing a gasket is necessary to prevent vapors from being emitted at ground level. Cam-lock closures meet this requirement. Dry break closures are preferred. V. Tank Truck Inspection. Tank trucks are specifically required to be vapor-tight and to have valid leak-tight certification. The visual inspection procedure must be conducted at least once every six months to ensure properly operating manifolding and relief valves, using the test procedure of Appendix D.B. VI. Dry Break on Underground Tank Vapor Riser. Dry-break closures are required to assure transfer of displaced vapors to the truck and to prevent ground-level, gasoline-vapor emissions caused by failure to connect the vapor return line to the underground tanks (closure on riser to mate with opening on hose). These devices keep the tank sealed until the hose is connected to the underground tank. Concentric couplers without dry-breaks are required to have a dry-break on the vapor line connection to the coupler itself, rather than on the riser pipe from the storage tank. The liquid fill riser should be provided with a cap having a positive closure (threaded or latched). VII. Equipment Ensuring Vapor-Hose Connection During Gasoline Deliveries. An equipment system aboard the tank truck shall insure (barring deliberate tampering) that a vapor return hose is connected from the truck's vapor return line to the tank receiving gasoline. VIII. Vent Line Restriction Devices. Vent line restriction devices are required. 
They both improve recovery efficiency and, as an integral part of any system, assure that the vapor return line is connected during transfer. If the liquid fill line were attached to the underground tank and the vapor return line were disconnected, then dry break closures would seal the vapor return path to the truck, forcing all vapors out the vent line. In such instances, a restriction device on this vent line greatly reduces fill rate, warning the operator that the vapor line is not connected. Both of the following devices must be used. (a) An orifice of one-half to three-fourth inch (1.25 - 1.9 cm) ID. (b) A pressure/vacuum relief valve set to open at (1) a positive gauge-pressure greater or equal to five inches of water (9 torr) and at (2) a negative gauge-pressure greater or equal to five inches of water (9 torr). IX. Fire and Safety Regulations . All new or modified installations must comply in their entirety with all code requirements including NFPA, Pamphlet 30 (fiberglass is preferred for new manifold lines). For any questions concerning compliance, please contact State Oil Inspection or your local Fire Marshal. X. State Oil Inspection . Requirements of the State Oil Inspection office make accurate measurements of the liquid in the underground tank necessary. Vapor-tight gauging devices will be required in all systems designed such that a pressure other than atmospheric will be held or maintained in the storage tank. The volume of liquid in the tanks maintained at atmospheric pressure may be determined with a stick through the submerged drop tube or through a separate submerged gauging tube extending to within 15.24 cm (6 in.) of the tank bottom. Appendix B Criteria for Control of Vapors From Gasoline Transfer at Bulk Plants (Vapor Balance System) I. Storage Tank Requirements: A. Drop Tube Specification : Underground tanks must contain a drop tube that extends to within six inches (15.24 cm) of the tank bottom. All top loaded above-ground tanks must contain a similar drop tube. Above-ground tanks using bottom loading, where the inlet is flush with the tank bottom, must meet the submerged fill requirement. B. Size of Vapor Lines from Storage Tanks to Loading Rack : See nomograph (Attachment 1). NOTE: Affected sources are free to choose a pipe diameter different from the one suggested by the nomograph if sufficient justification and documentation is presented. C. Pressure Relief Valves: All pressure relief valves and valve connections must be checked periodically for leaks, and be repaired as required. The relief valve pressures should be set in accordance with Sections 2-2.5.1 and 2-2.7.1 inclusive of the current National Fire Protection Agency Pamphlet No. 30. D. Liquid Level Check Port : Access for checking liquid level by other than a vapor-tight gauging system shall be vapor-tight when not being used. Tank level shall be checked prior to filling to avoid overfills. E. Miscellaneous Tank Openings : All other tank openings, e.g., tank inspection hatches, must be vapor tight when not being used, and must be closed at all times during transfer of fuel. F. Storage Tank Overfill Protection : Except for concentric (coaxial) delivery systems, underground tanks must have ball check valves (stainless steel ball). Tanks with concentric delivery systems must have Division-approved overfill protection, (e.g., cutoff pressure-switch in vent line). II. Loading Rack Requirements: A. 
: A vapor-tight bottom-loading or top-loading system using submerged fill with a positive seal, e.g., the Wiggins (tm) system, is required. NOTE: Bulk plants delivering solely to exempt accounts are required to have submerged fill, but loading need not be vapor-tight. B. Dry-Break on Storage Tank Vapor Return Line: A dry-break is required to prevent ground-level gasoline vapor emissions during periods when gasoline transfer is not being made. This device keeps the tank sealed until the vapor return hose is connected. III. Tank Truck* Requirements: A. Vapor Return Modification: Tank trucks must be modified to recover vapors during loading and unloading operations. NOTE: Tank trucks making deliveries solely to exempt accounts do not require this modification. However, 97% submerged fill is required when top loading. B. : Bottom loading or top loading using submerged fill with a positive seal is required for tank trucks modified for vapor recovery. NOTE: When loading a tank truck with this modification without the vapor return hose connected (this is allowed at bulk plants servicing exempt accounts returning without collected vapors in the tank), the requirements of National Fire Protection Agency Pamphlet No. 385, "Loading and Unloading Venting Protection in Tank Vehicles, Section 2219, Paragraph c", must be met. C. Vapor Return Hose Size: A minimum three-inch (7.6 cm) ID vapor return hose is required. D. Tank Truck Inspection: Tank trucks are required to be vapor-tight and have valid leak-tight certification. Periodic visual inspection is necessary to insure properly operating manifolding and relief valves. * The term "tank truck" is meant to include all trucks with tanks used for the transport of gasoline, such as tank wagons, account trucks and transport trucks.

Appendix C Minimum Cooling Capacities for Refrigerated Freeboard Chillers on Vapor Degreasers

The specifications in this Appendix apply only to vapor degreasers that have both condenser coils and refrigerated freeboard chillers. (The coolant in the condenser coils is normally water.) The amount of refrigeration capacity is expressed in Calories/Hour per meter of perimeter. This perimeter is measured at the air/vapor interface. For refrigerated chillers operated below 0 °C, the following requirements apply:

DEGREASER WIDTH                        CALORIES/HR PER METER OF PERIMETER*    BTU/HR PER FOOT OF PERIMETER
Less than 1.1 meters (3.5 ft.)         165                                    200
1.1 - 1.8 meters (3.5 - 6.0 ft.)       250                                    300
1.8 - 2.4 meters (6.0 - 8.0 ft.)       335                                    400
2.4 - 3.0 meters (8.0 - 10.0 ft.)      415                                    500
Greater than 3.0 meters (10 ft.)       500                                    600

* Kilocalories (1 Kilocalorie = 4184.0 joules)

For refrigerated chillers operating above 0 °C, there shall be at least 415 Calories/Hr. per meter of perimeter (500 BTU/Hr-ft.), regardless of size. Definition: "Air/Vapor Interface" means the surface defined by the top of the solvent vapor layer within the confines of a vapor degreaser.

Appendix D Test Procedures for Annual Pressure/Vacuum Testing of Gasoline Transport Tanks

A. Testing The delivery tank, mounted on either the truck or trailer, is pressurized, isolated from the pressure source, and the pressure drop recorded to determine the rate of pressure change. A vacuum test is to be conducted in a similar manner. The Division shall provide forms which designate all required information to be recorded by the testing agency. B. 
Visual Inspection The entire tank, including domes, dome vents, cargo tank, piping, hose connections, hoses and delivery elbows, shall be inspected for wear, damage, or misadjustment that could be a potential leak source. Inspect all rubber fittings except those in piping which are not accessible. Any part found to be defective shall be adjusted, repaired, or replaced as necessary. (Safety note: it is strongly recommended that testing be done outside, unless tank is first degassed (e.g., steamcleaned). No "hot work" or spark-producing procedures should be undertaken without first degassing). C. Equipment Requirements 1. Necessary equipment. a. Source of air or inert gas of sufficient quantity to pressurize tanks to 27.7 inches of water (1.0 psi; 52 torr) above atmospheric pressure. b. Water manometer with 0 to 25 inch range (0-50 torr); with scale readings of 0.1 inch (or 0.2 torr). c. Test cap for vapor line with a shut-off valve for connection to the pressure and vacuum supply hoses. The test cap is to be equipped with a separate tap for connecting with manometer. d. Cap for the gasoline delivery hose. e. Vacuum device (aspirator, pump, etc.) of sufficient capacity to evacuate tank to ten (10) inches of water (20 torr). 2. Recommended equipment a. In-line, pressure-vacuum relief valve set to activate at one (1) psi (52 torr) with a capacity equal to the pressurizing or evacuating pumps. (Note: This is a safety measure to preclude the possibility of rupturing the tank). b. Low pressure (5 psi (250 torr) divisions) regulator for controlling pressurization of tank. D. Vacuum and Pressure Tests of Tanks 1. Pressure Test a. The dome covers are to be opened and closed. b. The tank shall be purged of gasoline vapor and tested empty. The tank may be purged by any safe method such as flushing with diesel fuel, or heating oil. (For major repairs it is recommended that the tank be degassed by steam cleaning, etc.) c. Connect static electrical ground connections to tank. Attach the delivery and vapor hoses, remove the delivery elbows and plug the liquid delivery fittings. (The latter can normally be accomplished by shutting the delivery valves). d. Attach the test cap to the vapor recovery line of the delivery tank. e. Connect the pressure (or vacuum) supply hose and, optionally, the pressure-vacuum relief valve to the shut-off valve. Attach a manometer to the pressure tap on the vapor-hose cap. Attach pressure source to the hose. f. Connect compartments of the tank internally to each other if possible. g. Open shut-off valve in the vapor recovery hose cap. Applying air pressure slowly, pressurize the tank, or alternatively the first compartment, to 18 inches of water (35 torr). h. Close the shut-off valve, allow the pressure in the delivery tank to stabilize (adjust the pressure if necessary to maintain 18 inches of water (35 torr), record the time and initial pressure; begin the test period. i. At the end of five (5) minutes, record the final time, pressure, and pressure change. Disconnect the pressure source from the pressure/vacuum supply hose, and slowly open the shut-off valve to bring the tank to atmospheric pressure. j. Repeat for each compartment if they were not interconnected. 2. Vacuum Test a. Connect vacuum source to pressure and vacuum supply hose. b. Slowly evacuate the tank, or alternatively the first compartment, to six (6) inches of water (12 torr). 
Close the shut-off valve, allow the pressure in the delivery tank to stabilize (adjust the pressure if necessary to maintain six (6) inches of water (12 torr) vacuum), record the initial pressure and time; begin the test period. At the end of five (5) minutes, record the final pressure, time, and pressure change. c. Repeat for each compartment if they were not interconnected. E. Leak Check of Vapor Return Valve 1. After passing the vacuum and pressure tests, by making any needed repairs, pressurize the tank as in D.1. above to eighteen (18) inches of water (35 torr). 2. Close the internal valve(s) including the vapor valve(s) and "fire valves." 3. Relieve the pressure in the vapor return line to atmospheric pressure, leaving relief valve open to atmospheric pressure. 4. After five (5) minutes, seal the vapor return line by closing relief valve(s). Then open the internal valves including the vapor valve(s) and record the pressure, time, and pressure change. (To trace a leaking vapor valve it may be advantageous to open each vapor valve one at a time and record the pressure after each.) 5. The leak rate attributed to the vapor return valve shall be calculated by subtracting the pressure change in the most recent pressure test per D.1.i. above from the pressure change in E.4.
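The Appendix D tests above reduce to simple bookkeeping: record the gauge reading at the start and end of a five-minute hold and report the change, then (for step E.5) subtract the most recent pressure-test change from the vapor-valve-check change. A minimal Python sketch of that bookkeeping follows; the example readings are illustrative, and no pass/fail threshold is given in this excerpt, so none is coded.

    IN_H2O_PER_PSI = 27.7   # conversion used throughout Appendix D (1.0 psi ~ 27.7 in. H2O)

    class HoldTest:
        """One five-minute pressure (or vacuum) hold on a tank or compartment."""
        def __init__(self, start_in_h2o, end_in_h2o, minutes=5.0):
            self.start_in_h2o = start_in_h2o   # initial gauge reading, inches of water
            self.end_in_h2o = end_in_h2o       # reading at the end of the hold period
            self.minutes = minutes             # Appendix D specifies five minutes

        @property
        def change_in_h2o(self):
            return self.start_in_h2o - self.end_in_h2o

    # Pressure test (D.1): pressurize to 18 in. H2O and hold; readings are illustrative.
    pressure_test = HoldTest(start_in_h2o=18.0, end_in_h2o=17.4)

    # Vapor return valve check (E.4), also starting from 18 in. H2O.
    valve_check = HoldTest(start_in_h2o=18.0, end_in_h2o=16.9)

    # Step E.5: leak rate attributed to the vapor return valve.
    vapor_valve_leak = valve_check.change_in_h2o - pressure_test.change_in_h2o
    print(pressure_test.change_in_h2o, vapor_valve_leak)   # 0.6 0.5 (in. H2O)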
http://lib.convdocs.org/docs/index-122000.html?page=9
2020-03-28T12:48:16
CC-MAIN-2020-16
1585370491857.4
[]
lib.convdocs.org
Simple workflow is generated by an Alfresco space that has a defined workflow content rule. The content rule dictates how the content entering, leaving, or currently residing in the space is managed. Advanced workflow is any workflow constructed using the Alfresco embedded workflow engine. You can start an advanced workflow with the Start Advanced Workflow Wizard or as part of Web Content Management (WCM). Advanced workflows are defined in your development environment or the Alfresco Workflow Designer. Alfresco includes two out-of-the-box workflows: Adhoc Task (for assigning a task to a colleague) and Review & Approve (for setting up review and approval of content). These are both basic examples of advanced workflows. In both examples, the content items are attached to the workflow. In WCM, the workflow is configured for a web form associated with the web project and/or workflow configured as part of the overall web project. The Submit process in WCM is a complex example of advanced workflow that moves content through review, approval, and publishing actions. Tasks resulting from advanced workflow are managed in your personal dashboard (My Alfresco).
https://docs.alfresco.com/4.2/concepts/cuh-workflow-intro.html
2020-03-28T12:48:47
CC-MAIN-2020-16
1585370491857.4
[]
docs.alfresco.com
Proposals are a means for suggesting changes to Bisq Network software components, infrastructure and processes. Introduction The Bisq DAO is a flat organization, with no command-and-control hierarchy available to make big decisions and carry them out. Usually this is not a problem, as most day-to-day changes happen without any need for organization-wide consensus. Certain kinds of changes, however, benefit from or even require it. What’s needed is a mechanism that allows any contributor to propose a change, and all other contributors to review it in order to arrive at an IETF-style rough consensus.[1] Proposals are that mechanism, and this document covers everything that participants need to know about the process. What proposals are good for Creating an entire new Bisq component or making a significant change to an existing one Changing something about the way contributors work together Infrastructure GitHub Proposals are managed as GitHub issues in the bisq-network/proposals repository. Roles Submitter The contributor(s) who write a proposal and carry it through to completion. Submitters are 100% responsible for the success of their proposals. Reviewer Other contributors who read, discuss and react to proposals. Any contributor may review a proposal, but no contributor is obligated to do so. This intentionally puts the onus on the submitter to ensure their proposal is relevant and well-written per the Guidelines below. Maintainer The contributor(s) responsible for proposals Infrastructure and Process.[2] Role Issue Role Team Duties Enforce the proposals Process detailed below. Monitor communications on the #proposalsSlack channel.[5] Keep this proposals documentation up to date.[6] Write a monthly report on the proposals maintainer Role Issue.[7] Rights Write access to the bisq-network/proposals repository. Process Step 0. Research Before you submit a proposal, do your homework! Discuss your idea with other contributors to see if a proposal is worth submitting at all; Search through existing proposals (both open and closed) to see whether something similar has already been proposed; Notice which among those proposals have been accepted and rejected, and why; Read this document to fully understand the proposal process and guidelines. Step 1. Submit Create a new GitHub issue in the bisq-network/proposals repository containing the text of your proposal, written and formatted per the guidelines below. A maintainer will quickly review your proposal and will either (a) assign it to you to indicate your ownership and responsibility over it, or (b) close it and label it as was:incorrect if it does not follow the guidelines below. Step 2. Review Once a proposal is submitted, a two-week review period follows. During this period, interested reviewers should read, discuss and ultimately react to the proposal as follows: 👍: I agree with the proposal and want to see it enacted 😕: I am uncertain about the proposal and I need more information 👎: I disagree with the proposal and do not want to see it enacted When reacting with a 😕 or 👎, add a comment explaining why. If you don’t, then don’t expect your opinion to have much weight or get addressed. If you do not understand or care about a given proposal, ignore it. Use comments on the proposal issue to discuss, ask questions, and get clarifications. Take lengthy discussions offline to Slack or elsewhere and then summarize them back on the issue. Step 3. 
Evaluate After the two-week review period is over, a maintainer will evaluate reactions to and discussions about the proposal and will close the issue with a comment explaining that it is approved or rejected based on whether a rough consensus was achieved. Approved proposals will be labeled with was:approved. Rejected proposals will be labeled with was:rejected. If rough consensus has not been achieved, e.g. because discussion is still ongoing, dissenting concerns have not been addressed, or the proposal has turned out to be contentious, the maintainer will indicate that they cannot close the proposal, and that it is up to the submitter to take next steps to move the proposal forward. If the proposal does not move forward after another two weeks, the maintainer will close and label it was:stalled. If there have been no or very few reactions to a proposal after the two-week period, the maintainer will close it and label it as was:ignored. Guidelines Write your proposal in a way that makes it as easy as possible to achieve rough consensus. This means that proposals should be as simple, focused, concrete and well-defined as possible. Your goal should be to make it as easy as possible for your fellow contributors to understand and agree with you. Take full responsibility for your proposal. It is not the maintainers' job, nor anyone else’s, to see your proposal succeed. If people aren’t responding or reacting to your proposal, it’s your job to solicit that feedback more actively. Never assume that anyone other than yourself is going to do the work described in your proposal. If your proposal does place expectations on other contributors, or requires them to change their behavior in any way, be explicit about that. Provide context. Make a strong case for your proposal. Link to prior discussions. Do not make your reader do any more work than they have to to understand your proposal. Format your proposal in Markdown. Make it a pleasure to read. In general, good proposals take time to research and write. Every minute you spend clearly and logically articulating your proposal is a minute that you save other contributors in understanding it. This diligence on your part will be appreciated and rewarded by others' attention. Cheaply written, "drive by" proposals that waste others' time will be closed immediately as was:incorrect.
https://docs.bisq.network/proposals.html
2020-03-28T11:13:35
CC-MAIN-2020-16
1585370491857.4
[]
docs.bisq.network
server as a reference to these deprecated features. system.alert system.alarm While documentation for deprecated features is present in this space, we recommend that users switch their systems over to the newer features at their earliest convenience.
https://docs.inductiveautomation.com/pages/viewpage.action?pageId=26021976
2020-03-28T12:11:53
CC-MAIN-2020-16
1585370491857.4
[]
docs.inductiveautomation.com
All content with label adaptor+archetype+configuration+examples+gridfs+hot_rod+infinispan+jboss_cache+jta+locking+read_committed+release+repeatable_read+scala+xaresource. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, partitioning, query, deadlock, lock_striping, jbossas, nexus, guide, schema, listener, cache, amazon, s3, grid, test, jcache, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, custom_interceptor, clustering, setup, eviction, out_of_memory, concurrency, import, index, events, batch, hash_function, buddy_replication, xa, write_through, cloud, mvcc, tutorial, jbosscache3x, xml, distribution, meeting, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, websocket, transaction, async, interactive, build, gatein, searchable, demo, cache_server, installation, ispn, client, migration, filesystem, jpa, tx, gui_demo, eventing, snmp, deployer, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, snapshot, docs, consistent_hash, batching, store, faq, as5, 2lcache, jsr-107, lucene, jgroups, rest more » ( - adaptor, - archetype, - configuration, - examples, - gridfs, - hot_rod, - infinispan, - jboss_cache, - jta, - locking, - read_committed, - release, - repeatable_read, - scala, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/adaptor+archetype+configuration+examples+gridfs+hot_rod+infinispan+jboss_cache+jta+locking+read_committed+release+repeatable_read+scala+xaresource
2020-03-28T12:28:29
CC-MAIN-2020-16
1585370491857.4
[]
docs.jboss.org
All content with label as5+batching+cache+gridfs+infinispan+installation+interactive+loader+non-blocking+notification+publish+replication+s3+xml. Related Labels: podcast, expiration, datagrid, coherence, interceptor, server, transactionmanager, dist, release, query, deadlock, intro, pojo_cache, archetype, lock_striping, jbossas, nexus, demos, guide, schema, listener, amazon,, batch, hash_function, buddy_replication, pojo, write_through, cloud, mvcc, tutorial, presentation, read_committed, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, websocket, transaction, async, xaresource, build, gatein, searchable, demo, scala, client, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, repeatable_read, webdav, hotrod, snapshot, docs, consistent_hash, store, whitepaper, jta, faq, spring, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - as5, - batching, - cache, - gridfs, - infinispan, - installation, - interactive, - loader, - non-blocking, - notification, - publish, - replication, - s3, - xml ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+batching+cache+gridfs+infinispan+installation+interactive+loader+non-blocking+notification+publish+replication+s3+xml
2020-03-28T12:45:00
CC-MAIN-2020-16
1585370491857.4
[]
docs.jboss.org
All content with label buddy_replication+client+coherence+ehcache+grid+gridfs+guide+hotrod+import+infinispan+notification+permission+read_committed+setup+testng+whitepaper. Related Labels: podcast, expiration, publish, datagrid, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, intro, archetype, pojo_cache, jbossas, lock_striping, nexus, schema, listener, cache, amazon, memcached, jcache, test, api, xsd, maven, documentation, roadmap, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, clustering, eviction, out_of_memory, concurrency, fine_grained, jboss_cache, events, hash_function, configuration, batch, loader, xa, pojo, write_through, cloud, remoting, mvcc, tutorial, presentation, murmurhash2, xml, jbosscache3x, distribution, jira, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, websocket, transaction, async, interactive, xaresource, build, searchable, demo, scala, cache_server, installation, command-line, migration, jpa, filesystem, article, user_guide, gui_demo, eventing, shell, client_server, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, webdav, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, docbook, lucene, jgroups, locking, rest, hot_rod more » ( - buddy_replication, - client, - coherence, - ehcache, - grid, - gridfs, - guide, - hotrod, - import, - infinispan, - notification, - permission, - read_committed, - setup, - testng, - whitepaper ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/buddy_replication+client+coherence+ehcache+grid+gridfs+guide+hotrod+import+infinispan+notification+permission+read_committed+setup+testng+whitepaper
2020-03-28T12:55:12
CC-MAIN-2020-16
1585370491857.4
[]
docs.jboss.org
All content with label async+br+client+events+hot_rod+infinispan+jboss_cache+jbosscache3x+jta+listener+mvcc+release+scala+write_behind+缓存. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, partitioning, query, deadlock, lock_striping, jbossas, nexus, guide, schema, cache, amazon, s3, grid, memcached, test, jcache, api, xsd, ehcache, maven, documentation, ec2, hibernate, aws, custom_interceptor, setup, clustering, eviction, gridfs, concurrency, out_of_memory, import, index, batch, configuration, hash_function, buddy_replication, loader, xa, write_through, cloud, remoting, tutorial, notification, read_committed, distribution, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, command-line, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, standalone, webdav, hotrod, snapshot, repeatable_read, docs, batching, consistent_hash, store, faq, as5, 2lcache, jsr-107, jgroups, lucene, locking, rest more » ( - async, - br, - client, - events, -.
https://docs.jboss.org/author/labels/viewlabel.action?ids=4456514&ids=4456534&ids=4456499&ids=4456528&ids=4456485&ids=4456479&ids=4456491&ids=4456573&ids=4456487&ids=4456527&ids=4456586&ids=4456467&ids=4456506&ids=4456559&ids=4456590
2020-03-28T13:20:37
CC-MAIN-2020-16
1585370491857.4
[]
docs.jboss.org
All content with label amazon+api+client+ec2+eventing+gridfs+gui_demo+infinispan+jcache+mvcc+query+snapshot+test+testng+缓存. Related Labels: publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, listener, cache, s3, grid, memcached, xsd, maven, documentation, wcm, write_behind, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, write_through, cloud, remoting, tutorial, notification, xml, read_committed, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, integration, cluster, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, installation, command-line, migration, non-blocking, filesystem, jpa, tx, shell, client_server, infinispan_user_guide, standalone, webdav, hotrod, repeatable_read, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest, hot_rod more » ( - amazon, - api, - client, - ec2, - eventing, - gridfs, - gui_demo, - infinispan, - jcache, - mvcc, - query, - snapshot, - test, - testng, - 缓存 ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/labels/viewlabel.action?ids=4456596&ids=4456516&ids=4456499&ids=4456597&ids=4456529&ids=4456481&ids=4456547&ids=4456479&ids=4456544&ids=4456586&ids=4456470&ids=4456460&ids=4456602&ids=4456456&ids=4456590
2020-03-28T11:40:39
CC-MAIN-2020-16
1585370491857.4
[]
docs.jboss.org
Entity API. The Lasso Entity API provides a RESTful way of getting data from the Danish Company Registry (CVR) and other data providers. With it you'll be able to get data about companies, people and places (production units). The following sections describe how to connect, get and search for entities. Looking for yearly reports (nøgletal)? This page contains information on how to get base information for entities; check out our Module APIs if you are looking for more specific data. To connect, add the query parameter accessKey to your request. In order to query for a specific entity like a company, you will need its unique identifier. In CVR this unique id is a CVR number for companies and a person id for a person; on Facebook it is a Facebook user id, and so on. Examples of other data sources in Lasso are MSCRM (Microsoft Dynamics CRM) and ECO (e-conomic), and for e-conomic it would be an integer. With this in mind, you can query any supported data source using only the Lasso Id. The following section explains how. Now that you know the basics, click here to learn how to get entities.
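As a rough illustration of the pattern described above, a request for a single company entity might look like the sketch below. The base URL and path layout are placeholders (hypothetical); only the accessKey query parameter and the use of a CVR number as the company identifier come from the text above.

    import requests

    BASE_URL = "https://api.example.com/entities"   # placeholder, not the real host
    ACCESS_KEY = "your-access-key"
    CVR_NUMBER = "12345678"                         # illustrative CVR number

    response = requests.get(
        f"{BASE_URL}/{CVR_NUMBER}",
        params={"accessKey": ACCESS_KEY},           # the key is passed as a query parameter
        timeout=10,
    )
    response.raise_for_status()
    company = response.json()
    print(company)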
https://docs.lassox.com/entityapi/gettingstarted/
2020-03-28T10:57:37
CC-MAIN-2020-16
1585370491857.4
[array(['/img/api_key.png', 'Locating your API key'], dtype=object) array(['/img/lasso-id.png', 'Understanding the Lasso Id'], dtype=object)]
docs.lassox.com
13th January 2020 Version 1.1.0 of the SAML service is now available, and has been updated to include the following: Fixes: - FLOW-692: Previously, the SAML service did not perform a re-check of assertion values when the service authorization point was called for a second time (joining the flow for example). With this fix, the SAML service now performs a re-check of assertion values for the user, ensuring that user authorization is performed correctly. This fix also aligns the token validity window with the SAML assertion conditions NotBefore and NotOnOrAfter. - FLOW-862: Assertions without expiration time configured are now by default only valid for a period of up to 14 minutes. This means that after 15 minutes a user running a flow protected by the SAML Service will be redirected to the Identity Provider (IdP) to obtain a new assertion if the IdP is sent an assertion without an expiration time (NotAfter condition). - FLOW-863: The login URL is no longer populated when a SAML authenticated user tries to run a flow that is then restricted for them by user or group permissions configured within Boomi Flow itself.
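The fixes above all revolve around the SAML assertion validity window (the NotBefore/NotOnOrAfter conditions, with a default validity applied when no expiration is present). A minimal sketch of that kind of re-check, assuming a roughly 15-minute default window as described for FLOW-862; the exact rule the service applies is not spelled out in these notes.

    from datetime import datetime, timedelta, timezone

    DEFAULT_VALIDITY = timedelta(minutes=15)  # fallback window when NotOnOrAfter is absent

    def assertion_is_valid(issued_at, not_before, not_on_or_after, now=None):
        """Re-check a stored assertion's validity window on each authorization call."""
        now = now or datetime.now(timezone.utc)
        if not_before is not None and now < not_before:
            return False
        # If the IdP sent no expiration, fall back to a fixed validity period.
        expiry = not_on_or_after or (issued_at + DEFAULT_VALIDITY)
        return now < expiry

    # Example: an assertion without NotOnOrAfter issued 20 minutes ago has expired,
    # so the user would be redirected to the IdP for a fresh assertion.
    issued = datetime.now(timezone.utc) - timedelta(minutes=20)
    print(assertion_is_valid(issued, None, None))  # False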
https://docs.manywho.com/saml-service-1-1-0/
2020-03-28T11:32:14
CC-MAIN-2020-16
1585370491857.4
[]
docs.manywho.com
Architecture description language

Here are my pontifications… In current software systems, an Architecture Description Language (ADL) is used to describe the architectures of component-based software systems. Let's take the example of a Windows Phone application: you, dear reader, may start with a game engine from an existing application on AppHub. Then you figure out how to modify the application using the poorly documented classes and pump something out that sells well on AppHub, with solid revenue from the ad banner. After a few months, an entrepreneur with some cash burning a hole in their pocket wants to purchase your app for high five figures (figures on the left side of the decimal point). There is a caveat: the entrepreneur is a software smartie and wants solid documentation or no deal. You write some pages in MS Word, run a class diagram and hand it to the entrepreneur. They look at the documentation and then tell you it is unacceptable; they need an architecture description. Oh-oh. All your life you have simply powered through code. Now someone is calling for documentation. And because the deal is delayed the buyer thinks of a couple more things they want: a test plan and a playability survey. Crap. The entrepreneur backs off and decides to pay your price for the app, but you have agreed to maintain it for the next year. Which shouldn't be too hard to do. The entrepreneur has some more cash as your app is doing well, and they want to create a version that uses the gyro. It turns out that your agreement with them says that, for the price of purchase, you have to do that work for the money you already got from them. And you spent most of the money you got on the graphic artist payout. When you dig into the code you realize that it appears monkeys wrote it; sadly, the monkey was you, and you forgot a bunch of stuff. Now this is just a story, but if you had used the automatic architectural tools in Visual Studio Ultimate you would have been able to get the app sold without the maintenance clause, and save the wear and tear on your brain and soul.

Here are some arcane articles on the concept of architecture description:
- Software Architecture Description supporting Component Deployment and System Runtime Reconfiguration
- Abstractions for Software Architecture and Tools to Support Them

Using the apparently secret Architectural Explorer in Visual Studio Ultimate:
How to generate Dependency graphs for .NET code:

Some of these tools require that you install the Visual Studio 2010 Feature Pack 2, for which you are supposed to be an MSDN subscriber. If you are a student or professor you can gain access to this Feature Pack through MSDNAA.
https://docs.microsoft.com/en-us/archive/blogs/socal-sam/architecture-description-language
2020-03-28T12:25:08
CC-MAIN-2020-16
1585370491857.4
[]
docs.microsoft.com
Annotations

Description: How to use the annotation design pattern to store arbitrary values on Python objects (Plone site, HTTP request) for storage and caching purposes.

Introduction

Annotations are a conflict-free way to stick attributes on arbitrary Python objects. Plone uses annotations for: Storing field data in Archetypes (Annotation storage). Caching values on the HTTP request object (plone.memoize cache decorators). Storing settings information in the portal or content object (various add-on products). See the zope.annotation package.

HTTP request example

Store cached values on the HTTP request during the life cycle of one request processing. This allows you to cache computed values if the computation function is called from different, unrelated code paths.

    from zope.annotation.interfaces import IAnnotations

    # Non-conflicting key
    KEY = "mypackage.something"

    annotations = IAnnotations(request)
    value = annotations.get(KEY, None)
    if value is None:
        # Compute value and store it on request object for further look-ups
        value = annotations[KEY] = something()

Content annotations

Overview and basic usage

If you want to extend any Plone content to contain "custom" settings, annotations are the recommended way to do it. Your add-on can store its settings in the Plone site root object using local utilities or annotations. You can store custom settings on content objects using annotations. By default, content annotations store: Assigned portlets and their settings. Archetypes content type fields using AnnotationStorage (like the text field on Document). Behavior data from the plone.behavior package. Example:

    # Assume context variable refers to some content item

    # Non-conflicting key
    KEY = "yourcompany.packagename.magicalcontentnavigationsetting"

    annotations = IAnnotations(context)

    # Store some setting on the content item
    annotations[KEY] = True

Advanced content annotation

The above example is enough for storing simple values as annotations. You may provide more complex annotation objects depending on your application logic on various content types. This example shows how to add a simple "Like / Dislike" counter on a content object.

    class LikeDislike(object):

        def __init__(self):
            ...

At this step it is essential to check that your custom annotation class can be pickled. In the Zope world, this means that you cannot hold in your annotation object any reference to a content object. Tip: Use the UID of a content object if you need to keep a reference to that content object in an annotation. The most pythonic recipe to get (and set if not existing) your annotation for a given key is:

    from zope.annotation import IAttributeAnnotatable, IAnnotations

    KEY = 'content.like.dislike'  # Its best place is config.py in a real app

    def getLikesDislikeFor(item):
        """Factory for LikeDislike as annotation of a contentish

        @param item: any annotatable object, thus any Plone content
        """
        # Ensure the item is annotatable
        assert IAttributeAnnotatable.providedBy(item)  # Won't work otherwise
        annotations = IAnnotations(item)
        return annotations.setdefault(KEY, LikeDislike())

This way, you're sure that: You won't create annotations on an object that can't support them. You will create a fresh new annotation mastered with your LikeDislike for your context object if it does not already exist. You can play with your LikeDislike annotation object as with any Python object; all attribute changes will be stored automatically in the annotations of the associated content object. 
Wrapping your annotation with an adapter

zope.annotation comes with a factory() function that transforms the annotation class into an adapter (possibly named as the annotation key). In addition, annotations created this way have location awareness, having __parent__ and __name__ attributes. Let's go back to the above sample and use the zope.annotation.factory() function.

    import zope.interface
    import zope.component
    import zope.annotation

    from zope.interface import implements
    from zope.annotation import factory
    from some.contenttype.interfaces import ISomeContent

    KEY = 'content.like.dislike'  # Its best place is config.py in a real app

    class ILikeDislike(zope.interface.Interface):
        """Model for like/dislike annotation"""

        def reset():
            """Reinitialize everything"""

        def likedBy(user_id):
            """User liked the associated content"""

        def dislikedBy(user_id):
            """User disliked the associated content"""

    class LikeDislike(object):

        implements(ILikeDislike)
        zope.component.adapts(ISomeContent)

        def __init__(self):
            # Does not expect arguments as usual adapters do
            # You can access the annotated object through ``self.__parent__``
            ...

    # Register as adapter (you may do this in ZCML too)
    zope.component.provideAdapter(factory(LikeDislike, key=KEY))

    # Let's play with some content
    item = getSomeContentImplementingISomeContent()  # Guess what :)

    # Let's have its annotation
    like_dislike = ILikeDislike(item)

    # Play with this annotation
    like_dislike.likedBy('joe')
    like_dislike.dislikedBy('jane')
    assert like_dislike.status() == (1, 1)
    assert like_dislike.__parent__ is item
    assert like_dislike.__name__ == KEY

Tip: Read a full doc / test / demo of zope.annotation.factory() in the README.txt file in the root of the zope.annotation package for more advanced usages.

Cleaning up content annotations

Warning: If you store full Python objects in annotations you need to clean them up during your add-on uninstallation. Otherwise, if the Python code is not present you can no longer import or export the Plone site (annotations are pickled objects in the database, and pickles no longer work if the code is not present). How to clean up annotations on content objects:

    # Imports assumed by this snippet
    from StringIO import StringIO
    from zope.annotation.interfaces import IAnnotations
    from Products.CMFCore.interfaces import IFolderish

    def clean_up_content_annotations(portal, names):
        """ Remove objects from content annotations in a Plone site.

        This is mostly to remove objects which might make the site un-exportable
        when eggs / Python code have been removed.

        @param portal: Plone site object
        @param names: Names of the annotation entries to remove
        """
        output = StringIO()

        def recurse(context):
            """ Recurse through all content on the Plone site """
            annotations = IAnnotations(context)

            for name in names:
                if name in annotations:
                    print >> output, "Cleaning up annotation %s on item %s" % (name, context.absolute_url())
                    del annotations[name]

            # Make sure that we recurse to real folders only,
            # otherwise contentItems() might be acquired from a higher level
            if IFolderish.providedBy(context):
                for id, item in context.contentItems():
                    recurse(item)

        recurse(portal)
        return output

Make your code persistence free

There is one issue with the above methods: you are creating new persistent classes, so your data needs your source code. That makes your code hard to uninstall (you have to keep the code for BBB, plus clean up the DB by walking through all objects). Another pattern to store data in annotations: use already existing persistent base classes instead of creating your own. 
Please use one of these: BTrees, PersistentList, PersistentDict. This pattern is used by the cioppino.twothumbs and collective.favoriting add-ons. How to achieve this:
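A minimal sketch of the persistence-free pattern, assuming the same IAnnotations access as in the earlier examples and a persistent.mapping.PersistentDict as the stored value (the key name is illustrative):

    from persistent.mapping import PersistentDict
    from zope.annotation.interfaces import IAnnotations

    KEY = "yourcompany.packagename.likes"  # illustrative, non-conflicting key

    def get_like_storage(context):
        """Return (creating if needed) a plain PersistentDict stored in annotations.

        Because PersistentDict ships with the ZODB, no custom persistent class is
        needed, and the add-on can be uninstalled without leaving broken pickles.
        """
        annotations = IAnnotations(context)
        if KEY not in annotations:
            annotations[KEY] = PersistentDict({"likes": 0, "dislikes": 0})
        return annotations[KEY]

    # Usage on some content object ``context``:
    # storage = get_like_storage(context)
    # storage["likes"] += 1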
https://docs.plone.org/develop/plone/misc/annotations.html
2020-03-28T12:44:25
CC-MAIN-2020-16
1585370491857.4
[]
docs.plone.org
default). The beauty of the Processing implementation is that you can add your own scripts, simple or complex ones, and they may then be used like any other module, piped into more complex workflows, etc. Test some of the preinstalled examples if you have R already installed (remember to activate R modules from the General configuration of Processing). OTB (also known as Orfeo ToolBox) is a free and open source library of image processing algorithms. It is installed by default on Windows through the OSGeo4W standalone installer (32 bit). Paths should be configured in Processing. In a standard OSGeo4W Windows installation, the paths will be: OTB application folder C:\OSGeo4W\apps\orfeotoolbox\applications OTB command line tools folder C:\OSGeo4W\bin On Debian and derivatives, it will be /usr/bin TauDEM is a suite of Digital Elevation Model (DEM) tools for the extraction and analysis of hydrologic information. Availability on various operating systems varies. Dissolve features based on a common attribute: Warning The last one is broken in SAGA <=2.10 Exercise for the reader: find the differences (geometry and attributes) between different methods.
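For context, in the QGIS 2.x Processing framework the third-party providers mentioned above (SAGA, OTB, TauDEM, R scripts) are all invoked through the same Python interface once their paths are configured. A rough sketch using the QGIS 2.x processing module follows; the SAGA algorithm id shown is a placeholder and varies by version, so look it up first.

    # Run inside the QGIS 2.x Python console, where the ``processing`` plugin
    # module is available.
    import processing

    # Search the algorithm registry for dissolve implementations from the
    # different backends (QGIS, SAGA, etc.).
    processing.alglist("dissolve")

    # Print the expected parameters of a specific algorithm before running it;
    # the id below is a placeholder -- use whatever alglist() reported.
    processing.alghelp("saga:polygondissolve")

    # processing.runalg(<algorithm id>, <parameters in the order shown by alghelp>)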
https://docs.qgis.org/2.6/en/docs/training_manual/processing/more_backends.html
2020-03-28T12:45:23
CC-MAIN-2020-16
1585370491857.4
[]
docs.qgis.org
CouplingModel¶ full name: tenpy.models.model.CouplingModel parent module: tenpy.models.model type: class Inheritance Diagram Methods - class tenpy.models.model. CouplingModel(lattice, bc_coupling=None, explicit_plus_hc=False)[source]¶ Bases: tenpy.models.model.Model Base class for a general model of a Hamiltonian consisting of two-site couplings. In this class, the terms of the Hamiltonian are specified explicitly as OnsiteTermsor CouplingTerms. Deprecated since version 0.4.0: bc_coupling will be removed in 1.0.0. To specify the full geometry in the lattice, use the bc parameter of the Lattice. - Parameters lattice ( Lattice) – The lattice defining the geometry and the local Hilbert space(s). bc_coupling ((iterable of) { 'open'| 'periodic'| int}) – Boundary conditions of the couplings in each direction of the lattice. Defines how the couplings are added in add_coupling(). A single string holds for all directions. An integer shift means that we have periodic boundary conditions along this direction, but shift/tilt by -shift*lattice.basis[0](~cylinder axis for bc_MPS='infinite') when going around the boundary along this direction. explicit_plus_hc (bool) – If True, the Hermitian conjugate of the MPO is computed at runtime, rather than saved in the MPO. onsite_terms¶ The OnsiteTermsordered by category. - Type {‘category’: OnsiteTerms} coupling_terms¶ The CouplingTermsordered by category. In a MultiCouplingModel, values may also be MultiCouplingTerms. - Type {‘category’: CouplingTerms} explicit_plus_hc¶ If True, self represents the terms in onsite_termsand coupling_termsand their hermitian conjugate added. The flag will be carried on the MPO, which will have a reduced bond dimension if self.add_coupling(..., plus_hc=True)was used. Note that add_onsite()and add_coupling()respect this flag, ensuring that the represented Hamiltonian is indepentent of the explicit_plus_hc flag. - Type - add_local_term(strength, term, category=None, plus_hc=False)[source]¶ Add a single term to self. The repesented term is strength times the product of the operators given in terms. Each operator is specified by the name and the site it acts on; the latter given by a lattice index, see Lattice. Depending on the length of term, it can add an onsite term or a coupling term to onsite_termsor coupling_terms, respectively. - Parameters strength (float/complex) – The prefactor of the term. term (list of (str, array_like)) – List of tuples (opname, lat_idx)where opname is a string describing the operator acting on the site given by the lattice index lat_idx. Here, lat_idx is for example [x, y, u] for a 2D lattice, with u being the index within the unit cell. category – Descriptive name used as key for onsite_termsor coupling_terms. plus_hc (bool) – If True, the hermitian conjugate of the terms is added automatically. add_onsite(strength, u, opname, category=None, plus_hc=False)[source]¶ Add onsite terms to onsite_terms. Adds \(\sum_{\vec{x}} strength[\vec{x}] * OP\) to the represented Hamiltonian, where the operator OP=lat.unit_cell[u].get_op(opname)acts on the site given by a lattice index (x_0, ..., x_{dim-1}, u), The necessary terms are just added to onsite_terms; doesn’t rebuild the MPO. - Parameters strength (scalar | array) – Prefactor of the onsite term. May vary spatially. If an array of smaller size is provided, it gets tiled to the required shape. u (int) – Picks a Site lat.unit_cell[u]out of the unit cell. opname (str) – valid operator name of an onsite operator in lat.unit_cell[u]. 
category (str) – Descriptive name used as key for onsite_terms. Defaults to opname. plus_hc (bool) – If True, the hermitian conjugate of the terms is added automatically. See also add_coupling() Add a terms acting on two sites. add_onsite_term() Add a single term without summing over \(vec{x}\). add_onsite_term(strength, i, op, category=None, plus_hc=False)[source]¶ Add an onsite term on a given MPS site. Wrapper for self.onsite_terms[category].add_onsite_term(...). - Parameters strength (float) – The strength of the term. i (int) – The MPS index of the site on which the operator acts. We require 0 <= i < L. op (str) – Name of the involved operator. category (str) – Descriptive name used as key for onsite_terms. Defaults to op. plus_hc (bool) – If True, the hermitian conjugate of the term is added automatically. all_onsite_terms()[source]¶ Sum of all onsite_terms. add_coupling(strength, u1, op1, u2, op2, dx, op_string=None, str_on_first=True, raise_op2_left=False, category=None, plus_hc=False)[source]¶ Add twosite coupling terms to the Hamiltonian, summing over lattice sites. Represents couplings of the form \(\sum_{x_0, ..., x_{dim-1}} strength[shift(\vec{x})] * OP0 * OP1\), where OP0 := lat.unit_cell[u0].get_op(op0)acts on the site (x_0, ..., x_{dim-1}, u1), and OP1 := lat.unit_cell[u1].get_op(op1)acts on the site (x_0+dx[0], ..., x_{dim-1}+dx[dim-1], u1). Possible combinations x_0, ..., x_{dim-1}are determined from the boundary conditions in possible_couplings(). The coupling strength may vary spatially if the given strength is a numpy array. The correct shape of this array is the coupling_shape returned by tenpy.models.lattice.possible_couplings()and depends on the boundary conditions. The shift(...)depends on dx, and is chosen such that the first entry strength[0, 0, ...]of strength is the prefactor for the first possible coupling fitting into the lattice if you imagine open boundary conditions. The necessary terms are just added to coupling_terms; this function does not rebuild the MPO. Deprecated since version 0.4.0: The arguments str_on_first and raise_op2_left will be removed in version 1.0.0. - Parameters strength (scalar | array) – Prefactor of the coupling. May vary spatially (see above). If an array of smaller size is provided, it gets tiled to the required shape. u1 (int) – Picks the site lat.unit_cell[u1]for OP1. op1 (str) – Valid operator name of an onsite operator in lat.unit_cell[u1]for OP1. u2 (int) – Picks the site lat.unit_cell[u2]for OP2. op2 (str) – Valid operator name of an onsite operator in lat.unit_cell[u2]for OP2. dx (iterable of int) – Translation vector (of the unit cell) between OP1 and OP2. For a 1D lattice, a single int is also fine. op_string (str | None) – Name of an operator to be used between the OP1 and OP2 sites. Typical use case is the phase for a Jordan-Wigner transformation. The operator should be defined on all sites in the unit cell. If None, auto-determine whether a Jordan-Wigner string is needed, using op_needs_JW(). str_on_first (bool) – Whether the provided op_string should also act on the first site. This option should be chosen as Truefor Jordan-Wigner strings. When handling Jordan-Wigner strings we need to extend the op_string to also act on the ‘left’, first site (in the sense of the MPS ordering of the sites given by the lattice). In this case, there is a well-defined ordering of the operators in the physical sense (i.e. which of op1 or op2 acts first on a given state). 
We follow the convention that op2 acts first (in the physical sense), independent of the MPS ordering. Deprecated. raise_op2_left (bool) – Raise an error when op2 appears left of op1 (in the sense of the MPS ordering given by the lattice). Deprecated. category (str) – Descriptive name used as key for coupling_terms. Defaults to a string of the form "{op1}_i {op2}_j". plus_hc (bool) – If True, the hermitian conjugate of the terms is added automatically. Examples When initializing a model, you can add a term \(J \sum_{<i,j>} S^z_i S^z_j\) on all nearest-neighbor bonds of the lattice like this: >>> J = 1. # the strength >>> for u1, u2, dx in self.lat.pairs['nearest_neighbors']: ... self.add_coupling(J, u1, 'Sz', u2, 'Sz', dx) The strength can be an array, which gets tiled to the correct shape. For example, in a 1D Chainwith an even number of sites and periodic (or infinite) boundary conditions, you can add alternating strong and weak couplings with a line like: >>> self.add_coupling([1.5, 1.], 0, 'Sz', 0, 'Sz', dx) Make sure to use the plus_hc argument if necessary, e.g. for hoppings: >>> for u1, u2, dx in self.lat.pairs['nearest_neighbors']: ... self.add_coupling(t, u1, 'Cd', u2, 'C', dx, plus_hc=True) Alternatively, you can add the hermitian conjugate terms explictly. The correct way is to complex conjugate the strength, take the hermitian conjugate of the operators and swap the order (including a swap u1 <-> u2), and use the opposite direction -dx, i.e. the h.c. of add_coupling(t, u1, 'A', u2, 'B', dx)` is ``add_coupling(np.conj(t), u2, hc('B'), u1, hc('A'), -dx), where hc takes the hermitian conjugate of the operator names, see get_hc_op_name(). For spin-less fermions ( FermionSite), this would be >>> t = 1. # hopping strength >>> for u1, u2, dx in self.lat.pairs['nearest_neighbors']: ... self.add_coupling(t, u1, 'Cd', u2, 'C', dx) ... self.add_coupling(np.conj(t), u2, 'Cd', u1, 'C', -dx) # h.c. With spin-full fermions ( SpinHalfFermions), it could be: >>> for u1, u2, dx in self.lat.pairs['nearest_neighbors']: ... self.add_coupling(t, u1, 'Cdu', u2, 'Cd', dx) # Cdagger_up C_down ... self.add_coupling(np.conj(t), u2, 'Cdd', u1, 'Cu', -dx) # h.c. Cdagger_down C_up Note that the Jordan-Wigner strings for the fermions are added automatically! See also add_onsite() Add terms acting on one site only. MultiCouplingModel.add_multi_coupling_term() for terms on more than two sites. add_coupling_term() Add a single term without summing over \(vec{x}\). add_coupling_term(strength, i, j, op_i, op_j, op_string='Id', category=None, plus_hc=False)[source]¶ Add a two-site coupling term on given MPS sites. Wrapper for self.coupling_terms[category].add_coupling_term(...). Warning This function does not handle Jordan-Wigner strings! You might want to use add_local_term()instead. -. category (str) – Descriptive name used as key for coupling_terms. Defaults to a string of the form "{op1}_i {op2}_j". plus_hc (bool) – If True, the hermitian conjugate of the term is added automatically. all_coupling_terms()[source]¶ Sum of all coupling_terms. calc_H_onsite(tol_zero=1e-15)[source]¶ Calculate H_onsite from self.onsite_terms. Deprecated since version 0.4.0: This function will be removed in 1.0.0. Replace calls to this function by self.all_onsite_terms().remove_zeros(tol_zero).to_Arrays(self.lat.mps_sites()). You might also want to take explicit_plus_hcinto account. - Parameters tol_zero (float) – prefactors with abs(strength) < tol_zeroare considered to be zero. 
- Returns H_onsite (list of npc.Array) onsite terms of the Hamiltonian. If explicit_plus_hcis True, – Hermitian conjugates of the onsite terms will be included. calc_H_bond(tol_zero=1e-15)[source]¶ calculate H_bond from coupling_termsand onsite_terms. - Parameters tol_zero (float) – prefactors with abs(strength) < tol_zeroare considered to be zero. - Returns H_bond – Bond terms as required by the constructor of NearestNeighborModel. Legs are ['p0', 'p0*', 'p1', 'p1*'] - Return type - :raises ValueError : if the Hamiltonian contains longer-range terms.: calc_H_MPO(tol_zero=1e-15)[source]¶ Calculate MPO representation of the Hamiltonian. Uses onsite_termsand coupling_termsto build an MPO graph (and then an MPO). coupling_strength_add_ext_flux(strength, dx, phase)[source]¶ Add an external flux to the coupling strength. When performing DMRG on a “cylinder” geometry, it might be useful to put an “external flux” through the cylinder. This means that a particle hopping around the cylinder should pick up a phase given by the external flux [Resta1997]. This is also called “twisted boundary conditions” in literature. This function adds a complex phase to the strength array on some bonds, such that particles hopping in positive direction around the cylinder pick up exp(+i phase). Warning For the sign of phase it is important that you consistently use the creation operator as op1 and the annihilation operator as op2 in add_coupling(). - Parameters strength (scalar | array) – The strength to be used in add_coupling(), when no external flux would be present. dx (iterable of int) – Translation vector (of the unit cell) between op1 and op2 in add_coupling(). phase (iterable of float) – The phase of the external flux for hopping in each direction of the lattice. E.g., if you want flux through the cylinder on which you have an infinite MPS, you should give phase=[0, phi]souch that particles pick up a phase phi when hopping around the cylinder. - Returns strength – The strength array to be used as strength in add_coupling()with the given dx. - Return type complex array Examples Let’s say you have an infinite MPS on a cylinder, and want to add nearest-neighbor hopping of fermions with the FermionSite. The cylinder axis is the x-direction of the lattice, so to put a flux through the cylinder, you want particles hopping around the cylinder to pick up a phase phi given by the external flux. >>> strength = 1. # hopping strength without external flux >>> phi = np.pi/4 # determines the external flux strength >>> strength_with_flux = self.coupling_strength_add_ext_flux(strength, dx, [0, phi]) >>> for u1, u2, dx in self.lat.pairs['nearest_neighbors']: ... self.add_coupling(strength_with_flux, u1, 'Cd', u2, 'C', dx) ... self.add_coupling(np.conj(strength_with_flux), u2, 'Cd', u1, 'C', -dx) enlarge_MPS_unit_cell(factor=2)[source]¶ Repeat the unit cell for infinite MPS boundary conditions; in place. This has to be done after finishing initialization and can not be reverted. - classmethod from_hdf5(hdf5_loader, h5gr, subpath)[source]¶ Load instance from a HDF5 file. This method reconstructs a class instance from the data saved with save_hdf5(). group_sites(n=2, grouped_sites=None)[source]¶ Modify self in place to group sites. Group each n sites together using the GroupedSite. This might allow to do TEBD with a Trotter decomposition, or help the convergence of DMRG (in case of too long range interactions). This has to be done after finishing initialization and can not be reverted. 
- Parameters n (int) – Number of sites to be grouped together. grouped_sites (None | list of GroupedSite) – The sites grouped together. - Returns grouped_sites – The sites grouped together. - Return type list of GroupedSite.
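To tie the methods above together, here is a hedged sketch of a small transverse-field Ising chain built directly on CouplingModel; the SpinHalfSite/Chain signatures and the 'Sx'/'Sz' operator names follow common TenPy usage but are not taken from this page, and real models typically also derive from MPOModel (or use CouplingMPOModel), which is glossed over here:

from tenpy.networks.site import SpinHalfSite
from tenpy.models.lattice import Chain
from tenpy.models.model import CouplingModel

class TFIChainSketch(CouplingModel):
    def __init__(self, L=16, J=1.0, g=0.5):
        site = SpinHalfSite(conserve=None)              # no symmetry conservation
        lat = Chain(L, site, bc='open', bc_MPS='finite')
        CouplingModel.__init__(self, lat)
        # onsite transverse field: -g * sum_i Sx_i
        self.add_onsite(-g, 0, 'Sx')
        # nearest-neighbour Ising coupling: -J * sum_i Sz_i Sz_{i+1}
        self.add_coupling(-J, 0, 'Sz', 0, 'Sz', 1)      # dx=1 is fine on a 1D chain
        # with all terms registered, an MPO representation can be generated
        self.H_MPO = self.calc_H_MPO()

model = TFIChainSketch(L=10, J=1.0, g=0.5)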
https://tenpy.readthedocs.io/en/latest/reference/tenpy.models.model.CouplingModel.html
2020-03-28T10:50:00
CC-MAIN-2020-16
1585370491857.4
[array(['../_images/inheritance-824b0c54c316ea16d500f8fc1a44c44855af152d.png', 'Inheritance diagram of tenpy.models.model.CouplingModel'], dtype=object) ]
tenpy.readthedocs.io
Let's see how we can implement a Hello World page with the old school Surf Page framework.) Let's start out with the page definition file, create a file called helloworldhome.xml in the alfresco/tomcat/shared/classes/alfresco/web-extension/site-data/pages directory. You will have to create the site-data and pages directories. We are not using a build project to be able to focus solely on Surf. <?xml version='1.0' encoding='UTF-8'?> <page> <title>Hello World Home</title> <title-id>page.helloworldhome.title</title-id> <description>Hello World Home Description</description> <description-id>page.helloworldhome.description</description-id> <template-instance>helloworldhome-three-column</template-instance> <authentication>none</authentication> </page> Here we are defining the title and description of the page both hard-coded in the definition, and as references to a properties file with labels (i.e. the title-id and description-id elements). The page will not require any authentication, which means we cannot fetch any content from the Alfresco Repository from it. It is also going to use a three column template, or that is the idea, you can name the template instance whatever you want. <?xml version='1.0' encoding='UTF-8'?> <template-instance> <template-type>org/alfresco/demo/helloworldhome</template-type> </template-instance>This file just points to where the FreeMarker template for this page will be stored. So create the alfresco/tomcat/shared/classes/alfresco/web-extension/templates/org/alfresco/demo directory path. Then add the helloworldhome.ftl template file to it: This is just a test page. Hello World! page.helloworldhome.title=Hello World page.helloworldhome.description=Hello World Home DescriptionThis file just points to where the FreeMarker template for this page will be stored. We also need to tell Alfresco Share about the new resource file, rename the custom-slingshot-application-context.xml.sample to custom-slingshot-application-context.xml, it is located in the web-extension directory. Then define the following bean: <bean id="org.alfresco.demo.resources" class="org.springframework.extensions.surf.util.ResourceBundleBootstrapComponent"> <property name="resourceBundles"> <list> <value>alfresco.web-extension.messages.helloworldhome</value> </list> </property> </bean>To test this page you will have to restart Alfresco. It can then be accessed via the. The page does not look very exciting: <#include "/org/alfresco/include/alfresco-template.ftl" /> <@templateHeader></@> <@templateBody> <@markup <div id="alf-hd"> <@region </div> </@> <@markup <div id="bd"> <h1>This is just a test page. Hello World!</h1> </div> </@> </@> <@templateFooter> <@markup <div id="alf-ft"> <@region </div> </@> </@> What we are doing here is first bringing in another FreeMarker file called alfresco-template.ftl that contains, you guessed it, FreeMarker template macros. We then use these macros (elements starting with @) to set up the layout of the page with header and footer. The header and footer content is fetched via the share-header and footer global scope components (Web Scripts). To view the result of our change we need to restart the server again, after this we should see the following: <#include "/org/alfresco/include/alfresco-template.ftl" /> <@templateHeader></@> <@templateBody> <@markup <div id="alf-hd"> <@region </div> </@> <@markup <div id="bd"> <@region </div> </@> </@> <@templateFooter> <@markup <div id="alf-ft"> <@region </div> </@> </@> We have called the new region body and set page scope for it. 
This requires us to define a new component for this region. This can be done either in the page XML, or as a separate file in the site-data/components directory, we will do the latter. Create the components directory and add a file called page.body.helloworldhome.xml to it: <?xml version='1.0' encoding='UTF-8'?> <component> <scope>page</scope> <region-id>body</region-id> <source-id>helloworldhome</source-id> <url>/components/helloworld/body</url> </component> The component file names follow a naming convention: global | template | page>.<region-id>.[<template-instance-id | page-id>].xml The URL for this component points to a Web Script that will return the Hello World message. Start implementing it by creating a descriptor file called helloworld-body.get.desc.xml located in the alfresco/tomcat/shared/classes/alfresco/web-extension/site-webscripts/org/alfresco/demo directory: <webscript> <shortname>helloworldbody</shortname> <description>Returns the body content for the Hello World page.</description> <url>/components/helloworld/body</url> </webscript> Note that the URL is the same as we set in the component definition. Now implement the controller for the Web Script, create a file called helloworld-body.get.js in the same place as the descriptor: model.body = "This is just a test page. Hello World! (Web Scripting)"; The controller just sets up one field in the model with the Hello World message. Now implement the template for the Web Script, create a file called helloworld-body.get.html.ftl in the same place as the descriptor: <h1>${body}</h1> Restart the server. Then access the page again, you should see the Hello World message change to "This is just a test page. Hello World! (Web Scripting)". To summarize a bit, the following is a picture of all the files that were involved in creating this Surf page the old school way: What you could do now is extend the Hello World page with some more sophisticated presentation using the YUI library. If you do that you end up with the pattern for how most of the old school Share pages have been implemented. Next we will have a look at how to implement the same Hello World page the new way with Aikau. Back to Share Architecture and Extension Points.
https://docs.alfresco.com/5.2/concepts/dev-extensions-share-architecture-extension-points-intro-surf-pages.html
2020-03-28T11:27:49
CC-MAIN-2020-16
1585370491857.4
[array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/default5_2/dev-extensions-share-surf-page-helloworld-noheaderfooter.png', None], dtype=object) array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/default5_2/dev-extensions-share-surf-page-helloworld-headerfooter.png', None], dtype=object) array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/default5_2/dev-extensions-share-surf-page-helloworld-involvedfiles.png', None], dtype=object) ]
docs.alfresco.com
User Interfaces In contrast to modules that hold the backend logic of your app, user interfaces (UIs) contain the frontend code of your app. You can create your own UIs and share them across your projects. Furthermore, UIs may be connected to one or more modules to access your backend data. ApiOmat Studio Mobile and ApiOmat Studio Web help you to implement your UI projects. ApiOmat's UI market is the place to orchestrate your UIs. Relevant Links: Release Notes | UI Market | Modules | ApiOmat Studio Tutorials: UIs for Developer | UIs for Orchestrators Key Capabilities How it works UIs are the components containing frontend code in ApiOmat. The goal is to provide frontends for your modules. Basically, UIs are ZIP archives which include the frontend project code and some metadata. In the near future, this will, for example, enable the possibility to automatically deploy a UI to a self-hosted website. Roles and Workflow In general, you can distinguish between a user interface as a template to develop frontends and the backend-specific user interface. The idea is that frontend developers can upload such a user interface template with the predefined configuration and everything that is necessary to deploy it. Another person, who just orchestrates the backend application, will then add the template to the backend, adjust the UI configuration (which may be, for instance, some color values) and download it to build and/or deploy it. To sum up, there are two roles that work together: the UI Developer and the UI Orchestrator. The following table gives an overview of their possible actions and the workflow: More details are available within the sections UIs for Developer and UIs for Orchestrators.
https://docs.apiomat.com/1911/User-Interfaces.html
2020-03-28T11:57:36
CC-MAIN-2020-16
1585370491857.4
[]
docs.apiomat.com
Boomi Flow is a low-code platform for app development. With Boomi Flow, we can create and deploy secure, scalable, mobile-ready apps using our existing IT infrastructure, and third-party services like Salesforce. The documentation here shows us how to build apps that work online or offline with computers, mobile devices like tablets or phones, even smartwatches or an intelligent car. - Looking for things you can do with the Flow API? You’ve got to check out the Flow reference documentation. - Wondering how to build an end-to-end implementation? Is easier than you think! See this implementation guide of how we built a smart farm IoT app. - Did you know you can store Flow runtime data in your own storage? See how here. - Multi-region support? Why, yes, of course! - Looking for workflow patterns? Here’s one for you. Happy building! Tutorial: Leo’s choice (Routing and rules) Tutorial: Building a form Tutorial: Getting user input Tutorial: Sending a mail with attachments Tutorial: Uploading files in Box using an app Tutorial: Starting a flow from a text message Tutorial: TextBot app with Twilio Tutorial: Text-to-Speech (TTS) app with Twilio Tutorial: Salesforce Lead manager app Tutorial: Using Flow with AtomSphere Elements (Flow building blocks) Starting with the Start element Know your map elements Design thinking – Map elements Identifying elements – Name, Label, ID Duplicating an element Searching for elements Editing element metadata Elements – Naming convention Flow – Quick concepts Flow – Glossary Flow – API endpoints Flow – Live platform status Flow – Reference documentation Flow – Repositories Flow – Listeners Finding the Flow Run URL Finding the Flow ID Finding the Flow Version ID Everything you want to know about Version ID Rolling back to a previous flow version Flow insights with Dashboard Changelog Creating a new flow Editing flow properties Renaming a flow Saving a flow Deleting a flow Running a flow Running instances of a flow (flow states) Publishing a flow Importing a flow Exporting a flow Exporting a flow – without passwords Sharing a flow Using tokens to share a flow Restricting access to a flow Debugging a flow Creating subflows Executing subflows in parallel Working with Date/Time in Flow Configuring user approvals with voting Creating a page layout Saving a page layout Previewing a page Deleting a page layout Importing a page layout Cloning page layouts Finding the ID of a page layout Page layout — Dependencies Making a column editable in a table Page layout – Configuring selective search Adding a new container Containers — Creating columns Working with page conditions Editing page conditions Deleting page conditions Page layout — Hidden Page layout — Presentation Page layout — List Page layout — Outcomes Page layout — Image Page layout — Input Page layout — Radio Page layout — Toggle Page layout — Text Page layout — Rich text Page layout — Combo box Page layout — File upload Page layout — Table Page layout — Tiles Page layout — Charts Working with the Flow player Creating a new player Deleting a player Using the same player in a different tenant Changing the title of the app Embedding the Flow player Flow player – Supported configurations Flow player – Localization settings NEW Turning on location in flows Changing the color of buttons Displaying buttons with icons Enabling the time picker Tutorial: Custom component – Click to call Creating a basic container Creating a custom component Overriding a default component Custom style for components The three states of custom 
components Components : Client side validation Enabling client side validation Custom validation of components Player for a video background Quick start – Salesforce integration Configuring Salesforce Using Salesforce for authentication Setting up the Salesforce player Cloning the ManyWho Visualforce page Hard coding the Flow ID in Visualforce Setting up the Service Console Finding the Salesforce base URL Finding the Consumer Key and Secret Finding the ManyWhoFlow Visualforce page
https://docs.manywho.com/
2020-03-28T10:55:24
CC-MAIN-2020-16
1585370491857.4
[]
docs.manywho.com
Summary On Linux, by default USB dongles can't be accessed by users, for security reasons. To allow user access, so-called "udev rules" must be installed. For some users, things will work automatically: - Fedora seems to use a "universal" udev rule for FIDO devices - Our udev rule made it into libu2f-host v1.1.10 - Arch Linux has this package - Debian sid and Ubuntu Eon can use the libu2f-udevpackage - Debian Buster and Ubuntu Disco still distribute v1.1.10, so need the manual rule - FreeBSD has support in u2f-devd There is hope that udev itself will adopt the Fedora approach (which is to check for HID usage page F1D0, and avoids manually whitelisting each U2F/FIDO2 key):. Further progress is tracked in:. If you still need to setup a rule, a simple way to do it is: git clone cd solo/udev make setup Or, manually, create a file like 70-solokeys-access.rules in your /etc/udev/rules.d directory. Additionally, run the following command after you create this file (it is not necessary to do this again in the future): sudo udevadm control --reload-rules && sudo udevadm trigger How do udev rules work and why are they needed In Linux, udev (part of systemd, read man 7 udev) handles "hot-pluggable" devices, of which Solo and U2F Zero are examples. In particular, it creates nodes in the /dev filesystem (in Linux, everything is a file), which allow accessing the device. By default, for security reasons often only the root user can access these nodes, unless they are whitelisted using a so-called "udev rule". So depending on your system setup, such a udev rule may be necessary to allow non-root users access to the device, for instance yourself when using a browser to perform two-factor authentication. What does a udev rule do? It matches events it receives (typically, comparing with the == operator), and performs actions (typically, setting attributes of the node with the = or += operators). What is hidraw? HID are human-interface devices (keyboards, mice, Solo keys), attached via USB. The hidraw system gives software direct ("raw") access to the device. Which node is my Solo or U2F Zero security key? You can either compare ls /dev before and after inserting, or use the udevadm tool, e.g., by running udevadm monitor --environment --udev | grep DEVNAME Typically, you will detect /dev/hidraw0. Using the symlinks above, you can follow symlinks from /dev/solokey and /dev/u2fzero. How do you know if your system is configured correctly? Try reading and writing to the device node you identified in the previous step. Assuming the node is called /dev/hidraw0: - read: try cat /dev/solokey, if you don't get "permission denied", you can access. - write: try echo "hello, Solo" > /dev/solokey. Again, if you don't get denied permission, you're OK. Which rule should I use, and how do I do it? Simplest is probably to copy Yubico's rule file to /etc/udev/rules.d/fido.rules on your system, for instance: $ (cd /etc/udev/rules.d/ && sudo curl -O) This contains rules for Yubico's keys, the U2F Zero, and many others. The relevant line for U2F Zero is: KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="8acf", TAG+="uaccess" It matches on the correct vendor/product IDs of 10c4/8acf, and adds the TAG uaccess. Older versions of udev use rules such as KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="10c4", MODE="0644", GROUP="plugdev" which sets MODE of the device node to readable by anyone. Now reload the device events. udevadm trigger What about vendor and product ID for Solo? 
You got this all wrong, I can't believe it! Are you suffering from us being wrong? Please, send us a pull request and prove us wrong :D
https://docs.solokeys.io/solo/udev/
2020-03-28T12:12:31
CC-MAIN-2020-16
1585370491857.4
[]
docs.solokeys.io
The next step is to understand basic dependencies. In this example, we use the job's dependency field to set up a job-based dependency. Job-based dependencies are those that will wait for the entire dependent job to finish before the current job starts. There are also subjob (instance) and agenda (frame) based dependencies. Below is the ... Running this script will create 2 jobs. The first will run immediately; the second will be in a blocked state until the first job completes, then it will automatically start running. For more information on the dependency attribute syntax, see Specifying Job Dependencies. Continue to Advanced Dependencies
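Since the example script itself is not preserved in this extract, the following is only an illustrative sketch, assuming the Qube Python API (import qb) and a link-done-job-<jobid> dependency string; the exact field names and dependency syntax should be checked against Specifying Job Dependencies:

import qb

# first job: runs as soon as resources are available
job_a = {
    'name': 'first job',
    'prototype': 'cmdline',
    'cpus': 1,
    'package': {'cmdline': 'echo "first job running"'},
}
submitted = qb.submit([job_a])
job_a_id = submitted[0]['id']

# second job: stays blocked until the entire first job completes
job_b = {
    'name': 'second job',
    'prototype': 'cmdline',
    'cpus': 1,
    'package': {'cmdline': 'echo "second job running"'},
    'dependency': 'link-done-job-%s' % job_a_id,
}
qb.submit([job_b])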
http://docs.pipelinefx.com/pages/diffpagesbyversion.action?pageId=4238244&selectedPageVersions=12&selectedPageVersions=13
2020-03-28T12:48:55
CC-MAIN-2020-16
1585370491857.4
[]
docs.pipelinefx.com
Key-value store The Apify SDK offers three shorthand methods to save and read records from its default key-value store - Apify.setValue() [see docs], Apify.getInput() [see docs], and Apify.getValue() [see docs]. Method Apify.getInput() is not only a shortcut to Apify.getValue('INPUT') but it is also compatible with Apify.metamorph() [see docs], as a metamorphed actor run has its input stored in the key INPUT-METAMORPH-1 instead of INPUT, which holds the original input. So to fetch an actor's INPUT, read a record and set an OUTPUT value, call: const Apify = require('apify'); Apify.main(async () => { // Get input of your actor const input = await Apify.getInput(); // Read a record from the default key-value store const value = await Apify.getValue('my-key'); // Save the actor's OUTPUT record await Apify.setValue('OUTPUT', { myResult: value }); });
https://docs.apify.com/storage/key-value-store
2020-03-28T12:36:40
CC-MAIN-2020-16
1585370491857.4
[]
docs.apify.com
The TrueSight Infrastructure Management Server can be installed on the following operating systems: Note For memory, hard disk space, and database space requirements, see Hardware requirements to support small, medium, and large environments.. Ensure that the following libraries are present to install TrueSight Infrastructure Management Server. To install the libraries, the following commands can be used. Notes yum install pam.i686 pam.x86_64(RHEL) or zypper pam-32bit pam(SUSE) Hardware requirements to support small, medium, and large environments 7 Comments Pawel Hnatyk Mukta Kirloskar Aki Naakka You should also add csh/tcsh as required libraries on RHEL Mukta Kirloskar Thank you, Aki. I will update the document as soon as possible. Charles Kelley Mukta Kirloskar Michael Funke Mukta, It looks like the csh/tcsh has yet to be added as required libraries to the requirements. We've been struggling with an install this whole time and didn't realize of this requirement until we saw the comments here. Can you please make sure this is added, ASAP?
https://docs.bmc.com/docs/display/TSOMD107/TrueSight+Infrastructure+Management+Server+system+requirements?focusedCommentId=793954304
2020-03-28T11:23:54
CC-MAIN-2020-16
1585370491857.4
[]
docs.bmc.com
What data type should I use?: - API Class types - XML-only types - Custom types.
https://docs.microsoft.com/en-us/archive/blogs/ericgu/what-data-type-should-i-use
2020-03-28T10:53:13
CC-MAIN-2020-16
1585370491857.4
[]
docs.microsoft.com
Introduction Provide is an enterprise low-code application platform for distributed systems that makes it quick and easy to add blockchain to your organization's technology stack. This documentation describes the Provide architecture and a holistic suite of APIs made available by Provide for organizations and developers to build, test and scale Web3 applications and distributed systems. Platform Overview This overview provides an inventory of core APIs which are suitable for building all kinds of Web3 applications. The APIs enable users to orchestrate public and permissioned peer-to-peer infrastructure, connect that infrastructure with legacy systems and build sophisticated applications in a low-code environment. Infrastructure APIs Core APIs Composite APIs Client Libraries The following open-source client libraries are available on GitHub: Provide has a web UI and CLI to manage and consume our APIs. Note that the web UI is meant only for developers at this time. Authorization Provide requires the presence of a bearer API token to authorize most API calls. A bearer API token is an encoded JWT which contains a subject claim ( sub) which references the authorized entity (i.e., a User or Application). platform resources for which the Token was authorized. Unless otherwise noted, all API requests must include a header such as: Authorization: bearer <JWT>. The bearer Authorization header is scoped to an authorized platform User or an Application as described above. The encoded JWT is signed using the RS256(RSA Signature with SHA-256)w DV6zZzup4RHLcln0xfGSm6dMPBDM1G96fuHhOwH5+uU5MQHJP7RqW71Bu5dLIG8Z RX+XyUtb0sxCV/7X27Nm/bKpDysaSWQ36reAmw5wVaB1SoFeN519FY5rhoCWmH3W auBAHTzpjg57p7uR0XynYXf8NSGXlysWHppkppqwrPH64G6UZaB7SMl1PFfkJeqZ zJpzBGYWsixdF1EjXn+Yz0mhUZO2OSPWifOuN7cpn3BuNqegg4iVdz5HDoQhJW7N uRhf3buKd/mjat8XA3e2Rkrr2h835GloScJkj7I4BZUNkzKQuEK6C9xW/zJtbPqQ RYEq84A1hMfSZ3G5HFe2JkqiyvXkFwS3qMc5Pur8tZSzBj6AYMoJJso/aOdphpR8 6MaaWXWTwvwfpZbMRqehOcsmQcNLF2gLJPuHzR5WtVCnWrDgvjsWyeDD1WISKusi aOeHxZjS3Bjl4Imq48l1wi2eI/11F/Xg70F4FJaMYLVHJA2nsmBuuQ9UDYHHq876 clKvIvgIItzJcv9lnmjl1Jks1DwCUF3qF2ugYcs9A3EoEcNzhMgZNJ2j5OUzfx1E bzVKkqoC9MQpZWXgqV0KQqKK4I3rMY+1hLqk4S4eF9ZAVlT33qfMzlf0qWTOcP1Z i2dsm0fy4NxWxknlEn5/LhMCAwEAAQ== -----END PUBLIC KEY----- Pagination Collection. Status Codes The following status codes describe Provide API responses: Rate Limits We reserve the right to enforce rate limits on certain API calls in the future.
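As a hedged sketch of the authorization scheme described above, a request might be issued as follows; the host and resource path are placeholders, and only the bearer Authorization header format is taken from this page:

import requests

API_TOKEN = '<encoded JWT issued for your User or Application>'  # placeholder

response = requests.get(
    'https://<provide-api-host>/<resource>',            # placeholder endpoint
    headers={'Authorization': 'bearer %s' % API_TOKEN},
)
print(response.status_code)
print(response.json())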
https://docs.provide.services/
2020-03-28T11:30:15
CC-MAIN-2020-16
1585370491857.4
[]
docs.provide.services
. click on: Совет Изменение нескольких стилей The Symbology allows you to select multiple symbols and right click to change color, transparency, size, or width of selected entries. Single Symbol Renderer 3: In any spinbox in this dialog you can enter expressions. E.g. you can calculate simple math like multiplying the existing size of a point by 3 without resorting to a calculator. Figure Symbology 4: If you click on the second level in the Symbol layers dialog a ‘Data-defined override’ for nearly all settings is possible. When using a data-defined color one may want to link the color to a field ‘budged’. Here a comment functionality is inserted. /* This expression will return a color code depending on the field value. * Negative value: red * 0 value: yellow * Positive value: green */ CASE WHEN value < 0 THEN '#DC143C' -- Negative value: red WHEN value = 0 THEN '#CCCC00' -- Value 0: yellow ELSE '#228B22' -- Positive value: green END Figure Symbology 5: Data-defined symbol with Edit... menu Categorized Renderer The Categorized Renderer is used to render all features from a layer, using a single user-defined symbol whose color reflects the value of a selected feature’s attribute. The Style menu allows you to select: Then click on Classify button to create classes from the distinct value of the attribute column. Each classes can be disabled unchecking the checkbox at the left of the class name. You can change symbol, value and/or label of the class, just double click on the item you want to change. Right-click_6 shows the category rendering dialog used for the rivers layer of the QGIS sample dataset. Figure Symbology 6: Graduated Renderer The Graduated Renderer is used to render all the features from a layer, using a single user-defined symbol whose color reflects the assignment of a selected feature’s attribute to a class. Figure Symbology.-click shows a contextual menu to Copy/Paste, Change color, Change transparency, Change output unit, Change symbol width. The example in figure_symbology_7 shows the graduated rendering dialog for the rivers layer of the QGIS sample dataset. Совет Thematic maps using an. Rule-based rendering_8.8 the rules appear in a tree hierarchy in the map legend. Just double-klick the rules in the map legend and the Style menu of the layer properties appears showing the rule that is the background for the symbol in the tree. Figure Symbology 8: Point displacement The Point Displacement Renderer works to visualize all features of a point layer, even if they have the same location. To do this, the symbols of the points are placed on a displacement circle around a center symbol. Figure Symbology 9: Совет Export vector symbology subrenderers. These subrenderers are the same as for the main renderers. Figure Symbology 10: Совет Switch quickly between styles Once you created one of the above mentioned styles you can right-klick on the layer and choose Styles ‣ Add to save your style. Now you can easily switch between styles you created using the Styles ‣ menu again. Heatmap With the Heatmap renderer you can create live dynamic heatmaps for (multi)point layers. You can specify the heatmap radius in pixels, mm or map units, choose a color ramp for the heatmap style and use a slider for selecting a tradeoff between render speed and quality. When adding or removing a feature the heatmap renderer updates the heatmap style automatically.. 
Совет quick color picker + copy/paste colors You can quickly choose from Recent colors, from Standard colors or simply copy or paste a color by clicking the drop-down arrow that follows a current color box. Figure color picker 3:: Тень Let us see how the new menus can be used for various vector layers. Labeling point layers Start QGIS and load a vector point layer. Activate the layer in the legend and click on the Layer Labeling Options icon in the QGIS toolbar menu. priority section you can define with which priority the labels are rendered. It interacts with labels of the other vector layers in the map canvas. If there are labels from different layers in the same location then the label with the higher priority will be displayed and the other will be left out., below or on the line), distance from the line and for Curved, the user can also setup inside/outside max angle between curved label. As for point vector layers you have the possibility to define a Priority for the labels. choice of Label Placement, several options will appear. As for Point Placement you can choose the distance for the polygon outline, repeat the label around the polygon perimeter. As for point and line vector layers you have the possibility to define a Priority for the polygon vector layer. of functions available to create simple and very complex expressions to label your data in QGIS. See Expressions chapter for more information and examples ). Figure Labels 5: Figure Labels 6: Within the Fields menu, the field attributes of the selected dataset can be manipulated. The buttons New Column and Delete Column can be used when the dataset is in Editing mode. Элемент редактирования: Имя файла: Упрощает процесс выбор файлов за счёт добавления соответствующего диалога. Изображение: Виджет для вывода изображения. Генератор UUID: атрибут только для чтения, в которое будет записан UUID (Universally Unique Identifiers), если поле пустое. Примечание With the Attribute editor layout, you can now define built-in forms (see figure_fields_2). This is usefull’.: Dialog to create categories with the Attribute editor layout Figure Fields 3: Resulting built-in form with tabs and named groups Вкладка Общие очень схожа с аналогичной вкладкой диалога свойств растрового слоя. Она позволяет: Информация изменять отображаемое в легенде имя слоя Система координат обновить информацию об охвате слоя, при помощи кнопки [Обновить границы] Scale dependent visibility Feature subset Параметры).: There are several examples included in the dialog. You can load them by clicking on [Add default actions]. One. Два примера действий показаны ниже:" После вызова этого действия для нескольких записей таблицы, результирующий файл будет выглядеть примерно так:: Убедитесь, что слой lakes загружен. Open the Layer Properties dialog by double-clicking on the layer in the legend, or right-click and choose Properties from the pop-up menu. Click on the Actions menu. Введите имя действия, например, Google Search. Для действия нам нужно задать имя внешней запускаемой программы. В этот раз мы будем использовать веб-браузер Firefox. Если программы нет в текущей директории, необходимо задать полный путь к ней. Following the name of the external application, add the URL used for doing a Google search, up to but not including the search term: Теперь текст в поле Действие должен выглядеть так: firefox Щёлкните на выпадающем списке, содержащем имена полей слоя lakes. Он расположен непосредственно слева от кнопки [Вставить поле]. 
From the drop-down box, select ‘NAMES’ and click [Insert Field]. Теперь текст вашего действия выглядит так: firefox To finalize the action, click the [Add to action list] button. This completes the action, and it is ready to use. The final text of the action should look like this: firefox Теперь мы можем использовать это действие. Закройте диалог Свойства слоя и приблизьтесь к области интереса. Убедитесь, что слой lakes активный и выберите озеро. В окне результатов вы теперь видите, что ваше действие показывается:: Сохранить связанный слой в виртуальной памяти Создать индекс на основе объединяемого поля Примеры данных). Сделайте двойной щелчок на слое climate в легенде карты и откройте диалог Свойства слоя.:
https://docs.qgis.org/2.8/ru/docs/user_manual/working_with_vector/vector_properties.html
2020-03-28T12:25:34
CC-MAIN-2020-16
1585370491857.4
[]
docs.qgis.org
Welcome to Splunk Enterprise Splunk Enterprise 7.3 was first released on June 4, 2019. What's New in 7.3.1 Splunk Enterprise 7.3.1 introduces enhancements to several features and resolves the issues described in Fixed issues. What's New in 7.3.2 Splunk Enterprise 7.3.2 was released on October 2, 2019. It introduces the following enhancements. What's New in 7.3.3 Splunk Enterprise 7.3.3 was released on November 18, 2019. It introduces the following enhancement. What's New in 7.3.4 Splunk Enterprise 7.3.4 was released on January 14, 2020. It introduces the following enhancement.
https://docs.splunk.com/Documentation/Splunk/7.3.4/ReleaseNotes/MeetSplunk
2020-03-28T12:53:51
CC-MAIN-2020-16
1585370491857.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
General conception¶ The idea of this environment template is to provide a base for small and medium projects, deployed on a single server. With a high focus on ease of use. The environment should be also almost the same on development server as on production server. There are a few design patterns, that are the basis of the environment conception. Management from Makefile¶ Everything important goes into the Makefile. There are no plain bash scripts outside of the Makefile. The project is built in purely Makefile + YAML + docs + misc files. make start make check_status make list_all_hosts make stop make build_docs make encrypt_env_prod # ... # and others Multiple compose files can be toggled on/off¶ Service definitions in Docker Compose format are kept in ./apps/conf directory. Services that are temporarily disabled are marked with “.disabled” at the end of filename. ✗ make list_configs dashboard deployer health service-discovery smtp ssl technical uptimeboard ✗ make config_disable APP_NAME=ssl >> Use APP_NAME eg. make config_disable APP_NAME=iwa-ait OK, ssl was disabled. ✗ make config_enable APP_NAME=ssl >> Use APP_NAME eg. make config_disable APP_NAME=iwa-ait OK, ssl is now enabled. Configuration in one file that could be encrypted¶ Good practice is to extract environment variables into .env files, instead of hard-coding values into services YAML definitions. That makes a .env file from which we can use environment variables in YAML files with syntax eg. ${VAR_NAME} As the .env cannot be pushed into the repository, there is a possibility to push .env-prod as a encrypted file with ansible-vault. make encrypt_env_prod Main domain and domain suffix concept¶ MAIN_DOMAIN can be defined in .env and reused in YAML files together with DOMAIN_SUFFIX. It opens huge possibility of creating test environments, which have different DNS settings. Sounds like a theory? Let’s see a practical example! It’s so much flexible that you can host multiple subdomains on main domain, but you can also use totally different domains. No /etc/hosts entries are required, it’s a standard Linux DNS behavior. Example: MAIN_DOMAIN=iwa-ait.org DOMAIN_SUFFIX=.localhost first: environment: - VIRTUAL_HOST=some-service.${MAIN_DOMAIN}${DOMAIN_SUFFIX} second: environment: - VIRTUAL_HOST=other-service.example.org${DOMAIN_SUFFIX} In result of above example you will have services under domains in test environment: - some-service.iwa-ait.org.localhost - other-service.example.org.localhost Complete example In .env file: MAIN_DOMAIN=iwa-ait.org DOMAIN_SUFFIX=.localhost In ./apps/conf/docker-compose.phpmyadmin.yaml: db_mysql_admin: image: phpmyadmin/phpmyadmin environment: - PMA_HOST=db_mysql # gateway configuration - VIRTUAL_HOST=pma.${MAIN_DOMAIN}${DOMAIN_SUFFIX} - VIRTUAL_PORT=80 labels: org.riotkit.dashboard.enabled: true org.riotkit.dashboard.description: 'MySQL database management' org.riotkit.dashboard.icon: 'pe-7s-server' org.riotkit.dashboard.only_for_admin: true Now you can access in your browser. On production server just remove the DOMAIN_SUFFIX value to have - simple enough, huh? Automatic distinction between development and production server¶ There should be no need to have separated configuration files for local development environment, and for production environment. Everything should be REALLY the same, except DOMAIN_SUFFIX variable, which should point to .localhost on development environment. 
Whenever you need to tell some Docker container that we are in debug mode, you can use ${IS_DEBUG_ENVIRONMENT} in the YAML definition. IS_DEBUG_ENVIRONMENT is the result of auto-detecting whether the environment is local or production; you may also set ENFORCE_DEBUG_ENVIRONMENT=1 if you want to enforce the debug environment. HINT: File Repository's Bahub integration configuration integrates with IS_DEBUG_ENVIRONMENT by stopping cronjobs, so no backups are done from a developer environment. HINT: Ansible deployment is able to modify .env variables when pushing changes to production. Applications pulled from git repositories¶ It is not always possible to package an application into a container. If we have a private application without public source code, and we do not have a private Docker registry, then it is possible to use a generic e.g. PHP + NGINX container and mount the application files as a volume, as sketched below.
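A minimal sketch of that pattern, written in the same Compose format as the other service definitions in ./apps/conf; the image name and host paths are placeholder assumptions, while the VIRTUAL_HOST/VIRTUAL_PORT and ${IS_DEBUG_ENVIRONMENT} usage mirrors the examples above:

my_private_app:
    image: webdevops/php-nginx:7.4                     # generic PHP + NGINX image (assumed choice)
    volumes:
        # application files cloned from the private git repository onto the host
        - ./apps/www-data/my-private-app:/var/www/html
    environment:
        - VIRTUAL_HOST=app.${MAIN_DOMAIN}${DOMAIN_SUFFIX}
        - VIRTUAL_PORT=80
        - APP_DEBUG=${IS_DEBUG_ENVIRONMENT}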
https://environment.docs.riotkit.org/en/latest/general_concept.html
2020-03-28T11:35:03
CC-MAIN-2020-16
1585370491857.4
[array(['_images/env-differences.png', '_images/env-differences.png'], dtype=object) ]
environment.docs.riotkit.org
.. module:: iot ******************************* Amazon Web Services IoT Library ******************************* The Zerynth AWS IoT Library can be used to ease the connection to the `AWS IoT platform `_. It allows to make your device act as an AWS IoT Thing which can be registered through AWS tools or directly from the :ref:`Zerynth Toolchain `. Check this video for a live demo: .. raw:: html =============== The Thing class =============== .. class:: Thing(endpoint, mqtt_id, clicert, pkey, thingname=None, cacert=None) Create a Thing instance representing an AWS IoT Thing. The Thing object will contain an mqtt client instance pointing to AWS IoT MQTT broker located at :samp:`endpoint` endpoint. The client is configured with :samp:`mqtt_id` as MQTT id and is able to connect securely through AWS authorized :samp:`pkey` private key and :samp:`clicert` certificate (an optional :samp:`cacert` CA Certificate can also be passed). :ref:`Refer to Zerynth SSL Context creation ` for admitted :samp:`pkey` values. The client is accessible through :samp:`mqtt` instance attribute and exposes all :ref: :samp:`thingname` different from chosen MQTT id can be specified, otherwise :samp:`mqtt_id` will be assumed also as Thing name. .. method:: update_shadow(state) Update thing shadow with reported :samp:`state` state. :samp:`state` must be a dictionary containing only custom state keys and values:: my_thing.update_shadow({'publish_period': 1000}) .. method:: on_shadow_request(shadow_cbk) Set a callback to be called on shadow update requests. :samp:`shadow_cbk` callback will be called with a dictionary containing requested state as the only parameter:: def shadow_callback(requested): print('requested publish period:', requested['publish_period']) my_thing.on_shadow_request(shadow_callback) If a dictionary is returned, it is automatically published as reported state.
https://testdocs.zerynth.com/latest/_sources/official/lib.aws.iot/docs/official_lib.aws.iot_iot.txt
2020-03-28T12:01:00
CC-MAIN-2020-16
1585370491857.4
[]
testdocs.zerynth.com
The default DC/OS Couchbase Couchbase instance. In each case, you would create a new Couchbase instance using the custom configuration as follows: dcos package install couchbase --options=sample-couchbase.json We recommend that you store your custom configuration in source control. Installing multiple instancesInstalling multiple instances By default, the Couchbase service is installed with a service name of couchbase. You may specify a different name using a custom service configuration as follows: { "service": { "name": "couchbase-other" } } When the above JSON configuration is passed to the package install couchbase command via the --options argument, the new service will use the name specified in that JSON configuration: dcos package install couchbase --options=couchbase-other.json Multiple instances of Couchbase may be installed into your DC/OS cluster by customizing the name of each instance. For example, you might have one instance of Couchbase named couchbase-staging and another named couchbase-prod, each with its own custom configuration. After specifying a custom name for your instance, it can be reached using dcos couchbase CLI commands or directly over HTTP as described below. Installing into foldersInstalling into folders In DC/OS 1.10 and later, services may be installed into folders by specifying a slash-delimited service name. For example: { "service": { "name": "/foldered/path/to/couchbase" } } The above example will install the service under a path of foldered => path => to => couchbase. It can then be reached using dcos couchbase CLI commands or directly over HTTP as described below. Addressing named instancesAddressing named instances After you’ve installed the service under a custom name or under a folder, it may be accessed from all dcos couchbase CLI commands using the --name argument. By default, the --name value defaults to the name of the package, or couchbase. For example, if you had an instance named couchbase-dev, the following command would invoke a pod list command against it: dcos couchbase --name=couchbase-dev pod list The same query would be over HTTP as follows: curl -H "Authorization:token=$auth_token" <dcos_url>/service/couchbase-dev/v1/pod Likewise, if you had an instance in a folder like /foldered/path/to/couchbase, the following command would invoke a pod list command against it: dcos couchbase --name=/foldered/path/to/couchbase pod list Similarly, it could be queried directly over HTTP as follows: curl -H "Authorization:token=$auth_token" <dcos_url>/service/foldered/path/to/couchbase-dev/v1/pod You may add a -v (verbose) argument to any dcos couchbase command to see the underlying HTTP queries that are being made. This can be a useful tool to see where the CLI is getting its information. In practice, dcos couchbase commands are a thin wrapper around an HTTP interface provided by the DC/OS Couchbase"]] Virtual networksVirtual networks DC/OS Couchbase supports deployment on virtual networks on DC/OS (including the dcos overlay network), allowing each container (task) to have its own IP address and not use port resources on the agent machines. This can be specified by passing the following configuration during installation: { "service": { "virtual_network_enabled": true } } Configuring for ProductionConfiguring for Production In a production deployment, each Couchbase Server service type ( data, index, query, full text search, eventing, and analytics) runs in its own container. 
In the respective service type configuration sections, you select the count you want. The following sample shows 2 data, 1 index, 1 query, 1 fts, 1 eventing, and 1 analytics service in the DC/OS dashboard and the Couchbase dashboard. Figure 1. Sample configuration in DC/OS dashboard Figure 2. Sample configuration in Couchbase dashboard Since all Couchbase service nodes require the same ports, the sample configuration shown in Figures 1 and 2 requires a DC/OS cluster with 7 private agents. Higher density can be achieved by using virtual networking, where each container gets its own IP. In combination with placement constraints, you can then also co-locate services on the same DC/OS agent, as fits your specific needs. Configuring for Development In a development deployment, data nodes have all Couchbase Server service types (data, index, query, full text search, eventing, and analytics). In the Data Service configuration section, check the all services enabled box: Figure 3. Enabling all services The following images show the deployment of two data nodes that have all the Couchbase service types. Figure 4. Data node configuration shown on DC/OS dashboard Figure 5. Data node configuration shown on Couchbase dashboard
http://docs-staging.mesosphere.com/mesosphere/dcos/services/couchbase/1.0.0-6.0.0/configuration/
2020-03-28T12:30:17
CC-MAIN-2020-16
1585370491857.4
[]
docs-staging.mesosphere.com
Sol. Sharding is important for two primary reasons: - It allows you to horizontally split or scale your content volume. - It allows you to distribute operations, for example, index tracking, across shards (potentially on multiple nodes) therefore increasing performance/throughput. Documents in the repository are distributed evenly across shards. You may have more than one shard, but a document will only be located in one shard and its instances. A conceptual shard can have any number of real instances. A shard tracks the appropriate subset of information from the repository. Note: Alfresco Content Services does not support slave shards or slave replicas. A shard can have zero or more shard instances. Multiple shard instances have the following advantages: - It provides high availability in case a shard/node fails. - It allows you to scale out your search throughput because searches can be executed on all the instances in parallel. - It increases performance: search requests are handled by the multiple shard instances. Note that if your Solr indexes are sharded, then index backup will be disabled.
https://docs.alfresco.com/sie/concepts/solr-shard-overview.html
The components of TrueSight Infrastructure Management enable event management, service impact management, performance monitoring, and performance analytics. Collectively, these components serve as a data provider to the Presentation Server. The following diagram provides the high-level architecture of the Infrastructure Management components, which are part of the larger TrueSight Operations Management system. TrueSight Infrastructure Management components The following sections describe the interfaces, servers, and data collectors that make up the TrueSight Infrastructure Management system.. In addition to the Operations Management interfaces, the TrueSight Infrastructure Management system provides additional interfaces for consuming the data collected and processed by this system and for configuring and customizing the system. The following table describes each interface and it's purpose. The Infrastructure Management Server receives events and data from the following sources: After the Infrastructure Management Server collects this information, it processes events and data using a powerful analytics engine and additional event processing instructions stored in its database. The Infrastructure Management Server can also leverage a service model (built within the Infrastructure Management Server or published from the Atrium CMDB) to map data and events in context with business services. You can deploy one or more Infrastructure Management servers. For more information, see Deployment use cases and best practices for Infrastructure Management in the deployment documentation. The TrueSight Infrastructure Management server consists of the following functions and components. For more information, see Infrastructure Management Server. For more information about the integration service, see Infrastructure Management Integration Service. For information about the deployment of these components, see Integration Service host deployment and best practices for event processing and propagation . Events are collected from the following sources: Performance data is collected from PATROL Agents or from other sources such as BMC Portal and Microsoft System Center Operations Manager, using the appropriate adapters. Not all performance data has to be forwarded to the Infrastructure Management Server. Performance data can be collected and stored at the PATROL Agents and visualized as trends in the Infrastructure Management console without streaming the data to the Infrastructure Management Server. For information about event and performance data propagation, see Integration Service host deployment and best practices for event processing and propagation .
https://docs.bmc.com/docs/display/tsim107/TrueSight+Infrastructure+Management+architecture+and+components
Description
The WRITEBLK statement writes a block of data to a file opened for sequential processing. It takes the general form:

WRITEBLK expression ON file.variable {THEN statements [ELSE statements] | ELSE statements}

Where: file.variable specifies a file opened for sequential processing. The value of expression is written to the file, and the THEN statements are executed. If no THEN statements are specified, program execution continues with the next statement. If the file is not accessible or does not exist, the ELSE statements are executed and any THEN statements are ignored. If either expression or file.variable evaluates to null, the WRITEBLK statement fails and the program enters the debugger with a run-time error message. Each WRITEBLK statement writes the value of expression starting at the current position in the file. The current position is incremented to beyond the last byte written. WRITEBLK does not add a new line at the end of the data.

INTERNATIONAL MODE
When using the WRITEBLK statement in International Mode, care must be taken to ensure that the write variable is handled properly before the WRITEBLK statement. The WRITEBLK statement expects the output variable to be expressed in bytes; however, when manipulating variables in International Mode, character lengths rather than byte lengths are usually used, so confusion or program malfunction can occur. If byte-oriented data is required, the output variable can be converted from a UTF-8 byte sequence to 'binary/latin1' via the LATIN1 function. It is not recommended that the READBLK/WRITEBLK statements be used when executing in International Mode. Similar functionality may be obtained via the READSEQ/WRITESEQ statements, which can be used to read or write characters a line at a time from a file. Go back to jBASE BASIC.
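The reference above gives no usage sample, so here is a hedged jBASE BASIC sketch. It assumes a directory file named F.TEMP already exists and that the program is not running in International Mode; all names are illustrative.

```
* Sketch only: write a block of bytes to a sequential file with WRITEBLK.
* Assumes the directory file F.TEMP already exists; names are illustrative.
OPENSEQ 'F.TEMP', 'BLOCK.OUT' TO file.var THEN
   NULL
END ELSE
   CREATE file.var ELSE
      CRT 'Cannot create BLOCK.OUT'
      STOP
   END
END
*
data = 'HEADER'
data := STR('X', 64)            ;* build a block of bytes; WRITEBLK adds no newline
*
WRITEBLK data ON file.var THEN
   CRT 'Block written, ':LEN(data):' bytes'
END ELSE
   CRT 'WRITEBLK failed'
END
*
CLOSESEQ file.var
```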
https://docs.jbase.com/34463-mv-migration-station/278663-writeblk
Update synchronization schedule for a CloudCache Phoenix Editions: Business Enterprise Elite - Log on to Phoenix Management Console. - On the menu bar, click All Organizations, and select the required organization from the drop-down list. - On the menu bar, click Manage > CloudCache. - Under the Configured CloudCache tab, click the Phoenix CloudCache for which you want to update synchronization schedule. - Under the Schedule and Resources panel, click Edit. - On the Edit Sync Schedule page, enter the appropriate values in the following details. - (Optional) Click Add More to add additional schedules. Note: To delete a schedule, click the delete icon next to the schedule. - Click Save.
https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/200_Phoenix_CloudCache/030_Manage_Phoenix_CloudCache/090_Update_synchronization_schedule_for_a_CloudCache
In EasySendy Pro you can send a particular campaign to a single subscriber. To send a campaign to a single subscriber, there is no need to create a separate campaign for that subscriber; just follow the steps below. 1. On the EasySendy Pro dashboard, click Email List >> List >> Overview. 3. Click the subscriber configuration sign >> Create campaign for this subscriber. 4. Once you click Create campaign for this subscriber, you are redirected to the campaign page, and the campaign name and subscriber segment are created automatically with the same subscriber list name. You then follow the same process for sending the campaign to that particular subscriber. 5. Click Save and close to proceed to the next step, then follow the normal process for sending campaigns.
https://docs.easysendy.com/email-campaigns/kb/how-to-create-an-email-campaign-only-for-a-single-subscriber/
When you use the Check Compliance button, a virtual machine object does not change its status from Not Compliant to Compliant even though vSAN resources have become available and satisfy the virtual machine profile. Problem When you use force provisioning, you can provision a virtual machine object even when the policy specified in the virtual machine profile cannot be satisfied with the resources available in the vSAN cluster. The object is created, but remains in the non-compliant status. vSAN is expected to bring the object into compliance when storage resources in the cluster become available, for example, when you add a host. However, the object's status does not change to compliant immediately after you add resources. Cause This occurs because vSAN regulates the pace of the reconfiguration to avoid overloading the system. The amount of time it takes for compliance to be achieved depends on the number of objects in the cluster, the I/O load on the cluster and the size of the object in question. In most cases, compliance is achieved within a reasonable time.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-D8240016-0644-46A3-A31C-D9D232CB8961.html
Step 1: Provision an IAM User
Follow these instructions to prepare an IAM user to use AWS CodeDeploy: Create an IAM user or use an existing one associated with your AWS account. For more information, see Creating an IAM User in the IAM User Guide. Grant the IAM user access to AWS CodeDeploy (and the AWS services and actions AWS CodeDeploy depends on) by copying the following policy and attaching it to the IAM user:

{
  "Version": "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : [
        "autoscaling:*",
        "codedeploy:*",
        "ec2:*",
        "lambda:*",
        "elasticloadbalancing:*",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "s3:*"
      ],
      "Resource" : "*"
    }
  ]
}

The preceding policy grants the IAM user the access required to deploy to both an AWS Lambda compute platform and an EC2/On-Premises compute platform. To learn how to attach a policy to an IAM user, see Working with Policies. To learn how to restrict users to a limited set of AWS CodeDeploy actions and resources, see Authentication and Access Control for AWS CodeDeploy. You can use the AWS CloudFormation templates provided in this documentation to launch Amazon EC2 instances that are compatible with AWS CodeDeploy. To use AWS CloudFormation templates to create applications, deployment groups, or deployment configurations, you must grant the IAM user access to AWS CloudFormation (and the AWS services and actions that AWS CloudFormation depends on) by attaching an additional policy to the IAM user, as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*"
      ],
      "Resource": "*"
    }
  ]
}

For information about other AWS services listed in these statements, see:
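If you prefer to script the policy attachment rather than use the console, the same step can be done with the AWS CLI. The user name, policy name, and file path below are placeholders; save the first JSON policy above to the referenced file before running the commands.

```sh
# Sketch: attach the inline policy above to an existing IAM user with the AWS CLI.
# "CodeDeployUser", "CodeDeployAccess", and the file path are placeholder values.
aws iam put-user-policy \
    --user-name CodeDeployUser \
    --policy-name CodeDeployAccess \
    --policy-document file://codedeploy-access.json

# Verify that the policy is now attached as an inline policy:
aws iam list-user-policies --user-name CodeDeployUser
```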
https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-provision-user.html
StopTrainingJobAsync. Namespace: Amazon.SageMaker Assembly: AWSSDK.SageMaker.dll Version: 3.x.y.z Container for the necessary parameters to execute the StopTrainingJob service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
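The reference page above lists the request container and supported platforms but no usage example. A minimal C# sketch of the asynchronous call might look like the following; the region and training job name are placeholders, not values taken from this page.

```csharp
// Sketch: stopping a SageMaker training job with the async .NET API.
// The job name and region below are placeholders.
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.SageMaker;
using Amazon.SageMaker.Model;

class StopJobExample
{
    static async Task Main()
    {
        var client = new AmazonSageMakerClient(RegionEndpoint.USEast1);

        var request = new StopTrainingJobRequest
        {
            TrainingJobName = "my-training-job"   // placeholder job name
        };

        // The call returns once the stop request is accepted; the job then
        // transitions to "Stopping" and later to "Stopped".
        StopTrainingJobResponse response = await client.StopTrainingJobAsync(request);
        Console.WriteLine($"HTTP status: {response.HttpStatusCode}");
    }
}
```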
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SageMaker/MSageMakerStopTrainingJobStopTrainingJobRequest.html
RAID (redundant array of independent disks) A system of using multiple hard drives for sharing or replicating data among the drives. The Sophos ES4000, ES5000 and ES8000 use RAID disk mirroring for data redundancy: if one disk fails, the other disk takes over, and the appliance continues to function normally.
https://docs.sophos.com/msg/sea/help/en-us/msg/sea/references/RAID.html
Troubleshoot Replica Sets.
Note: From a mongo shell connected to the primary, call the rs.printSlaveReplicationInfo() method. It returns the syncedTo value for each member, which shows the time when the last oplog entry was written to the secondary, as shown in the following example: A delayed member may show as 0 seconds behind the primary when the inactivity period on the primary is greater than the members[n].slaveDelay value.
Note: Replica Set Elections. For related information on connection errors, see Does TCP keepalive time affect sharded clusters and replica sets? See also: - Oplog Size, - Delayed Members, and - Check the Replication Lag.
Note: You normally want the oplog to be the same size on all members. If you resize the oplog, resize it on all members. To change the oplog size, see Change the Size of the Oplog.
Changed in version 3.0.0: MongoDB 3.0.0 removes the local.slaves collection. For local.slaves errors in earlier versions of MongoDB, refer to the appropriate version of the MongoDB Manual.
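A short mongo shell sketch of the lag check described above; run it while connected to the primary. The hostnames and timestamps in the sample output are illustrative only.

```javascript
// Run in a mongo shell connected to the primary.
rs.printSlaveReplicationInfo()

// Illustrative output (hostnames and times are placeholders):
// source: m1.example.net:27017
//     syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
//     0 secs (0 hrs) behind the primary
// source: m2.example.net:27017
//     syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
//     0 secs (0 hrs) behind the primary

// rs.status() additionally reports member state, heartbeat, and optime details.
rs.status()
```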
http://docs.mongoing.com/tutorial/troubleshoot-replica-sets.html
Node.js Driver for SQL Server To download Node.js SQL driver The tedious module is a javascript implementation of the TDS protocol, which is supported by all modern versions of SQL Server. The driver is an open-source project, available on Github. You can connect to a SQL Database using Node.js on Windows, Linux, or Mac. Getting started - Step 1: Configure development environment for Node.js development - Step 2: Create a SQL database for Node.js development - Step 3: Proof of concept connecting to SQL using Node.js Documentation Tedious module documentation on Github
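The page above does not include a connection snippet, so here is a minimal sketch using the tedious module. Note that the configuration shape differs between tedious versions (newer releases expect an authentication object, older ones take userName/password at the top level), and the server, credentials, database, and query below are placeholders.

```javascript
// Sketch: connecting to SQL Server with the tedious module and running a query.
// Server, credentials, and database are placeholders; adjust for your setup.
var Connection = require('tedious').Connection;
var Request = require('tedious').Request;

var config = {
  server: 'localhost',
  authentication: {
    type: 'default',
    options: { userName: 'sa', password: 'yourStrong(!)Password' }
  },
  options: { database: 'master', encrypt: true, trustServerCertificate: true }
};

var connection = new Connection(config);

connection.on('connect', function (err) {
  if (err) { return console.error('Connection failed:', err.message); }

  var request = new Request('SELECT @@VERSION AS version', function (err) {
    if (err) { console.error(err.message); }
    connection.close();
  });

  request.on('row', function (columns) {
    console.log(columns[0].value);   // prints the SQL Server version string
  });

  connection.execSql(request);
});

connection.connect();   // required in recent tedious versions; older ones connect automatically
```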
https://docs.microsoft.com/en-us/sql/connect/node-js/node-js-driver-for-sql-server?view=sql-server-2017
Map Orchestration activity parsing rules for instructions on configuring parsing for output variables.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/orchestration-activity-designer/task/t_MapAnOutputField.html
List Selector The List Selector dialog box is displayed when you click the item in the To or Except column of a message-filtering option on the , Anti-Virus, or Additional Policy pages. To define the users for which this filtering option will apply: - On the Select users drop-down list, select one of the following options: - All end users: Includes or excludes all groups of users previously defined on the configuration page. - No end users: Includes or excludes none of the users. - Custom users: Allows you to select a subset of all the users by selecting from the groups previously defined on the page or to define a custom group of users. - If you selected Custom users in the previous step, select an option button (Existing groups or Custom groups). - Do one of the following: Note Email addresses must be actual addresses and not alias addresses (unless alias map support is turned off). - If you selected Existing groups, select groups in the Available list and click the add button (>>) to add groups to the Current list; select groups in the Current Users list and click the remove button (<<) to remove groups from the Current list. - If you selected Custom groups, type the email addresses of individual users in the text box and separate the addresses with a comma (but no space). - Click OK to save your changes and close the List Selector dialog box, click Apply to save without closing the Group Editor dialog box, or click Cancel to close the dialog box without saving the changes.
https://docs.sophos.com/msg/sea/help/en-us/msg/sea/tasks/DBListSelector.html
Finding your SputnikNet Express Account URL and Resetting Your Password How to find your SputnikNet Express Account control center if you don't know its web address (URL). Also, how to reset your SputnikNet Express password if you've forgotten it. Finding your SputnikNet Express Account home page.Finding your SputnikNet Express Account home page. If you lose the bookmark, you can return to your SputnikNet Express control center in the future by going to:. Use the email and password you assigned in previous steps. Click "Submit" to see a list of your SputnikNet Express Accounts. Select the appropriate SputnikNet Express Account.Select the appropriate SputnikNet Express Account. ALTERNATIVE: Finding your SputnikNet Account home page by MAC address.ALTERNATIVE: Finding your SputnikNet Account home page by MAC address. Resetting your SputnikNet Account password.Resetting your SputnikNet Account password. Your Sputnik-powered device's password is the same as your SputnikNet Account password. The way to reset both of them back to the default password is to reset the Sputnik-powered device. To do this, hold the device's reset button in for 10 seconds. Re-associate with your Sputnik-powered device and browse to your SputnikNet Express Account home page. (If you see your SputnikNet Express splash page, click through to log into the internet.) You will be prompted to log in. Enter the default password (typically, "admin). Restarting the Sputnik Agent (optional).Restarting the Sputnik Agent (optional). Some versions of Sputnik-powered firmware do not restart the Sputnik Agent automatically. If this happens, after restarting, your Sputnik-powered device will be "open". In other words, it will not be connected to your SputnikNet Express Account, and will allow unblocked access to the internet. To restart the Sputnik Agent, log into your Sputnik-powered device's control panel (using the default username and password), browse to the appropriate page, click "Enable" next to the Sputnik Agent, and apply settings. For more information, see the appropriate chapter for your device in the section titled "Sputnik Agent Firmware and Sputnik-Powered Devices".
http://docs.sputnik.com/m/express/l/4056-finding-your-sputniknet-express-account-url-and-resetting-your-password
Configuring custom scripts to run before and after backups Scripts can run before or after backup operations for further customization. Configure custom scripts to run before or after a backup. Specify custom scripts in the Pre- and Post-Backup Script fields in the Create Backup dialog. Custom backup scripts must be located in: - Package installs: /usr/share/datastax-agent/bin/backup-scripts - Tarball installs: install location/agent/bin/backup-scripts The backup-scripts directory also contains example scripts. The scripts must be executable, and run as the DataStax agent user (by default the Cassandra user). Any custom scripts should exit with a status of 0 if all operations completed successfully. Otherwise, the script should exit with a non-zero status to indicate a failure. Post-backup scripts are sent a list of files in the backup to stdin, one file per line, but do not have any arguments passed to them.
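As a concrete illustration, here is a hedged sketch of a post-backup script. It only logs the file names that OpsCenter passes on stdin; the log path is a placeholder, and the script must be executable by the DataStax agent user as described above.

```bash
#!/usr/bin/env bash
# Sketch of a post-backup script for OpsCenter.
# Reads the list of backed-up files from stdin (one per line) and appends
# them to a log file. The log path below is a placeholder.
set -euo pipefail

LOG=/var/log/opscenter-backups/last-backup-files.log   # placeholder path
mkdir -p "$(dirname "$LOG")"

count=0
while IFS= read -r backed_up_file; do
    printf '%s\n' "$backed_up_file" >> "$LOG"
    count=$((count + 1))
done

echo "Post-backup script recorded $count files in $LOG"
exit 0   # a non-zero exit status would mark the script step as failed
```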
https://docs.datastax.com/en/opscenter/6.5/opsc/online_help/services/opscBackupsCustomScripts_c.html
Warning! This page documents an earlier version of Kapacitor, which is no longer actively developed. Kapacitor v1.5 is the most recent stable version of Kapacitor.
http://docs.influxdata.com/kapacitor/v1.4/nodes/
JSON Decode Node The JSON Decode Node allows a workflow to decode a JSON string into an object on the payload. Configuration There are two configuration properties for the JSON Decode Node - the payload path to the string to decode, and the payload path for where to store the resulting decoded object. Both paths are allowed to be the same, in which case the JSON string will be replaced by the decoded object. In the above example, the workflow will decode the string at the container.jsonString path and place the resulting object at the container.jsonObject path. So, for example, given the following payload: { "time": Fri Feb 19 2016 17:26:00 GMT-0500 (EST), "container": { "jsonString": "{\"messageBody\":[12,24],\"name\":\"Trigger52\"}" }, ... } The payload after execution of the JSON Decode Node would look like: { "time": Fri Feb 19 2016 17:26:00 GMT-0500 (EST), "container": { "jsonString": "{\"messageBody\":[12,24],\"name\":\"Trigger52\"}", "jsonObject": { "messageBody": [ 12, 24 ], "name": "Trigger52" } }, ... }
http://docs.prerelease.losant.com/workflows/logic/json-decode/
You can allow access to the View Connection Server instance from outside the network to users and groups while restricting access for other users and groups. Prerequisites An Access Point appliance, security server, or load balancer must be deployed outside the network as a gateway to the View Connection Server instance to which the user is entitled. For more information about deploying an Access Point appliance, see the Deploying and Configuring Access Point document. The users who get remote access must be entitled to desktop or application pools. Procedure - In View Administrator, select Users and Groups. - Click the Remote Access tab. - Click Add and select one or more search criteria, and click Find to find users or groups based on your search criteria. - To provide remote access for a user or group, select a user or group and click OK. - To remove a user or group from remote access, select the user or group, click Delete, and click OK.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon.published.desktops.applications.doc/GUID-C21CD0F5-20F6-4298-B686-604D5064F13A.html
Fix: Additional CSS and JS editor does not display content until has focus. Fix: Product quantity number alignment within the checkout table. Fix: Out of stock product quantity error notice button alignment. Fix: JetPack Gutenberg editor "Slideshow" block styling. Fix: 404 (not found) category widget responsive style. Fix: "View order" my-account button font-size. Fix: Spacing adjustment in my-account page layout with sidebar. Fix: Layout adjustment to display long content within the cookieBubble container. Fix: JetPack Gutenberg editor "Contact form" button styling. Fix: Parallax background attachment within the container and core/cover blocks — FireFox browser. Fix: Remove empty recent-posts widget container div in 404 (not found) page. Fix: Missing rel attribute "noopener noreferrer" in Gutenberg editor button block URLs that set to open in a new tab. Fix: Accidental horizontal scrollbar in single product view page — Safari browser. Fix: Out of stock product added via the "Add to Cart" Gutenberg block styling. Fix: Vertical alignment styling within the Gutenberg "core/columns" block. Feature: Customized REST API /products/reviews endpoint. Feature: Gutenberg "(Blog) Posts" block. Feature: Gutenberg "Service" block. Feature: Gutenberg "(WooCommerce) Reviews" block. Feature: Gutenberg "(WooCommerce) Shop by Category" block. Feature: Aksimet comment privacy contest notice. Feature: Feature: Telegram product share button. Feature: New control to span the latest column within the core/columns Gutenberg block. Update: Language files. Update: Customized REST API /products/categories endpoint. Update: Gutenberg editor styles. Update: Classic editor styles. Update: Feather CSS font icon pack to version 4.19.0 Update: Escaping the pingback URL. Update: One-click demo import files. Tweak: Convert Gutenberg blocks grid styling to flex-box. Tweak: RTL (Right-to-left) language styling. Tweak: Layout adjustment in distraction-free checkout page. Tweak: JetPack comment submission form (WordPress.com) styling. Tweak: JetPack newsletter subscription Gutenberg block styling. Tweak: Mini-cart styling added by header-customizer module. Tweak: RTL (Right-to-left) language styling. Tweak: Search results page template styling. Tweak: Responsive Gutenberg editor core/quote block style. Remove: Customized REST API /products endpoint. Remove: Buffer product share button. Remove: flexslider standalone JavaScript library. Performance: Move two-steps checkout script to checkout view page. Performance: Move add to cart bar script to single view page. Performance: Move single product Ajax add to cart script to single view page. Compatibility: IE 11 (Internet Explorer) browser. Compatibility: JetPack 7.1.1 Compatibility: WordPress 5.1.1 Compatibility: MailChimp for WordPress 4.4 Compatibility: WooCommerce 3.5.6 Compatibility: WooCommerce Brands 1.6.7 Compatibility: WooCommerce Stripe gateway 4.1.15 Compatibility: WooCommerce Variation Swatches and Photos 3.0.11 Fix: Minor adjustment in site (image) logo. Fix: Sticky header overlapping content. Fix: Text alignment in core "Image" block. Fix: MailChimp newsletter subscription widget styling. Fix: Transition effect adjustment in "Pressed Button" block. Feature: Responsive font-size control added to "Fancy Heading" block. Feature: Site (image) logo maximum width control. Feature: Additional CSS code editor. Feature: Additional JavaScript code editor. Feature: Gutenberg "Add to Cart" block. Feature: Gutenberg "Social Badge" block. Feature: Socicon social font icon pack. 
Update: Language file. Update: Plugin settings page styling. Update: One-click demo import files. Update: Feather CSS font icons to version 4.7.3 Update: Gutenberg editor styles. Update: Theme and child theme screenshot. Tweak: Site title toggle method. Tweak: Copy URL button style in core "File" block. Tweak: Image caption style in core "Classic" editor block. Tweak: Core "Separator" block style. Tweak: Shop list product style layout. Tweak: Site tagline toggle method. Tweak: RTL (Right-to-left) language styling. Remove: Display site title and tagline checkbox. Compatibility: Gravity Forms WP Forms Plugin. Fix: Typo in JavaScript file handle. Fix: Extra spacing in quote block. Fix: Font-size in code block. Tweak: Prepended use-strict to all JavaScript files. Compatibility: WordPress 5.0.2 Compatibility: Gutenberg 4.7.1. Fix: WooCommerce pagination styling issue. Feature: Gutenberg "Card" block. Feature: Gutenberg "Logos" block. Feature: Gutenberg "Countdown" block. Update: Language file. Compatibility: Gutenberg 4.5.1 Compatibility: WooCommerce 3.5.1 Feature: Gutenberg "Container" block. Feature: Gutenberg "Dual buttons" block. Feature: Gutenberg "MacBook device" block. Feature: New stylings to the "Fancy Heading" block. Feature: Remove site header from an individual page. Feature: Remove default container spacings from an individual page. Feature: Remove site footer from an individual page. Feature: CSS code editor to append inline-styles to individual posts, pages and custom post types. Update: Language file. Update: Gutenberg editor styles. Update: One-click demo import files. Tweak: RTL (Right-to-left) language styling. Tweak: Styling in IE 11 (Internet Explorer) browser. Tweak: Styling in Edge browser. Compatibility: Gutenberg 4.2 Compatibility: WooCommerce 3.5.1 Feature: Gutenberg "Features" block. Feature: Gutenberg "Divider" block. Feature: Gutenberg "Fancy Heading" block. Update: Gutenberg "Accordion" block. Update: Gutenberg "Alert" block. Update: Gutenberg "Google Maps" block. Update: Language file. Update: Gutenberg editor styles. Compatibility: Gutenberg 4.0 Compatibility: WooCommerce 3.4.7 Feature: Gutenberg "Accordion" block. Feature: Gutenberg "Alert" block. Feature: Gutenberg "Google Maps" block. Update: Language file. Tweak: RTL (Right-to-left) language styling. Tweak: Styling in IE 11 (Internet Explorer) browser. Tweak: Styling in Edge browser. Compatibility: Gutenberg 3.9 Feature: Instagram feed widget. Feature: WooCommerce product quick view component. Feature: RTL support in WooCommerce toast messages. Feature: Text highlighting support in search results page. Update: Language file. Update: Gutenberg color palette. Update: Container width of the Gutenberg editor. Update: Child theme parent styling dependencies. Update: iziToast JS plugin to version 1.4.0 Update: Flickity JS plugin to version 2.1.2 Update: JS-Offcanvas JS plugin to version 1.2.7 Update: Waypoints JS plugin to version 4.0.1 Tweak: RTL (Right-to-left) language styling. Tweak: Styling in IE 11 (Internet Explorer) browser. Compatibility: Gutenberg 3.8 Compatibility: WooCommerce 3.4.5 Compatibility: JetPack (gallery and comments) 6.5 Compatibility: WooCommerce Stripe gateway 4.1.9 Compatibility: WooCommerce Product Search 2.8.0 Compatibility: Variation Swatches and Photos 3.0.9 Compatibility: WPML translation plugin Fix: Minor styling issues. Fix: Customizer header background image visibility issue. Fix: Customizer site background image visibility issue. Feature: RTL (Right-to-left) language support. 
Feature: Cart quantity to hand-held menu. Feature: Gutenberg editor font-size support. Update: Gutenberg default component styles. Update: Gutenberg editor color palette args. Tweak: Styling in IE 11 (Internet Explorer) browser. Compatibility: WordPress 4.9.8 Compatibility: Gutenberg 3.4 Compatibility: WooCommerce 3.4.4 Fix: JS error in promo widget uploader field. Fix: Styling issues in forms and buttons. Update: Gutenberg editor styles. Compatibility: IE 11 (Internet Explorer) browser. Compatibility: Gutenberg 3.2.0 Update: JS handlers switched to .on() method instead of using shortcut method. Update: Capitalization of the theme name. Fix: WooCommerce search form styling issue. Tweak: Typography hierarchy. Tweak: Form and button styling. Tweak: Overall spacing. Tweak: Post pagination style. Fix: Mobile styling issues. Tweak: Push menu (mobile) styles. Tweak: Prepended use-strict to all JavaScript files. Fix: Mobile styling issues. Fix: Inconsistent spacing/padding issues. Fix: Typography scaling issues. Fix: Styling issues with Theme Unit test XML file. Fix: Tag cloud widget overflow styling issue. Fix: Content and image alignment issues. Feature: Typography subset control. Feature: Typography font-weight (variants) control. Update: Language file. Update: TinyMCE editor styles. Update: Gutenberg editor styles. Tweak: General/overall typography hierarchy. Tweak: Blog post/archive styling. Tweak: Footer widget title styling. Tweak: Registered third-party libraries to load independently. Remove: _authorname_ to prevent double prefixing. Compatibility: JetPack (gallery) 6.2.1 Compatibility: WooCommerce 3.4.3 Compatibility: WooCommerce Stripe gateway. Compatibility: WooCommerce Product Search. Fix: Mobile styling issues. Fix: HTML validation errors. Fix: Required/warning issues raised by ThemeCheck plugin. Fix: Styling issues with Theme Unit test XML file. Feature: Offline version of the documentation. Feature: "Gutenberg" WordPress (new) editor support. Feature: Sticky order review offset control. Feature: Sticky sidebar offset control. Feature: Sticky post color controls. Update: Language file. Update: Demo content assets with placeholder images. Remove: Unnecessary spacing from components. Tweak: Typography hierarchy and content formatting. Tweak: Moved slider post type to a pre-packaged powerpack plugin. Compatibility: WooCommerce 3.4.1 Fix: Minor styling issues. Feature: Shop by category style. Feature: Slider style. Feature: Product countdown component. Feature: Service component style. Feature: Copyright alignment control. Feature: Product column support on list view layout. Update: Language file. Update: Product list view style. Compatibility: WordPress 4.9.6 Compatibility: WooCommerce 3.4.0 Compatibility: GDPR (General Data Protection Regulation) Fix: Minor styling issues. Feature: Product loop style. Feature: Sticky sidebar control. Feature: Page title toggle meta box. Feature: Footer bar height adjustment control. Feature: Basic IE 11 support. Update: Language file. Fix: Minor styling issues. Feature: Color controls. Feature: Footer background image control. Feature: Shop by category style. Feature: Service component style. Feature: Product loop style. Feature: Slider custom post type. Initial release.
https://docs.conj.ws/about-the-theme/changelog
lineOpen A version of this page is also available for Windows Embedded CE 6.0 R3 4/8/2010 This function opens the line device specified by its device identifier and returns a line handle for the corresponding opened line device. This line handle is used in subsequent operations on the line device. Syntax ); Parameters - hLineApp [in] Handle to the application's registration with TAPI. - dwDeviceID [in] Identifier of the line device to be opened. It must be a valid device identifier. - lphLine [out] Pointer to an HLINE handle that is then loaded with the handle representing the opened line device. Use this handle to identify the device when invoking other functions on the open line device. - dwAPIVersion [in] TAPI version number under which the application and TAPI have agreed to operate. This number is obtained with the lineNegotiateAPIVersion function. - dwExtVersion [in] Unsupported; set to zero. - dwCallbackInstance [in] User-instance data passed back to the application with each message associated with this line or with addresses or calls on this line. This parameter is not interpreted by TAPI. dwPrivileges [in] Privilege the application wants for the calls it is notified for. This parameter can be a combination of the LINECALLPRIVILEGE. The following table shows the values the parameter can take. Other flag combinations are invalid. - dwMediaModes [in] Media mode or modes of interest to the application. This parameter is used to register the application as a potential target for incoming call and call handoff for the specified media mode. This parameter is meaningful only if the bit LINECALLPRIVILEGE_OWNER in dwPrivileges is set (and ignored if it is not). This parameter uses LINEMEDIAMODE. - lpCallParams [in] Pointer to a structure of type LINECALLPARAMS. This pointer should be set to NULL for Windows Embedded CE 2.x. Return Value Zero indicates success. A negative error number indicates that an error occurred. The following table shows the return values for this function. Remarks If LINEERR_ALLOCATED is returned, the line cannot be opened due to a persistent condition, such as that of a serial port being exclusively opened by another process. If LINEERR_RESOURCEUNAVAIL is returned, the line cannot be opened due to a dynamic resource overcommitment such as in DSP processor cycles or memory. This overcommitment can be transitory, caused by monitoring of media mode or tones, and changes in these activities by other applications can make it possible to reopen the line within a short time period. Opening a line always entitles the application to make calls on any address available on the line. The ability of the application to deal with incoming calls or to be the target of call handoffs on the line is determined by the dwMediaModes parameter. The lineOpen function registers the application as having an interest in monitoring calls or receiving ownership of calls that are of the specified media modes. If the application just wants to monitor calls, then it can specify LINECALLPRIVILEGE_MONITOR. If the application just wants to make outgoing calls, it can specify LINECALLPRIVILEGE_NONE. If the application is willing to control unclassified calls (calls of unknown media mode), it can specify LINECALLPRIVILEGE_OWNER and LINEMEDIAMODE_UNKNOWN. Otherwise, the application should specify the media mode it is interested in handling. The media modes specified with lineOpen add to the default value for the provider's media mode monitoring for initial incoming call type determination. 
If a line device is opened with owner privilege and an extension media mode is not registered, then the error LINEERR_INVALMEDIAMODE is returned. An application that has successfully opened a line device can initiate calls using the lineMakeCall function. A single application can specify multiple flags simultaneously to handle multiple media modes. Conflicts can arise if multiple applications open the same line device for the same media mode. These conflicts are resolved by a priority scheme in which the user assigns relative priorities to the applications. Only the highest priority application for a given media mode will ever receive ownership (unsolicited) of a call of that media mode. Ownership can be received when an incoming call first arrives or when a call is handed off. An application that handles automated voice should also select the interactive voice open mode and be assigned the lowest priority for interactive voice. The reason for this is that service providers report all voice media modes as interactive voice. If media mode determination is not performed by the application for the UNKNOWN media type, and no interactive voice application has opened the line device, voice calls would be unable to reach the automated voice application, and would be dropped. The same application, or different installations of the same application, can open the same line multiple times with the same or different parameters. When an application opens a line device it must specify the negotiated TAPI version, which is obtained with a call to the lineNegotiateAPIVersion function. Version numbering enables the mixing and matching of different application versions with different TAPI versions. Requirements See Also Reference lineGetID lineInitialize lineMakeCall lineNegotiateAPIVersion lineShutdown LINECALLPARAMS
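Because the Syntax block above did not survive extraction, here is a hedged C sketch of the usual call sequence: initialize TAPI, negotiate a version for a device, then open it with lineOpen. Error handling is abbreviated, the negotiated version range and media mode are only examples, and this is a sketch rather than production-ready Windows Embedded CE code.

```c
/* Sketch: opening a TAPI line device for outgoing data-modem calls.
   Error handling is abbreviated; the media mode is only an example. */
#include <windows.h>
#include <tapi.h>

HLINEAPP g_hLineApp;
HLINE    g_hLine;

void CALLBACK LineCallback(DWORD hDevice, DWORD dwMsg, DWORD_PTR dwInstance,
                           DWORD_PTR dwParam1, DWORD_PTR dwParam2, DWORD_PTR dwParam3)
{
    /* handle LINE_* messages here */
}

BOOL OpenFirstLine(HINSTANCE hInst)
{
    DWORD dwNumDevs = 0, dwAPIVersion = 0;
    LINEEXTENSIONID extId;

    if (lineInitialize(&g_hLineApp, hInst, LineCallback, TEXT("MyApp"), &dwNumDevs) != 0)
        return FALSE;

    /* Agree on a TAPI version for device 0 before opening it. */
    if (lineNegotiateAPIVersion(g_hLineApp, 0, 0x00010004, 0x00020000,
                                &dwAPIVersion, &extId) != 0)
        return FALSE;

    /* Open device 0 with owner privilege for data-modem calls. */
    if (lineOpen(g_hLineApp, 0, &g_hLine, dwAPIVersion, 0, 0,
                 LINECALLPRIVILEGE_OWNER, LINEMEDIAMODE_DATAMODEM, NULL) != 0)
        return FALSE;

    return TRUE;
}
```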
https://docs.microsoft.com/en-us/previous-versions/aa920113%28v%3Dmsdn.10%29
Windows Setup Configuration Passes Applies To: Windows 7 Note This content applies to Windows 7. For Windows 8 content, see Windows Deployment with the Windows ADK. Configuration passes are used to specify different phases of Windows® Setup. Unattended installation settings can be applied in one or more configuration passes. The following topics describe the configuration passes used with Windows Setup.
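For context, configuration passes appear as the pass attribute of settings elements in an unattended answer file (unattend.xml). The following skeleton is illustrative only; the components inside each pass are omitted.

```xml
<!-- Illustrative skeleton only: each <settings> element names the
     configuration pass in which its components are applied. -->
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <!-- components applied during Windows PE, e.g. disk configuration -->
  </settings>
  <settings pass="specialize">
    <!-- components applied when machine-specific settings are created -->
  </settings>
  <settings pass="oobeSystem">
    <!-- components applied before the first user logon (OOBE) -->
  </settings>
</unattend>
```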
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/dd744580%28v%3Dws.10%29
Security Bulletin Microsoft Security Bulletin MS13-032 - Important Vulnerability in Active Directory Could Lead to Denial of Service (2830914) Published: April 09,, Active Directory Application Mode (ADAM), Active Directory Lightweight Directory Service (AD LDS), and Active Directory Services on Microsoft Windows servers (excluding Itanium-based systems) and rated Low on Microsoft Windows clients. The security update addresses the vulnerability by correcting how the LDAP service handles specially crafted LDAP queries. [1]These editions of Microsoft Windows are not affected because they do not include ADAM, AD LDS, Active Directory, or Active Directory Services. Update FAQ I am running one of the operating systems that is listed in the affected software table. Why am I not being offered the update? The update will only be offered to systems on which the affected component is installed.. Memory Consumption Vulnerability - CVE-2013-1282 A denial of service vulnerability exists in implementations of Active Directory that could cause the service to stop responding. The vulnerability is caused when the LDAP service fails to handle a specially crafted query. To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2013-1282. Mitigating Factors An attacker must have valid logon credentials to exploit this vulnerability. The vulnerability could not be exploited remotely by anonymous users. However, the affected component is available remotely to users who have standard user accounts. In certain configurations, anonymous users could authenticate as the Guest account. Workarounds Microsoft has not identified any workarounds for this vulnerability. What is the scope of the vulnerability? This is a denial of service vulnerability. What causes the vulnerability? The vulnerability is caused when the LDAP Services (AD LDS), and Active Directory Rights Management Services (AD RMS). What is Active Directory Lightweight Directory Service (AD LDS)? Active Directory Lightweight Directory Services the TechNet article, Active Directory Lightweight Directory Services. What is Active Directory Application Mode? Active Directory Application Mode (ADAM) is a Lightweight Directory Access Protocol (LDAP) directory service that runs as a user service for Windows XP and Windows Server 2003, rather than as a system service. For more information on ADAM, please see the TechNet article, Introducing ADAM. What is the Lightweight Directory Access Protocol (LDAP)? The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs directly over the TCP/IP stack. The information model (for both. What might an attacker use the vulnerability to do? An attacker who successfully exploited this vulnerability could cause the LDAP service to become non-responsive. How could an attacker exploit the vulnerability? An attacker could exploit this vulnerability by sending a specially crafted query to the LDAP service. What systems are primarily at risk from the vulnerability? Servers are primarily at risk from this vulnerability. What does the update do? The update addresses the vulnerability by correcting how the LDAP service handles specially crafted LDAP queries.
https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2013/ms13-032
At 4:30 pm, Google gave the all-clear with the Google Apps Suite. Google Drive, Google Sheets, Google Docs and Google Slides should all be operational now. Please contact your divisional representative, the Help Desk or me if you have any questions. Thank you for your patience and understanding, Mike -- Mike Dewey Director – Campus Services
https://docs.rice.edu/confluence/pages/viewpage.action?pageId=49984104
All Files Clearing Scenes in Control Center If you must restart a scene from scratch, you can completely erase a scene's exposure sheet. When you clear a scene, you delete all exposure sheet database information and reset the scene as if it were completely new. You are only deleting the files from the database; Control Center does not actually delete any image files (drawings, scan, final frames). If you have questions concerning the contents of a scene, you should contact the technical director responsible for the scene in question. If you must clear a scene from Control Center, make sure no one else is currently using the scene's data. If you clear a scene that someone else is working on at that moment, you run the risk of corrupting the files and losing work. If you must clear a scene, you should wait until everyone is offline to ensure that no one can open the data files while you clear them from Control Center. How to clear a scene Select the environment, job or scene you want to clear. Do one of the following: From the top menu, select Scene > Clear. Right-click anywhere in the Scenes list and select Clear. A confirmation dialog box appears. Click OK. The Scene list refreshes itself and displays blank elements and drawings lists, showing that the Control Center node has deleted the exposure sheet for the scene you cleared.
https://docs.toonboom.com/help/harmony-15/advanced/server/control-center/clear-scene-control-center.html
Distributed Rendering Overview Distributed. Organization V-Ray supports DR. It See the Distributed Rendering section in the install instructions. How to test First start with the testing of the render server: - Start the vrayspawnerXX.exe program from the Start menu (Start menu > Programs > Chaos Group > V-Ray for SketchUp > Distributed rendering > Launch the distributed rendering spawner). It will automatically open a command prompt window. Now test the render client: - Open SketchUp as you normally would. - Open a scene you would like to render (preferably a simple one for this test). - Choose V-Ray as your current renderer and make sure you have checked Distributed Rendering ON in the V-Ray System section. - Press the Distributed Rendering section. button in the - Add the machines used for rendering - either with their IP address or their network name and close the dialog. - Render the scene as you normally would. You should see the buckets rendered by the different servers. V-Ray Distributed Rendering Settings The Distributed rendering settings dialog is accessible from theSystem rollout of the renderer settings. - this button allows you to manually add a server by entering its IP address or network name. - this button deletes the currently selected server(s) from the list. - this button resolves the IP addresses of all servers. Enabling 64-bit Rendering Local Machine Only 1. Enable Distributed Rendering by making sure "On" is enabled from the System rollout. 2. Enable "Don't use local machine". One limitation with this feature is that Light Cache calculations are NOTvisible in the the V-Ray frame buffer. 3. Open the Distributed Rendering Settings and choose 127.0.0.1 and chose OK. and input the IP address of 4. Render the scene as you normally would. You should now see the rendering buckets labeled with the name of your local machine. In the task manager your will see that XMLDRSpawner CPU usage is high, at around 90%, and the SketchUp process should be using very little CPU. To render using distributed rendering the XMLDRSpawner must be running. By default on your local machine V-Ray will try to identify if your machine is running a 64-bit opertating system and it will automatically start the 64-bit version of the XMLDRSpawner. However, you can start or restart the 64-bit render server manually as follows: - Start the XMLDRSpawner.exe program from the Start menu (Start menu > Programs > Chaos Group > V-Ray for SketchUp > Distributed rendering 64-bit > Launch the distributed rendering spawner). It will automatically open a command prompt window. Rendering Servers Only 1. Start the 64-bit render server on your rendering servers manually as follows: - Start the XMLDRSpawner.exe program from the Start menu (Start menu > Programs > Chaos Group > V-Ray for SketchUp > Distributed rendering 64-bit > Launch the distributed rendering spawner). It will automatically open a command prompt window. 2. Enable Distributed Rendering on your local machine by making sure "On" is enabled from the System rollout. 3. Enable "Don't use local machine". 4. Open the Distributed Rendering Settings and choose and input the IP address of each of your rendering servers. 5. Render the scene as you normally would. You should see the buckets rendered by the Server Name. In the task manager of your local machine your will see that XMLDRSpawner has no CPU usage and the SketchUp process should be using very little CPU. 
Notes - Every render server must have all texture maps in their proper directories loaded so that the scene you are sending will not cause them to abort. For example, if you have mapped your object with a file named JUNGLEMAP.JPG and the render server does not have access to the directory containing that map, the buckets rendered on that machine will look as if the map was turned off. - When you cancel a DR rendering, it may take some time for the render servers to finish working, and they may not be immediately available for another render.
https://docs.chaosgroup.com/display/VRAY2SKETCHUP/Distributed+Rendering
This page provides details on the settings found in the Color Mapping rollout, which is used when setting up renders. Page Contents Overview Color mapping (sometimes also called tone mapping) dictates which color operations are performed between the user interface inputs and the values rendered and the way the rendered pixels are displayed through the VFB on the user monitor. With the default Type of Linear multiply and other default values, Color Mapping ensures a 1:1 mapping of all the user operations and the final result. For example, doubling a light's intensity exactly doubles its contribution to the final pixel, and cutting a shader's light reflectance in half cuts its contribution to the final pixel in half. This approach corresponds to Linear Workflow. Changing the Color Mapping settings might be desirable for artistic purposes, but doing so will deviate from the linear correspondence between user actions and the rendered result, and will also veer away from physical accuracy in the scene. To ensure the most accurate results, it's best to leave the Color Mapping settings at their default values and perform artistic color transformations during post-production. This will also ensure repeatability, consistency, and a very accurate rendered solution. UI Path: ||Render Setup window|| > V-Ray tab > Color mapping rollout (when V-Ray Adv is the Production renderer) ||Render Setup window|| > V-Ray RT tab > Color mapping rollout (when V-Ray RT is the Production renderer) Default Parameters The following parameters are visible from the Color Mapping rollout when set to the Default Render UI Mode. Type – Sets the type of color transformation. For more information, please see the Color Mapping Types example below. Linear multiply – Simply multiplies the final image colors based on their brightness without applying any changes. The default selection. Exponential – Saturates the colors based on their brightness. This can be useful in preventing burn-outs in very bright areas (for example, around light sources). This mode will clamp colors so that no value exceeds 255, or 1 in floating point values. HSV exponential – Similar to Exponential, but preserves the color hue and saturation instead of washing out the color towards white. Intensity exponential – Similar to Exponential, but preserves the ratio of the RGB color components and will only affect the intensity of the colors. Gamma correction – This option is deprecated. Applies a gamma curve to the colors. Intensity gamma – This option is deprecated. Applies a gamma curve to the intensity of the colors instead of each channel (RGB) independently. Reinhard – A blend between Exponential and Linear multiply. The degree to which one method or the other is applied to the image is specified by the Burn value parameter. The default settings for color mapping mean that V-Ray renders out the image in linear space (Reinhard color mapping with Burn value 1.0 produces a linear result). Multiplier – A general multiplier for the colors before they are corrected when Type is set to Gamma correction, Intensity gamma, or Reinhard. Burn value – Available when Type is set to Reinhard. If this value is 1.0, the result is the same as setting Type to Linear multiply. If this value is 0.0, the result is the same as Exponential. Values between 0.0 and 1.0 blend the two types. Dark multiplier – Specifies the multiplier applied to dark colors when Type is set to Linear multiply, Exponential, HSV exponential, or Intensity exponential. The default value is 1.0. 
Bright multiplier – Specifies the multiplier applied to bright colors when Type is set to Linear multiply, Exponential, HSV exponential, or Intensity exponential. The default value is 1.0. Inverse gamma – The inverse of the gamma value when Type is set to Gamma correction or Intensity gamma. For example, for a gamma value of 2.2, this value is 1/2.2, or 0.4545. Example: Color Mapping Types This example demonstrates the differences between some of the color mapping types. Note: The Sibenik Cathedral model was created by Marko Dabrovic () and was one of the models for the CGTechniques Radiosity competition. Linear multiply color mapping Linear multiply color mapping Exponential color mapping Exponential color mapping HSV exponential color mapping HSV exponential color mapping As visible in the above images, the Linear multiply mapping method clamps bright colors to white, causing bright parts of the image to appear "burnt out". Both the Exponential and HSV exponential types avoid this problem. While Exponential tends to wash out the colors and desaturate them, HSV exponential preserves the color hue and saturation. Advanced Parameters The following parameters are added to the list of visible settings available from the Color Mapping rollout when set to the Advanced Render UI Mode. Gamma – Controls the gamma correction for the output image regardless of the color mapping mode. For example, to correct the image for a 2.2-gamma display, set this parameter to 2.2. Sub-pixel mapping – Controls whether color mapping will be applied to the final image pixels or to the individual sub-pixel samples. In older versions of V-Ray, this option was always assumed to be enabled. However it is disabled by default in newer versions as this produces more correct renderings, especially when using the universal settings approach. The subpixel color mapping option is incompatible with the adaptive lights and can lead to blocky artifacts due to the different sampling rate of the light sources in different cells of the light grid. Clamp output –When enabled, colors will be clamped after color mapping. In some situations, this may be undesirable. For example, if you wish to antialias HDR parts of the image as well, turn clamping off. The value to the right specifies the level at which color components will be clamped if the Clamp output option is enabled. Affect background – When disabled, color mapping will not affect colors belonging to the background. Mode – Specifies if color mapping and gamma are burned into the final image. This option replaces the Don't affect colors (adaptation only) parameter from previous V-Ray versions. Color mapping and gamma – Both color mapping and gamma are burned into the final image. This corresponds to Don't affect colors (adaptation only) parameter set to disabled in previous V-Ray versions. None (don't apply anything) – Don't affect colors (adaptation only) parameter set to enabled in previous V-Ray versions. Color mapping only (no gamma) – Only color mapping is burned into the final image and not gamma correction. This is the default option. V-Ray will still proceed to sample the image as though both color mapping and gamma are applied, but will only apply the color correction (Linear, Reinhard, etc.) to the final result. Note: The Clamp output option, when enabled, will have an effect regardless of the value of the Mode option. 
Expert Parameters The following parameters are added to the list of visible settings available from the Color Mapping rollout when set to the Expert Render UI Mode. Linear workflow – This option is deprecated and will be removed in future versions of V-Ray. When enabled, V-Ray will automatically apply the inverse of the Gamma correction that is set in the Gamma field to all V-Ray Material (VRayMtl) materials in your scene. Note: this option is intended to be used only for quickly converting old scenes which are not set up with proper linear workflow in mind. This option is not a replacement for a proper linear workflow. For more information, please see the Linear Workflow example below. Example: Linear Workflow This example shows the same image rendered with 3 different settings for Gamma and Linear Workflow. Gamma = 1; Linear Workflow = Off Gamma = 2.2; Linear Workflow = Off Gamma = 2.2; Linear Workflow = On Recommended Settings for Proper Linear Workflow There are different ways to approach a proper linear workflow. The simplest and most effective approach is to leave all gamma-related settings in 3ds Max and V-Ray at their default values.
https://docs.chaosgroup.com/display/VRAY3MAX/Color+Mapping
An Act to amend 618.43 (1) (a) 2.; and to create 655.001 (8c) and 655.23 (3) (am) of the statutes; Relating to: authorizing out-of-state risk retention groups to provide health care liability insurance. (FE) Amendment Histories Bill Text (PDF: ) Fiscal Estimates AB808 ROCP for Committee on Insurance (PDF: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2013 Senate Bill 609 - Available for scheduling
https://docs.legis.wisconsin.gov/2013/proposals/ab808
Superv and share them with others" in this manual. You access the Job Manager by clicking the Jobs link in the upper right hand corner of the screen. When you click Jobs, Splunk opens a separate browser window for the Job Manager. The Job Manager displays a list of search jobs that breaks down into a few categories: - Search jobs resulting from searches that you have recently run manually. - Search jobs that are artifacts of searches that are run when dashboards are loaded. - Search jobs that are artifacts of scheduled searches (searches that are designed to run on a regular interval). - Search "Use search actions" in this jobs and jobs management" in the Admin manual. This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/4.3/User/ManageYourSearchJobs
2019-04-18T16:45:29
CC-MAIN-2019-18
1555578517745.15
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Set a default host for a Splunk server
An event's host value is the IP address, host name, or fully qualified domain name of the physical device on the network from which the event originates. You can set the default host value for a Splunk server either through Splunk Manager or by editing inputs.conf.
Set the default host value using Manager
Use Manager to set the default host value for a server:
1. In Splunk Web, click on the Manager link in the upper right-hand corner of the screen.
2. In Manager, click System settings under System.
3. On the System settings page, update the default host value and save your changes.
Set the default host value by editing inputs.conf
The default host assignment is stored in inputs.conf, which is created when you install Splunk. You can modify the host value by editing that file in $SPLUNK_HOME/etc/system/local/ or in your own custom application directory in $SPLUNK_HOME/etc/apps/. Splunk places the host assignment in the [default] stanza of inputs.conf. Restart Splunk to enable any changes you make to inputs.conf.
Note: By default, the host attribute is set to the variable $decideOnStartup, which means that it's set to the hostname of the machine splunkd is running on. The value is re-interpreted each time splunkd starts up.
Override the default host value for data received from a specific input
If a specific input should report a different host value, you can override the default at the input level.
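For illustration, a default host assignment in inputs.conf might look like the following sketch; the path comment and the host value are placeholders rather than values taken from this article:

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = webserver01.example.com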
https://docs.splunk.com/Documentation/Splunk/5.0.18/Data/SetadefaulthostforaSplunkserver
2019-04-18T16:52:02
CC-MAIN-2019-18
1555578517745.15
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Create new columns using LINQ
These are the necessary clauses you need to add to your LINQ query in order to perform a create column operation. For example, the following query creates two columns showing the corresponding latitude and longitude values of the IPs in the ClientIpAddress column of the table. And this one displays the definitions of the status codes in a new column. Create column operations are classified into the following groups: Order group, Arithmetic group, String group, General group, Date group, Name group, Network group, Geolocation group, Logic group, Flow group, Web group, Mathematical group, Conversion group, Cryptography group, and Packet group.
https://docs.devo.com/confluence/ndt/searching-data/working-in-the-search-window/building-a-query/build-a-query-using-linq/create-new-columns-using-linq
2019-04-18T16:20:30
CC-MAIN-2019-18
1555578517745.15
[]
docs.devo.com
Energy UK responds to the announcement of the Feed-In Tariffs Scheme consultation Responding to the opening of the Feed-In Tariffs Scheme consultation, an Energy UK spokesperson said: “The Feed-in-Tariff transformed the small-scale renewable energy industry leading to over 800,000 installations, creating new markets and driving down costs. It also played a role in tackling fuel poverty and improving the energy efficiency of UK houses. “We are looking forward to working with Government as it is vital to establish a predictable route to market for small-scale renewable generation in order to achieve our ambitious climate change goals and to support the ongoing transition to an energy market which benefits consumer and environment alike.”
https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/412-2018/6706-energy-uk-responds-to-the-announcement-of-the-feed-in-tariffes-scheme-consultation.html
2019-04-18T16:31:44
CC-MAIN-2019-18
1555578517745.15
[]
docs.energy-uk.org.uk
GlusterFS driver uses GlusterFS, an open source distributed file system, as the storage backend for serving file shares to manila clients. The following parameters in the manila’s configuration file need to be set: The following configuration parameters are optional: If Ganesha NFS server is used ( glusterfs_nfs_server_type = Ganesha), then by default the Ganesha server is supposed to run on the manila host and is managed by local commands. If it’s deployed somewhere else, then it’s managed via ssh, which can be configured by the following parameters: In lack of glusterfs_ganesha_server_password ssh access will fall back to key based authentication, using the key specified by glusterfs_path_to_private_key, or, in lack of that, a key at one of the OpenSSH-style default key locations (~/.ssh/id_{r,d,ecd}sa). Layouts have also their set of parameters, see Layouts about that. New in Liberty, multiple share layouts can be used with glusterfs driver. A layout is a strategy of allocating storage from GlusterFS backends for shares. Currently there are two layouts implemented: directory mapped layout (or directory layout, or dir layout for short): a share is backed by top-level subdirectories of a given GlusterFS volume. Directory mapped layout is the default and backward compatible with Kilo. The following setting explicitly specifies its usage: glusterfs_share_layout = layout_directory.GlusterfsDirectoryMappedLayout. Options: glusterutility. If it’s of the format <username>@<glustervolserver>:/<glustervolid>, then we ssh to <username>@<glustervolserver> to execute gluster(<username> is supposed to have administrative privileges on <glustervolserver>). /mnt, where $state_path defaults to /var/lib/manila) Limitations: volume mapped layout (or volume layout, or vol layout for short): a share is backed by a whole GlusterFS volume. Volume mapped layout is new in Liberty. It can be chosen by setting glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout. Options (required): Volume mapped layout is implemented as a common backend of the glusterfs and glusterfs-native drivers; see the description of these options in GlusterFS Native driver: Manila driver configuration setting. A special configuration choice is glusterfs_nfs_server_type = Gluster glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout that is, Gluster NFS used to export whole volumes. All other GlusterFS backend configurations (including GlusterFS set up with glusterfs-native) require the nfs.export-volumes = off GlusterFS setting. Gluster NFS with volume layout requires nfs.export-volumes = on. nfs.export-volumes is a cluster-wide setting, so a given GlusterFS cluster cannot host a share backend with Gluster NFS + volume layout and other share backend configurations at the same time. There is another caveat with nfs.export-volumes: setting it to on without enough care is a security risk, as the default access control for the volume exports is “allow all”. For this reason, while the nfs.export-volumes = off setting is automatically set by manila for all other share backend configurations, nfs.export-volumes = on is not set by manila in case of a Gluster NFS with volume layout setup. It’s left to the GlusterFS admin to make this setting in conjunction with the associated safeguards (that is, for those volumes of the cluster which are not used by manila, access restrictions have to be manually configured through the nfs.rpc-auth-{allow,reject} options). 
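As a minimal sketch of a backend section in manila.conf for the default directory layout with Gluster NFS, the following is illustrative only; the share_driver path and the glusterfs_target option name follow standard manila conventions rather than being quoted from this page, and the volume address is a placeholder:

[glusternfs1]
share_backend_name = GLUSTERNFS1
share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
glusterfs_target = remoteuser@glustervolserver:/glustervol
glusterfs_nfs_server_type = Gluster
glusterfs_share_layout = layout_directory.GlusterfsDirectoryMappedLayout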
https://docs.openstack.org/manila/queens/admin/glusterfs_driver.html
2019-04-18T16:46:33
CC-MAIN-2019-18
1555578517745.15
[]
docs.openstack.org
Step 7: Create an AWS DMS Replication Instance After we validate the schema structure between source and target databases, as described preceding, we proceed to the core part of this walkthrough, which is the data migration. The following illustration shows a high-level view of the migration process. Open the AWS DMS console and choose Create Migration. If you are signed in as an AWS Identity and Access Management (IAM) user, you must have the appropriate permissions to access AWS DMS. For the Advanced section, leave the default settings as they are, and choose Next.
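If you prefer to script this step, a replication instance can also be created with the AWS CLI; the identifier, instance class, and storage size below are illustrative placeholders rather than values from this walkthrough:

aws dms create-replication-instance \
    --replication-instance-identifier walkthrough-repl-instance \
    --replication-instance-class dms.t2.medium \
    --allocated-storage 50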
https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Aurora.Steps.CreateReplicationInstance.html
2019-04-18T16:54:55
CC-MAIN-2019-18
1555578517745.15
[array(['images/datarep-conceptual2.png', 'AWS Database Migration Service migration process'], dtype=object)]
docs.aws.amazon.com
If, after the trial period has ended, you decide to continue using Review Assistant on a permanent basis, you may want to move the Review Assistant server to another PC. Install Review Assistant on the new server PC (make sure to include the server part in the installation). Stop the Devart Review Assistant service (to quickly locate it, click the Windows Start orb, type "View Local Services" in the search field, and press Enter). Copy settings.xml from the old server to the new one. Restart the Devart Review Assistant service.
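As a sketch, the service can also be stopped and restarted from an elevated PowerShell prompt; the display-name filter below is an assumption and may need to be adjusted to match the actual service name on your machine:

# Stop the Review Assistant server service before copying settings.xml
Get-Service -DisplayName "*Review Assistant*" | Stop-Service

# ...copy settings.xml to the new server, then start the service again...
Get-Service -DisplayName "*Review Assistant*" | Start-Service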
https://docs.devart.com/review-assistant/server-maintenance/moving-review-server.html
2019-04-18T16:47:54
CC-MAIN-2019-18
1555578517745.15
[]
docs.devart.com
Using OData sources with Business Connectivity Services in SharePoint
Learn how to get started creating external content types based on OData sources and using that data in SharePoint or Office 2013 components.
OData and the OData connector
The new OData connector enables SharePoint to communicate with OData providers.
Prerequisites for working with the BCS OData connector
To develop OData-based external content types, you will need the following: Visual Studio 2012; SharePoint; Office Developer Tools for Visual Studio 2012. For information about how to set up your development environment, see Set up a general development environment for SharePoint.
Creating external content types for OData data sources
In this section:
How to: Create an external content type from an OData source in SharePoint
How to: Create an OData data service for use as a BCS external system
How to: Create an external list using an OData data source in SharePoint
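As a concrete illustration of what an OData source looks like, the following URLs use the publicly documented Northwind sample service rather than anything from this article; the service root, its $metadata document, and a simple test query are the pieces you typically verify before registering the source as a BCS external system:

Service root:  https://services.odata.org/V3/Northwind/Northwind.svc/
Metadata:      https://services.odata.org/V3/Northwind/Northwind.svc/$metadata
Sample query:  https://services.odata.org/V3/Northwind/Northwind.svc/Customers?$top=5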
https://docs.microsoft.com/en-us/sharepoint/dev/general-development/using-odata-sources-with-business-connectivity-services-in-sharepoint
2019-04-18T16:49:10
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Microduino Core (Atmega168PA@16M,5V). Use the 168pa16m ID for the board option in "platformio.ini" (Project Configuration File):

[env:168pa16m]
platform = atmelavr
board = 168pa16m

You can override default Microduino Core (Atmega168PA@16M,5V) settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest 168pa16m.json. For example, board_build.mcu, board_build.f_cpu, etc.

[env:168pa16m]
platform = atmelavr
board = 168pa16m

; change microcontroller
board_build.mcu = atmega168p

; change MCU frequency
board_build.f_cpu = 16000000L

Debugging
PIO Unified Debugger currently does not support the Microduino Core (Atmega168PA@16M,5V) board.
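Once the environment is defined, the project can be built for this board from the command line; the environment name matches the one defined above:

pio run -e 168pa16m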
https://docs.platformio.org/en/latest/boards/atmelavr/168pa16m.html
2019-04-18T17:17:21
CC-MAIN-2019-18
1555578517745.15
[]
docs.platformio.org
Note EdX does not support this tool. The recommender provides learners with a list of online resources related to the course content. These resources are jointly managed by course team members and the learners. The most common use of the recommender is for remediation of errors and misconceptions, followed by providing additional, more advanced resources. For example, if a learner is working through a physics problem, the recommender could be used to show links to concepts used in the problem on Wikipedia, PhET, and OpenStax, as well as in the course itself. The recommender can help fill complex knowledge gaps and help move learners in the right direction. Learners and course team members can complete the following tasks with the recommender. Course team members can endorse useful resources or remove irrelevant entries. If you use the recommender, you should inform learners through course content or course updates about the tool. An example of a recommender in a course follows. The upper part of the figure illustrates a question in a problem set where the recommender is attached. The middle of the figure shows a list of resources and several gadgets for users to work on the resources. The bottom portion shows additional information about a given resource on mouse-over event. Course team members should be sure to review all supplemental materials to assure that they are accessible before making them available through your course. For more information, see Accessibility Best Practices for Developing Course Content. Before you can add a recommender component to your course, you must enable the recommender tool in Studio. To enable the recommender tool in Studio, you add the "recommender" key to the Advanced Module List on the Advanced Settings page. (Be sure to include the quotation marks around the key value.) For more information, see Enabling Additional Exercises and Tools. To add a recommender to a course, follow these steps.
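For example, if no other advanced modules are enabled, the Advanced Module List value is simply the following JSON list; if other modules are already listed, add "recommender" to the existing list instead:

["recommender"]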
https://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/open-release-ginkgo.master/exercises_tools/recommenderXBlock.html
2019-04-18T17:16:55
CC-MAIN-2019-18
1555578517745.15
[]
edx.readthedocs.io
Composer Product Videos This page contains Composer product videos. Stay tuned for more videos to come. To request a video, please email [email protected]. Composer Installation Video Below is a video tutorial on Composer 8.1.4 Installation. Depending on the flavor of Eclipse you have installed, your interface may appear slightly different than that shown in the video. Getting Started After Installation This tutorial shows how to immediately get familiar with Composer by using a sample application. Uninstalling Composer This video tutorial shows how to uninstall Composer when you want to install a later version. Moving to Composer from IRD A video on using Composer to create routing strategies instead of IRD and the similarities between the two. Introduction to the Interface Below is a video that can function as a brief introduction to the Composer user interface. Using Templates to Create a Routing Workflow Below is a video tutorial on using Composer templates to create a workflow that routes interactions to targets based on a percent allocation. Integrated Voice and Route Application This video shows an example workflow that integrates GVP voice self-service with Orchestration routing. Defining Agents, Agent Groups, and Skills This video shows how Agent, Agent Group, and Skill objects are defined in Genesys Administrator prior to using them for skills-based routing in Composer. Skills-Based Routing This video presents a simple example of the Composer aspect of routing chat interactions to Agent Groups. This example is based on a multimedia interaction (chat), which uses Composer's Route Interaction block, Targets property. A voice interaction uses the Target block and properties, such as the Targets property. Once ORS/URS identify a routing target, other servers are involved in the process of delivering the interaction to the agent desktop. Debugging VoiceXML Applications Below is a video tutorial on debugging VoiceXML applications. Deploying a Composer Application to a Web Server Below is a video tutorial on exporting and deploying a Composer application to a web server. Using the Database Blocks Below is a video tutorial on using the Database Blocks. Creating a Simple Grammar Below is a video tutorial on building a simple grammar with the Grammar Menu block. Using the Web Service Block Below is a video tutorial on using the Web Service block. Feedback Comment on this article:
https://docs.genesys.com/Documentation/Composer/latest/Deployment/CompVideos
2019-06-16T06:36:43
CC-MAIN-2019-26
1560627997801.20
[array(['/images/3/37/Video_mockup.png', None], dtype=object)]
docs.genesys.com
Pre-Upgrade tasks when upgrading to System Center Operations Manager Perform the following pre-upgrade tasks in the order presented before you begin the upgrade process to System Center 2016 - Operations Manager or version 1801. Review the event logs for Operations Manager on the management servers to look for recurring warning or critical events. Address them and save a copy of the event logs before you perform your upgrade. Cleanup the database (ETL table) As part of upgrade to System Center Operations Manager installation (setup) includes a script to cleanup ETL tables, grooming the database. However, in cases where there are a large number of rows (greater than 100,000) to cleanup, we recommend running the script before starting the upgrade to promote a faster upgrade and prevent possible timeout of setup. Performing this pre-upgrade task in all circumstances ensures a more efficient installation. To cleanup ETL To cleanup the ETL table, run the following script on the SQL Server hosting the Operations Manager database: -- . Note Cleanup of ETL can require several hours to complete. Remove Agents from pending management Before you upgrade a management server, remove any agents that are in Pending Management. Log on to the Operations console by using an account that is a member of the Operations Manager Administrators role for the Operations Manager management group. In the Administration pane, expand Device Management, and then click Pending Management. Right-click each agent, and then click Approve or Reject. Disable notification subscriptions You must disable notification subscription before you upgrade the management group to ensure that notifications are not sent during the upgrade process. Log on to the Operations console account that is a member of the Operations Manager Administrators role for the Operations Manager management group. In the Operations console, select the Administration view. In the navigation pane, expand Administration, expand the Notifications container, and then click Subscriptions. Select each subscription, and then click Disable in the Actions pane. Note Multiselect does not work when you are disabling subscriptions. Disable connectors Refer to the non-Microsoft connector documentation for any installed Connectors to determine the services used for each Connector. To stop a service for a Connector, perform the following steps: On the Start menu, point to Administrative Tools, and then click Services. In the Name column, right-click the Connector that you want to control, and then click Stop. Verify the Operations Manager database has more than 50 percent free You must verify that the operational database has more than 50 percent of free space before you upgrade the management group because the upgrade might fail if there is not enough space. Ensure that the transactions logs are 50 percent of the total size of the operational database. On the computer that hosts the operational database, open SQL Server Management Studio. In the Object Explorer, expand Databases. Right-click the Operations Manager database, select to Reports, Standard Reports, and then click Disk Usage. View the Disk Usage report to determine the percentage of free space. If the database does not have 50 percent free, perform the following steps to increase it for the upgrade: On the computer that hosts the operational database, open SQL Server Management Studio. In the Connect to Server dialog box, in the Server Type list, select Database Engine. 
In the Server Name list, select the server and instance for your operational database (for example, computer\INSTANCE1). In the Authentication list, select Windows Authentication, and then click Connect. In the Object Explorer pane, expand Databases, right-click the Operations Manager database, and then click Properties. In the Database Properties dialog box, under Select a page, click Files. In the results pane, increase the Initial Size value for the MOM_DATA database by 50 percent. Note This step is not required if free space already exceeds 50 percent. Set the Initial Size value for the MOM_LOG transaction log to be 50 percent of the total size of the database. For example, if the operational database size is 100 GB, the log file size should be 50 GB. Then click OK. Back up the Operations Manager databases Obtain verified recent backups of the operational database and of the data warehouse database before you upgrade the secondary management server. You should also create backups of databases for optional features, such as the Reporting and the Audit Collection Services database before you upgrade them. For more information, see Create a Full Database Backup (SQL Server). Stop Operations Manager services on Management servers Before upgrading the first management server in your management group, it is recommended to stop the Operations Manager services - System Center Data Access, System Center Configuration, and Microsoft Monitoring Agent on all other management servers to avoid any issues while the operational and data warehouse databases are being updated. Increase agent HealthService cache size To ensure the agents can queue data during the upgrade, update the following registry setting on the agents manually or automated with your configuration management or orchestration solution:). Once you have completed the upgrade of the management group, you can reset it back to the default value. Next steps To continue with the upgrade, review Upgrade overview.
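As an illustrative T-SQL sketch of the database sizing steps above (the database name OperationsManager and the target sizes are assumptions for this example; MOM_DATA and MOM_LOG are the logical file names referred to in the procedure), the same adjustment can be made from a query window:

USE OperationsManager;
EXEC sp_spaceused;   -- check current size and free space

-- Grow the data file so that at least 50 percent is free (target size is illustrative)
ALTER DATABASE OperationsManager
MODIFY FILE (NAME = MOM_DATA, SIZE = 150GB);

-- Set the transaction log to roughly 50 percent of the database size
ALTER DATABASE OperationsManager
MODIFY FILE (NAME = MOM_LOG, SIZE = 75GB);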
https://docs.microsoft.com/en-us/system-center/scom/deploy-upgrade-pretasks?view=sc-om-2019
2019-06-16T06:48:23
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
What's New in DNS Server in Windows Server Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016 This topic describes the Domain Name System (DNS) server functionality that is new or changed in Windows Server 2016. In Windows Server 2016, DNS Server offers enhanced support in the following areas. DNS Policies.. You can also use DNS policies for Active Directory integrated DNS zones. For more information, see the DNS Policy Scenario Guide. Response Rate Limiting You can configure RRL settings to control how to respond to requests to a DNS client when your server receives several requests targeting the same client. By doing this, you can prevent someone from sending a Denial of Service (Dos) attack using your DNS servers. For instance, a bot net can send requests to your DNS server using the IP address of a third computer as the requestor. Without RRL, your DNS servers might respond to all the requests, flooding the third computer. When you use RRL, you can configure the following settings: Responses per second. This is the maximum number of times the same response will be given to a client within one second. Errors per second. This is the maximum number of times an error response will be sent to the same client within one second. Window. This is the number of seconds for which responses to a client will be suspended if too many requests are made. Leak rate. This is how frequently the DNS server will respond to a query during the time responses are suspended. For instance, if the server suspends responses to a client for 10 seconds, and the leak rate is 5, the server will still respond to one query for every 5 queries sent. This allows the legitimate clients to get responses even when the DNS server is applying response rate limiting on their subnet or FQDN. TC rate. This is used to tell the client to try connecting with TCP when responses to the client are suspended. For instance, if the TC rate is 3, and the server suspends responses to a given client, the server will issue a request for TCP connection for every 3 queries received. Make sure the value for TC rate is lower than the leak rate, to give the client the option to connect via TCP before leaking responses. Maximum responses. This is the maximum number of responses the server will issue to a client while responses are suspended. White list domains. This is a list of domains to be excluded from RRL settings. White list subnets. This is a list of subnets to be excluded from RRL settings. White list server interfaces. This is a list of DNS server interfaces to be excluded from RRL settings. DANE support You can use DANE support (RFC 6394 and 6698) to specify to your DNS clients what CA they should expect certificates to be issued from for domains names hosted in your DNS server. This prevents a form of man-in-the-middle attack where someone is able to corrupt a DNS cache and point a DNS name to their own IP address. For instance, imagine you host a secure website that uses SSL at by using a certificate from a well-known authority named CA1. Someone might still be able to get a certificate for from a different, not-so-well-known, certificate authority named CA2. Then, the entity hosting the fake website might be able to corrupt the DNS cache of a client or server to point to their fake site. The end user will be presented a certificate from CA2, and may simply acknowledge it and connect to the fake site. 
With DANE, the client would make a request to the DNS server for contoso.com asking for the TLSA record and learn that the certificate for was issues by CA1. If presented with a certificate from another CA, the connection is aborted. Unknown record support An "Unknown Record" is an RR whose RDATA format is not known to the DNS server. The newly added support for unknown record (RFC 3597) types means that you can add the unsupported record types into the Windows DNS server zones in the binary on-wire format. The windows caching resolver already has the ability to process unknown record types. Windows DNS server will not do any record specific processing for the unknown records, but will send it back in responses if queries are received for it. IPv6 root hints The IPV6 root hints, as published by IANA, have been added to the windows DNS server. The internet name queries can now use IPv6 root servers for performing name resolutions. Windows PowerShell support The following new Windows PowerShell cmdlets and parameters are introduced in Windows Server 2016. Add-DnsServerRecursionScope. This cmdlet creates a new recursion scope on the DNS server. Recursion scopes are used by DNS policies to specify a list of forwarders to be used in a DNS query. Remove-DnsServerRecursionScope. This cmdlet removes existing recursion scopes. Set-DnsServerRecursionScope. This cmdlet changes the settings of an existing recursion scope. Get-DnsServerRecursionScope. This cmdlet retrieves information about existing recursion scopes. Add-DnsServerClientSubnet. This cmdlet creates a new DNS client subnet. Subnets are used by DNS policies to identify where a DNS client is located. Remove-DnsServerClientSubnet. This cmdlet removes existing DNS client subnets. Set-DnsServerClientSubnet. This cmdlet changes the settings of an existing DNS client subnet. Get-DnsServerClientSubnet. This cmdlet retrieves information about existing DNS client subnets. Add-DnsServerQueryResolutionPolicy. This cmdlet creates a new DNS query resolution policy. DNS query resolution policies are used to specify how, or if, a query is responded to, based on different criteria. Remove-DnsServerQueryResolutionPolicy. This cmdlet removes existing DNS policies. Set-DnsServerQueryResolutionPolicy. This cmdlet changes the settings of an existing DNS policy. Get-DnsServerQueryResolutionPolicy. This cmdlet retrieves information about existing DNS policies. Enable-DnsServerPolicy. This cmdlet enables existing DNS policies. Disable-DnsServerPolicy. This cmdlet disables existing DNS policies. Add-DnsServerZoneTransferPolicy. This cmdlet creates a new DNS server zone transfer policy. DNS zone transfer policies specify whether to deny or ignore a zone transfer based on different criteria. Remove-DnsServerZoneTransferPolicy. This cmdlet removes existing DNS server zone transfer policies. Set-DnsServerZoneTransferPolicy. This cmdlet changes settings of an existing DNS server zone transfer policy. Get-DnsServerResponseRateLimiting. This cmdlet retrieves RRL settings. Set-DnsServerResponseRateLimiting. This cmdlet changes RRL settigns. Add-DnsServerResponseRateLimitingExceptionlist. This cmdlet creates an RRL exception list on the DNS server. Get-DnsServerResponseRateLimitingExceptionlist. This cmdlet retrieves RRL excception lists. Remove-DnsServerResponseRateLimitingExceptionlist. This cmdlet removes an existing RRL exception list. Set-DnsServerResponseRateLimitingExceptionlist. This cmdlet changes RRL exception lists. Add-DnsServerResourceRecord. 
This cmdlet was updated to support unknown record types. Get-DnsServerResourceRecord. This cmdlet was updated to support unknown record types. Remove-DnsServerResourceRecord. This cmdlet was updated to support unknown record types. Set-DnsServerResourceRecord. This cmdlet was updated to support unknown record types. For more information, see the following Windows Server 2016 Windows PowerShell command reference topics.
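As a sketch of how the RRL settings described earlier map onto these cmdlets (the numeric values are illustrative, not recommendations, and parameter names should be confirmed with Get-Help on your server):

# Show the current response rate limiting settings
Get-DnsServerResponseRateLimiting

# Allow at most 10 identical responses and 10 errors per second to the same client,
# and suspend further responses for a 5-second window
Set-DnsServerResponseRateLimiting -ResponsesPerSec 10 -ErrorsPerSec 10 -WindowInSec 5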
https://docs.microsoft.com/en-us/windows-server/networking/dns/what-s-new-in-dns-server
2019-06-16T07:38:17
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
A rollover typically works like this:
- you request/create new key material
- you publish the new validation key in addition to the current one. You can use the AddValidationKeys builder extension method for this.
Data protection
Cookie authentication in ASP.NET Core (or anti-forgery in MVC) uses the ASP.NET Core data protection feature. Depending on your deployment scenario, this might require additional configuration. See the Microsoft docs for more information.
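A minimal sketch of that rollover in the startup code, assuming newSigningCert and oldSigningCert are X509Certificate2 instances you have already loaded and services is the IServiceCollection in ConfigureServices (the exact builder overloads vary between IdentityServer versions):

// Sign new tokens with the new key, but keep publishing the old key so that
// tokens issued before the rollover can still be validated.
services.AddIdentityServer()
    .AddSigningCredential(newSigningCert)
    .AddValidationKey(oldSigningCert);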
http://docs.identityserver.io/en/latest/topics/crypto.html
2019-06-16T07:08:50
CC-MAIN-2019-26
1560627997801.20
[]
docs.identityserver.io
For cPanel & WHM version 11.50.
https://docs.cpanel.net/display/1150Docs/The+cPanel+Home+Interface
2019-06-16T07:06:10
CC-MAIN-2019-26
1560627997801.20
[]
docs.cpanel.net
x-LogConfigServerTrace
Section: Log
Default Value: No default value
Valid Values: yes, no
Changes Take Effect: Immediately
Dependencies: None
Specifies whether or not Data Aggregator writes Configuration Server process data to a log file. This option is used for debugging only.
8.5.212.08 Workforce Management Data Aggregator Release Notes
What's New
This release contains the following new features and enhancements:
- WFM Data Aggregator has been updated with the latest built-in framework libraries and with extra logging when the x-LogConfigServerTrace option is set to yes. In addition, the WFM Data Aggregator Installation Package (IP) now contains the wfmda.pdb file. (WFM-31556)
Resolved Issues
This release contains no resolved issues.
Upgrade Notes
No special procedure is required to upgrade to release 8.5.212.08.
https://docs.genesys.com/Documentation/RN/latest/wm-da85rn/wm-da8521208
2019-06-16T07:39:59
CC-MAIN-2019-26
1560627997801.20
[]
docs.genesys.com
You can use the Virtual Appliance Management Interface Database page to monitor or update the configuration of the appliance database. You can also use it to change the master node designation and the synchronization mode used by the database. When operating in synchronous mode, vRealize Automation invokes automatic failover. In the event of master node failure, the next available replica node will automatically become the new master. The failover operation requires 10 to 30 seconds on a typical vRealize Automation deployment. Prerequisites Install and configure vRealize Automation according to the appropriate instructions in Installing vRealize Automation. Log in to vRealize Automation Appliance Management as root using the password you entered when you deployed the vRealize Automation appliance.
https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-088041C1-7A84-4081-8507-0D51EFCE325B.html
2019-06-16T06:42:00
CC-MAIN-2019-26
1560627997801.20
[]
docs.vmware.com
Add a local user account
By adding a local user account, you can provide users with direct access to your ExtraHop appliances and restrict their access as needed by their role in your organization.
- Log into the Admin UI on the Discover or Command appliance.
- Password: Type a password for the new user, which must be a minimum of 5 characters. Confirm Password: Re-type the password from the Password field.
- In the User Privileges section, select the desired privileges for the user.
- Click Save.
https://docs.extrahop.com/7.2/add_local_user/
2019-06-16T07:12:31
CC-MAIN-2019-26
1560627997801.20
[]
docs.extrahop.com
set DSRM password
Applies To: Windows Server 2003, Windows Server 2008, Windows Server 2003 R2, Windows Server 2012, Windows Server 2003 with SP1, Windows 8
Syntax: Reset Password on server %s
Examples
To reset the DSRM password on a domain controller named DC2, type the following command at the Reset DSRM Administrator Password: prompt, and then press ENTER:
reset password on server DC2
Additional references: group membership evaluation, security account management, semantic database analysis
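Put together, an illustrative ntdsutil session for this example looks roughly like the following; the exact prompt text can differ between Windows Server versions:

C:\> ntdsutil
ntdsutil: set dsrm password
Reset DSRM Administrator Password: reset password on server DC2
Please type password for DS Restore Mode Administrator Account: ********
Please confirm new password: ********
Password has been set successfully.
Reset DSRM Administrator Password: quit
ntdsutil: quit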
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc754363(v%3Dws.11)
2019-06-16T07:31:16
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
NdisFSendNetBufferLists function Filter drivers call the NdisFSendNetBufferLists function to send a list of network data buffers. Syntax void NdisFSendNetBufferLists( NDIS_HANDLE NdisFilterHandle, __drv_aliasesMem PNET_BUFFER_LIST NetBufferList, NDIS_PORT_NUMBER PortNumber, ULONG SendFlags ); Parameters NdisFilterHandle The NDIS handle that identifies this filter module. NDIS passed the handle to the filter driver in a call to the FilterAttach function. NetBufferList A pointer to a linked list of NET_BUFFER_LIST structures. Each NET_BUFFER_LIST structure describes a list of NET_BUFFER structures. PortNumber A port number that identifies a miniport adapter port. Miniport adapter port numbers are assigned by calling the NdisMAllocatePort function. A zero value identifies the default port of a miniport adapter. SendFlags Flags that define attributes for the send operation. The flags can be combined with an OR operation. To clear all the flags, set this member to zero. This function supports the following flags: NDIS_SEND_FLAGS_DISPATCH_LEVEL Specifies that the current IRQL is DISPATCH_LEVEL. For more information about this flag, see Dispatch IRQL Tracking. NDIS_SEND_FLAGS_CHECK_FOR_LOOPBACK Specifies that NDIS should check for loopback. By default, NDIS does not loop back data to the driver that submitted the send request. An overlying driver can override this behavior by setting this flag. When this flag is set, NDIS identifies all the NET_BUFFER structures that contain data that matches the receive criteria for the binding. NDIS indicates NET_BUFFER structures that match the criteria to the overlying driver. This flag has no affect on checking for loopback, or looping back, on other bindings. NDIS_SEND_FLAGS_SWITCH_SINGLE_SOURCE If this flag is set, all packets in a linked list of NET_BUFFER_LIST structures originated from the same Hyper-V extensible switch source port. For more information, see Hyper-V Extensible Switch Send and Receive Flags. NDIS_SEND_FLAGS_SWITCH_DESTINATION_GROUP If this flag is set, all packets in a linked list of NET_BUFFER_LIST structures are to be forwarded to the same extensible switch destination port. For more information, see Hyper-V Extensible Switch Send and Receive Flags. Return Value None Remarks After a filter driver calls the NdisFSendNetBufferLists function, NDIS submits the NET_BUFFER_LIST structures to the underlying drivers. A filter driver can originate send requests or it can filter the requests that it receives from overlying drivers. If the filter driver originates send requests, the driver must allocate buffers pools. The filter driver allocates each NET_BUFFER_LIST structure from a pool. The filter driver can preallocate NET_BUFFER_LIST structures or it can allocate the structures just before calling NdisFSendNetBufferLists and then free them when the send operation is complete. A filter driver must set the SourceHandle member of each NET_BUFFER_LIST structure that it originates to the same value that it passes to the NdisFilterHandle parameter. The filter handle provides the information that NDIS requires to return the NET_BUFFER_LIST structure to the filter driver. The filter driver must not modify the SourceHandle member in any NET_BUFFER_LIST structures that it did not originate. Before calling NdisFSendNetBufferLists, a filter driver can set information that accompanies the send request with the NET_BUFFER_LIST_INFO macro. The underlying drivers can retrieve this information with the NET_BUFFER_LIST_INFO macro. 
NDIS calls a filter driver's FilterSendNetBufferLists function to pass on send requests from overlying drivers. A filter driver can pass on such requests by passing the NET_BUFFER_LIST structures that it received in FilterSendNetBufferLists to NdisFSendNetBufferLists. As soon as a filter driver calls the NdisFSendNetBufferLists function, it relinquishes ownership of the NET_BUFFER_LIST structures and all associated resources. NDIS calls the FilterSendNetBufferListsComplete function to return the structures and data to the filter driver. NDIS can collect the structures and data from multiple send requests into a single linked list of NET_BUFFER_LIST structures before it passes the list to FilterSendNetBufferListsComplete. See also: FilterSendNetBufferListsComplete.
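The pass-through case described above can be sketched as follows; MY_FILTER_CONTEXT is a hypothetical driver-defined structure that stores the handle received in FilterAttach, and error, pause, and loopback handling are omitted:

// Sketch only: assumes <ndis.h> and the usual NDIS 6.x filter driver boilerplate.
typedef struct _MY_FILTER_CONTEXT {
    NDIS_HANDLE FilterHandle;   // NdisFilterHandle received in FilterAttach
} MY_FILTER_CONTEXT, *PMY_FILTER_CONTEXT;

VOID
MyFilterSendNetBufferLists(
    NDIS_HANDLE      FilterModuleContext,
    PNET_BUFFER_LIST NetBufferLists,
    NDIS_PORT_NUMBER PortNumber,
    ULONG            SendFlags
    )
{
    PMY_FILTER_CONTEXT filter = (PMY_FILTER_CONTEXT) FilterModuleContext;

    // Hand the send request to the next driver down the stack. Ownership of the
    // NET_BUFFER_LIST structures passes to NDIS until they are returned through
    // FilterSendNetBufferListsComplete.
    NdisFSendNetBufferLists(filter->FilterHandle,
                            NetBufferLists,
                            PortNumber,
                            SendFlags);
}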
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ndis/nf-ndis-ndisfsendnetbufferlists
2019-06-16T07:45:29
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Security Data ownership and privacy All Quality Clouds customers remain in full control of their data at all times as a result of the following set-up: - You can define your own credentials with which Quality Clouds connects to your instance, remaining in full control of the data to be accessed. - Quality Clouds only accesses the contents of the tables that hold code of Configuration Elements, and not the transactional or business sensitive data stored in the instance. - It only stores the summary data related to the issues detected in the customer instance, as well as a list of Configuration items which have been modified. The source code is analysed in-memory, and it is never persisted. - The user defined by you to access the instances can be an administrator with web services access only. Additionally, you can always create restrictions by using ACLs (Access Control Lists) to the instance(s) tables. Quality Clouds adheres to the following security recommendation to let you stay in full control: - As owner of the data, you can always ask to have it deleted. - No names, email address or any other personal user information are captured from the instance. Access control Quality Clouds is responsible for the management of user accounts within the scan instance under our software product with your SaaS platform. This includes the creation of the customer record and each of your individual user accounts, the password expiration policy and an initial password which the user has to change on the first login. Authentication and passwords To access the application each user logs in with its own credentials (providing a unique username and password). Quality Clouds enforces user password minimum strength and expiration policy, which can be set to a range from 1 day to 1 year period. When a password expires, you as the user are prompted for the current password, and will be asked to choose a new one. The application automatically ends each idle user’s sessions (with no user interaction) after 10 minutes. Encryption Quality Clouds makes use of encryption for data in transit and sensible/personal data at rest. Any third party credentials needed to perform a software quality scan in a customer's application instance are stored encrypted using AES-256 algorithm. Any personal data registered in a Quality Clouds customer account: user name, emails and phone number, is encrypted before being persisted. User passwords are stored hashed in the Quality Clouds database using SHA-1 algorithm. Raw passwords are never persisted. Quality Clouds website owns an SSL certificate which forces use of Transport Layer Security (TLS) connections with 256 bit encryption. Encryption in transit for user traffic For all user access, Quality Clouds accesses customer instances over the Internet using forced TLS encryption. IP Whitelisting If your infrastructure implements IP Whitelisting, please make sure that you add these IPs to the allowed origins, so that requests from Quality Clouds are not blocked: Location: EU (Ireland) 52.208.182.137 34.252.123.196 34.255.67.15 What's here
https://docs.qualityclouds.com:8443/qcd/security-3997789.html
2019-06-16T07:43:43
CC-MAIN-2019-26
1560627997801.20
[]
docs.qualityclouds.com:8443
Add Formulas to Questions Introduction Formulas can be created by you and added to a form in the form builder. Formulas will automatically perform an operation on any number values entered on the mobile app when the form is filled out. How to add a formula to a question - Within the Form Builder in the web app, click into the form you want to work in. - Drag or click on the "Formula" field to add it to your form. For example, you may add the following questions: - "How many technicians attended the meeting?" - "How many managers attended the meeting?" - Add the number question that will calculate the answers of the value questions entered. For example, you may add the following question: - "How many total people attended the meeting?" - Within the Defaultfield of the last number question (#3 in above example), enter your calculation using the fxIDof the value questions (this can be found on the question beside the files link). For example, you may type: ={33250}+{33251} - Click Save. Question Types which can be part of a formula The following question types can be in a formula's expression. Formula Behaviors on Question Types The following question types can have their answer set: Formula Operators Order of operations will be honored. Please Contact us to ask about more complex operations, we may already support it! =equal +add -subtract *multiply /divide Some other things to know about formulas. - All fxIDneed to be wrapped in curly braces: {}. Note that this is not parenthesis: () - The fxIDwill never change for a question, so if you move it around or delete it - it's consistent and unique to that question. - We find that it's easiest to create a form in its entirety before making the formulas. - Formulas can be tricky - so be patient and make sure to double-check your work.
http://docs.inspectall.com/article/151-add-formulas-to-questions
2017-12-11T02:19:06
CC-MAIN-2017-51
1512948512054.0
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/564a7e7890336002f86de0be/images/56705b14c697914361556e89/file-qUJBTUuKvX.png', None], dtype=object) ]
docs.inspectall.com
Overview of PowerShell Cmdlets for Always On Availability Groups (SQL Server) Microsoft PowerShell is a task-based command-line shell and scripting language designed especially for system administration. Always On availability groups provides a set of PowerShell cmdlets in SQL Server 2017 that enable you to deploy, manage, and monitor availability groups, availability replicas, and availability databases. Note A PowerShell cmdlet can complete by successfully initiating an action. This does not indicate that the intended work, such as the fail over of an availability group, has completed. When scripting a sequence of actions, you might have to check the status of actions, and wait for them to complete. Note For a list of topics in SQL Server 2017 Books Online that describe how to use cmdlets to perform Always On availability groups tasks, see the "Related Tasks" section of Overview of Always On Availability Groups (SQL Server). Configuring a Server Instance for Always On Availability Groups Backing Up and Restoring Databases and Transaction Logs For information about using these cmdlets to prepare a secondary database, see Manually Prepare a Secondary Database for an Availability Group (SQL Server). Creating and Managing an Availability Group Creating and Managing an Availability Group Listener Creating and Managing an Availability Replica Adding and Managing an Availability Database Monitoring Availability Group Health The following SQL Server cmdlets enable you to monitor the health of an availability group and its replicas and databases. Important You must have CONNECT, VIEW SERVER STATE, and VIEW ANY DEFINITION permissions to execute these cmdlets. *To view information about all of the availability replicas in an availability group, use to the server instance that hosts the primary replica. For more information, see Use Always On Policies to View the Health of an Availability Group (SQL Server). See Also Overview of Always On Availability Groups (SQL Server) Get Help SQL Server PowerShell
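For the health-monitoring cmdlets mentioned above, a typical invocation pattern looks like the following sketch; the server, instance, and availability group names are placeholders, and the commands are run against the server instance that hosts the primary replica:

# Evaluate the health of an availability group, its replicas, and its databases
$agPath = "SQLSERVER:\Sql\PrimaryServer\DEFAULT\AvailabilityGroups\MyAg"

Test-SqlAvailabilityGroup -Path $agPath
Get-ChildItem "$agPath\AvailabilityReplicas"  | Test-SqlAvailabilityReplica
Get-ChildItem "$agPath\DatabaseReplicaStates" | Test-SqlDatabaseReplicaState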
https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-powershell-cmdlets-for-always-on-availability-groups-sql-server
2017-12-11T02:17:13
CC-MAIN-2017-51
1512948512054.0
[array(['../../../includes/media/yes.png', 'yes'], dtype=object) array(['../../../includes/media/no.png', 'no'], dtype=object) array(['../../../includes/media/no.png', 'no'], dtype=object) array(['../../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
Examining Transcriptomics Data¶ I. Finding Experimental Data available in PATRIC¶ PATRIC has imported experimental data from several prominent genera, most notably Escherichia and Mycobacterium. To find this data, click on a genus of interest on the Organisms tab (or use the global search) and select a genus of interest. This will take you to the landing page for that genus. To see if any transcriptomic data is available, click on the Transcriptomics tab (red arrow). This will open a new page that shows all the experimental data available for that genus. II. Finding Specific Experiments to Examine¶ - PATRIC provides filter to help find specific experimental data. To find that, click on the Filters icon, which is located on the far right side of the page (red arrow). - This will open up a filter that is on top of the table. It allows researchers to filter on the experimental conditions, the type of mutant, the species, the strain, and if the experiment is a time series or not. - Clicking on any of the text in the filter will highlight it (red arrow). - This will re-filter the table to show all the experiments that have that particular tag. III. Examining a single experiment-Experiment filter¶ - Clicking on the check box in front of the name of an experiment will highlight it (red arrow) and will also give information about the selected experiment in the area beyond the vertical green bar. - Once an experiment is selected, the green bar becomes populated with downstream processes that you can do with that selection. In this case it includes downloading the information about the selection, creating a group that includes the selection, see the genes in that particular experiment, and viewing the experiment. Click on the Genes icon. - This opens a new tab that shows all the genes and conditions from that particular experiment. IV. Examining genes within an experiment: Gene filter¶ A filter on the right side of the gene table where the genes in each experimental condition can be filtered on up- or down-regulation. Clicking on one of the check boxes in front of a single experiment will re-filter the tables to meet that selection. Clicking on a checkbox above this list of experimental conditions will auto-select all the conditions together, re-filtering the table to show all the genes that are similarly expressed across all conditions. Many of the experiments examined more than one genome. The filter table also provides the means to select specific genomes used within the experiment. To do this, researchers need to click on the down arrow under Filter by Genome. This will show all the genomes used in the experiment. Clicking on one will re-filter the table to show only the results for that genome. A text box provides the ability to search for the names of specific genes. Entering text in that box and clicking on the Filter button will re-sort the table to show the genes that contained that word(s) in their functional description. Researchers can also filter on the level of expression of specific genes by selecting a log ratio or Z score, and then clicking on the Filter button. This will re-sort the table to show the number of genes that match that level of expression. V. Visualizing the genes and their expression on a heatmap¶ PATRIC also provides a way to visualize gene expression with a heatmap. Click on the Heatmap tab above the filter. This will open a page that shows all the conditions and gene expression levels that match the selections made on the filter. 
Several buttons on the upper right above the heatmap allow researchers to flip the axis, putting experimental conditions on the x-axis and the genes on the y-axis. The colors used in the heatmap can be changed. The genes can be clustered, which at the default Pearson correlation with pairwise average-linkage as the clustering type. There is also an advanced clustering method, and a way to see the significant genes, or all the genes in the genome. Clicking on advanced clustering open a pop-up window that allows researchers to cluster by genes, comparisons, or both. In addition, there are several clustering algorithms and clustering types to choose from. Clicking on any of the clustering algorithms will re-sort the heat map to group genes that are expressed similarly together across the conditions. VI. Getting the data about the genes from the heatmap visualization¶ To find further information on a group of genes in the heatmap, researchers can use their mouse to draw a box around genes, or patterns, of interest. This action will generate a pop-up window that provides several options for viewing the data, including download the genes, showing them, adding them to a group, or downloading the heatmap data. Clicking on Show Genes will open a new tab that shows the list of genes from the selection. Clicking on the check box next to Genome Name at the top of the table head will auto-select all the genes that where selected in the heatmap. Once any, or all the genes are selected, a list of downstream analysis tools will appear in the vertical green bar. These downstream tools or processes include the ability to download all the features in the table, go to the feature list view (Features icon), go to a view showing all the genomes that are the genes from the original selection belong to (Genomes icon), get the amino acid or nucleotide fasta files for all genes selected (FASTA icon), generate a multiple sequence alignment (MSA icon), map the PATRIC genes to other identifiers like UniProt (ID Map icon), find out any pathways that the selected genes belong to (Pathway icon), or to add the genes to an existing group, or create a new group with the selected genes (Group icon). If the Pathway Summary icon is selected, a new tab will open with a table showing this information. This may take some time, depending on the number of genes selected. A page with a spinning wheel will be displayed while PATRIC recovers the information. Once the table loads, a list of the pathways, the number of genes from the original selection, the number of genes in the genome that are part of the pathway, and the percent coverage (the selected genes as opposed to the total genes in the genome that are part of the pathway) are displayed. Clicking on an individual pathway open a list of downstream processes in the vertical green bar. Clicking on the pathway map icon will open a new page that shows a list of genes based on their EC number on the left, and a KEGG pathway map with the genes from that genome displayed on the right. Clicking on the legend at the top right of the page shows the genes with those specific EC numbers that are annotated in the genome (green box). The genes that were part of the original selection from the heatmap will be colored blue. VII. Examining multiple experiments¶ Researchers can compare more than one experiment for the same organism by clicking on the experiment titles in front of the name. 
This may take some time, so the spinning wheel will show that PATRIC is working to display the selected data. This opens a new tab that shows all the genes and conditions from the selected experiments. The same ability to filter by up- or down-regulation, genome name, or log-ratio/Z-score can be used by making a selection and clicking on the Filter button. The heatmap view shows the results of the filtering on the various experiments selected. Clicking on the Cluster icon will cluster the results across all experiments and conditions. The information on specific genes can be viewed by using the mouse to draw a box over the selection, as described above. VIII. Expression at the Gene level - Transcriptomics¶ - Genes can be found by global search, or by selecting a specific gene from a table (red arrow). This populates the vertical green band with possible downstream functions. Clicking on the Feature icon (blue arrow) will open a new tab that is the landing page for that feature. - Once on the gene landing page, clicking on the Transcriptomics tab will show if there is any available expression data for the gene in PATRIC. - If data is available, the Transcriptomics tab will first show two charts that show the levels of expression based on Log ratio or Z score across all the available data. A pie chart showing the strains, gene modifications, or experimental conditions is also presented. - Inserting text in the filter (red arrow) and clicking on the filter button will filter the conditions, showing the expression levels based on the experimental conditions that meet the search criteria. Filtering on the Log ratio (blue arrow) or Z score will also filter the results. - Clicking on the Table icon (red arrow) will rewrite the table to show a list of the conditions that the gene is expressed in. - Clicking on a specific condition (red arrow) will show the information about that condition in the box beyond the vertical green band. It will also populate the vertical green band with downstream processes that apply.
https://docs.patricbrc.org/tutorial/examining_transcriptomics_data/examining_transcriptomics_data.html
2017-12-11T02:19:32
CC-MAIN-2017-51
1512948512054.0
[]
docs.patricbrc.org
Exploring a private genome in PATRIC¶ Locating a privately annotated genome-Global Search¶ - Open the drop-down box using the arrow following All Data Types in the global search at the right of blue banner across any PATRIC page. - Enter the name of the genome you annotated. This can be as simple as using the name of the strain that you used (red arrow). - This opens the Search Results page that shows all the genomes that have the text that was used in the search somewhere in their name or description. If the text is specific, the genome should be easily identified. Click on the name of the genome of interest (red arrow) - This opens the landing page for the selected genome. Locating a privately annotated genome - Jobs page¶ - Click on Your Jobs found in the Workspaces tab. - This opens the jobs page that shows all the jobs that have been submitted. Clicking on a job of interest highlights to row, and populates the green information bar to the right with the View icon (red arrow). - Clicking on the View icon opens the Jobs Detail page, where a number of files with data and details about the specific annotation - A View icon is available at the top of the Jobs Detail page. - Clicking on the view icon opens the landing page for the genome that was chosen. - If that genome has sequences the correspond to antimicrobial resistance or susceptibility, that information will be at the top of the overview page for the genome. This will provide an indication of the antibiotics that the isolate is predicted to be susceptible or resistant to [1]. If a known antimicrobial resistance (AMR) phenotype has been entered as part of the metadata, that will also be available in the table at the top of this page. Tabs on the Genome Landing Page¶ AMR Phenotype¶ - Every landing page at PATRIC has a number of tabs across the top, each of which contains different information. Clicking on AMR Phenotype will show the antibiotics that the isolate is resistant to, differentiating between predicted and actual phenotypes. It includes minimum inhibitory concentration (MIC) values, a variety of columns that show different information on lab typing, and also testing standards. If a particular genome does not have the actual phenotype information, or does not have any predictions from the PATRIC AMR pipeline, this table will be empty. Phylogeny¶ - Clicking on phylogeny will show phylogenetic trees that span the taxonomic level of Order. - The trees do not show private genomes, but select genomes of high quality (either complete or with the fewest contigs, indicating a good assembly). Researchers can see the results as either a phylogram or a cladogram. Genome Browser¶ - Clicking on the Genome Browser tab will show the annotations on the JBrowse viewer [2]. Private genomes will only show the PATRIC annotation, but public genomes will also have RefSeq annotation available for comparison. Researchers can move along the contigs and zoom in and out of the annotation (red box), even to the level of seeing the nucleotide sequence with 6-frame translations. - If the genome is in multiple contigs, the browser will load the first contig. To explore different contigs, click on the down arrow (red arrow) that will open up a list of the available contigs. Scroll down that list to choose the contig of interest, and clicking on the name will reload the browser with that information. - Researchers can also upload other tracts, like RNA-Seq, to the genome browser to compare the annotations with expression data. 
Clicking on the File tab, and then Open underneath that (red arrow) will open a pop-up window where different files can be selected for display. Circular Viewer¶ - PATRIC also provides a circular view of the genome and annotations. All contigs are united into a single circle. Researchers can change the tracks they want to see by using the filter on the left-hand side, upload custom tracks or their own data, resize the image, and download it as a publication-quality scaled vector graphic (SVG). Mousing over genes in the circular view will generate a pop-up box that shows the location and functional description of the gene. - Double clicking on a particular gene will open up additional information about that gene. Sequences¶ - Clicking on the Sequences tab will show a list of all the contigs, their ID, length, %GC content, the sequence type, topology, and description. Clicking on the check box in front of a single sequence (red arrow) will display information about the particular sequence at the far right, and will also populate the vertical green bar with possible downstream actions that apply to the selected sequence. Features¶ - Clicking on the Features tab will show a list of all the genes in the genome. Clicking on the check box in front of a single gene will display information about the particular gene at the far right, and will also populate the vertical green bar with possible downstream actions that apply to the selected gene. - Mousing over the icons in the vertical green bar will generate a pop-up box that describes the downstream application. - Clicking on the filter above the table allows researchers to select different types of annotations, like the coding sequences (CDS), tRNA or rRNA genes. When public genomes are examined, researchers can also filter on RefSeq or PATRIC annotations. - Entering a specific keyword into the filter, and hitting return, will filter the table to show all the genes that have text matching that keyword. Specialty Genes¶ - Clicking on the Specialty Genes tab will show a list of the specialty genes in the genome. PATRIC BLASTs all the genes in the genome against specific databases that contain genes identified as virulence factors (the Virulence Factor Database [3,4], PATRIC virulence factors [5], and Victors, which is part of the PHIDIAS database [6]), genes involved in antibiotic resistance (the Antibiotic Resistance Database [7] and the Comprehensive Antibiotic Resistance Database [8]), genes that have been used as drug targets, and human homologs. The left side of this page contains a filter to narrow the results, and a table listing the results on the right. - Clicking on the check box in front of a single gene will display information about the particular gene at the far right, and will also populate the vertical green bar with possible downstream actions that apply to the selected gene. - Clicking on the filter above the table (red arrow) allows researchers to filter on the type of Evidence (private genomes will all have BLASTP as evidence), the Property (i.e. homology to known virulence factors, drug targets, antibiotic resistance genes, or human homologs), and the source of the information (the database that the genes in the genome were BLASTed against). - Direct evidence for the original gene, which is linked to the gene in the selection by homology, can be seen by selecting that gene by clicking the check box in front of its name.
This will populate the area beyond the green bar with all the information about that gene, and when available this will include a PubMed ID. This ID is a hyperlink. Clicking on the PubMed link will open up a new page that shows the paper(s) that are the basis of that evidence. Protein Families¶ - Clicking on the Protein Families tab will show a list of all the protein families in the genome. There is a filter on the left and a table on the right that lists the protein families, their IDs, Product Description, and statistics on the amino acid sequences contained in your selection. For individual genomes, these statistics will be limited. - To find specific protein families, enter a name that might describe a specific function of interest into the filter box on the left (red arrow) and then click the filter button (blue arrow). This will filter the results to show the protein families that match the search (red box). - Clicking on the check box in front of a single family (red arrow) will display information about the particular family at the far right, and will also populate the vertical green bar with possible downstream actions that apply to the selected family. - Mousing over the icons in the vertical green bar will generate a pop-up box that describes the downstream application. - PATRIC has three types of protein families. FIGFams [9] contain isofunctional homologs. Two new sets of protein families, called PATtyFams [10], are assembled from the function-based groups into families by use of k-mers and the Markov Cluster algorithm (MCL) [11, 12]. To select different protein families associated with the genome, click on the down arrow that follows the text box under Filter By. A drop-down box will appear where researchers can select the family type. Pathways¶ - PATRIC maps genes with functional evidence to the pathways they belong to. The source of pathway information comes from the Kyoto Encyclopedia of Genes and Genomes (KEGG) [13, 14], to which PATRIC maps protein data. - Clicking on the check box in front of a single pathway (red arrow) will display information about the particular pathway at the far right, and will also populate the vertical green bar with possible downstream actions that apply to the selected pathway. - Mousing over the icons in the vertical green bar will generate a pop-up box that describes the downstream application. Clicking on the Pathway Map will open a new tab that shows all the genes in that particular genome that have a role in the selected pathway. The page shows a list of the EC Numbers on the left side, and the KEGG pathway map on the right. Green boxes correspond to the EC numbers on the left side of the page (those genes that are actually annotated in the genome). White boxes indicate EC numbers that are not present in this genome. References¶ - Davis JJ, Boisvert S, Brettin T, Kenyon RW, Mao C, Olson R, Overbeek R, Santerre J, Shukla M, Wattam AR, Will R, Xia F, Stevens R. 2016. Antimicrobial Resistance Prediction in PATRIC and RAST. Sci Rep. 6:27930. - Skinner, M.E., et al., JBrowse: a next-generation genome browser. Genome Res, 2009. 19(9): p. 1630-8. - Chen, L., et al., VFDB: a reference database for bacterial virulence factors. Nucleic acids research, 2005. 33(suppl 1): p. D325-D328. - Chen, L., et al., VFDB 2016: hierarchical and refined dataset for big data analysis-10 years on. Nucleic Acids Res, 2016. 44(D1): p. D694-7. - Mao, C., et al., Curation, integration and visualization of bacterial virulence factors in PATRIC.
Bioinformatics, 2015. 31(2): p. 252-258. - Xiang, Z., Y. Tian, and Y. He, PHIDIAS: a pathogen-host interaction data integration and analysis system. Genome Biol, 2007. 8(7): p. R150. - Liu, B. and M. Pop, ARDB—antibiotic resistance genes database. Nucleic acids research, 2009. 37(suppl 1): p. D443-D447. - McArthur, A.G., et al., The comprehensive antibiotic resistance database. Antimicrobial agents and chemotherapy, 2013. 57(7): p. 3348-3357. - Meyer, F., R. Overbeek, and A. Rodriguez, FIGfams: yet another set of protein families. Nucleic acids research, 2009. 37(20): p. 6643-6654. - Davis, J.J., et al., PATtyFams: Protein Families for the Microbial Genomes in the PATRIC Database. Front Microbiol, 2016. 7: p. 118. - Enright, A.J., S. Van Dongen, and C.A. Ouzounis, An efficient algorithm for large-scale detection of protein families. Nucleic acids research, 2002. 30(7): p. 1575-1584. - van Dongen, S.M., Graph Clustering by Flow Simulation. 2001, University of Utrecht: Utrecht. - Kanehisa, M., et al., KEGG for linking genomes to life and the environment. Nucleic acids research, 2008. 36(suppl 1): p. D480-D484. - Kanehisa, M., et al., The KEGG resource for deciphering the genome. Nucleic acids research, 2004. 32(suppl 1): p. D277-D280.
https://docs.patricbrc.org/tutorial/private_genome/private_genome.html
2017-12-11T02:19:56
CC-MAIN-2017-51
1512948512054.0
[]
docs.patricbrc.org
Compatibility for clients generated from WSDL Review these guidelines for service namespaces. Specifying a Unique Namespace for each Table The property glide.wsdl.definition.use_unique_namespace ensures that each table's Direct Web Service WSDL has a unique targetNamespace attribute. This property is true by default, which requires a table's Direct Web Service WSDL to use a targetNamespace value that ends with the table name. When false (or when the property is not present), all tables use the same targetNamespace value. Since all tables also share the same operation names, a Web Service client attempting to consume more than one ServiceNow Web Service would be unable to differentiate requests between multiple tables. Using a unique targetNamespace value allows Web Services clients to distinguish requests between multiple tables. For example, the Direct Web Service WSDL for the incident table uses this targetNamespace value. <wsdl:definitions xmlns:<wsdl:types><xsd:schema Setting Namespace Requirements ServiceNow's WSDL schema by default declares an attribute of elementFormDefault="unqualified". However, this is incompatible with the way clients generated from WSDL (.NET, Axis2, webMethods, etc.) process the embedded schema: the schema namespace is removed as a result, making the web service response unparseable. To overcome this compatibility issue, a boolean property called glide.wsdl.schema.UnqualifiedElementFormDefault is available. This property is true by default; setting it to false makes clients generated from WSDL able to parse the return value of the web service invocation. You can modify this property using the Web Services properties page at System Properties > Web Services. By default, service names from dynamically generated WSDL are unique and have the following format: ServiceNow_<table name> To allow duplicate service names, administrators can set the glide.wsdl.unique_service_name property to false. Create the property if it does not exist.
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/integrate/inbound_soap/reference/compatibility-clients-generated-wsdl.html
2017-12-11T02:22:23
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
23-oct-2017: Dokuwiki and its plugins were upgraded to the latest release.
https://docs.slackware.com/slackware:external
2017-12-11T02:04:27
CC-MAIN-2017-51
1512948512054.0
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Shooting from a Camera Rig Crane One of the methods real-world filmmakers use to produce smooth, sweeping shots, is by attaching a camera to a crane and controlling the shot with the crane's movement. You can create similar shots in Sequencer with the use of the Camera Rig Crane Actor and an attached Camera. You can keyframe the crane's Pitch, Yaw, or the length of the Crane Arm, as well as Lock the Mounted Camera's Pitch or Yaw (which will follow the crane's movement). In this guide, we will add a Camera Rig Crane, attach a Camera, and create a sample Crane Shot as indicated in the example below: Steps For this how-to, we are using the Blueprint Third Person Template project with Starter Content enabled. In the Level Viewport of your project, select the ThirdPersonCharacter, then hold Alt, drag out, and rotate the copy so that it faces the existing character. These two characters are going to be the subject of our crane shot. From the Main Toolbar, click the Cinematics button, then select Add Level Sequence from the drop-down menu. From the Modes panel under Cinematic, drag a Camera Rig Crane into the level. From the Modes panel under Cinematic, drag a Cine Camera Actor into the level. In the World Outliner panel, drag the Cine Camera Actor on top of the Camera Rig Crane to attach it. This will attach the camera to the crane, allowing it to move where the crane moves. In the Details panel for the Cine Camera Actor, set the Location and Rotation values to 0.0. This will allow the camera to be attached to the Camera Rig Crane's mount position. Make any adjustments to the position of the Camera Rig Crane to set up your shot (below, the position is behind the character). Hold Ctrl and select the Camera Rig Crane and Cine Camera Actor (so that both are selected), then click Add in Sequencer and add both to the Level Sequence. Select the Camera Rig Crane, then in the Details panel, click the Add Key button for Crane Pitch, Crane Yaw and Crane Arm Length. This will set the default position of each to start the sequence. In Sequencer, select the Cine Camera Actor, then press the S key. This is a shortcut for adding a key for the current Transform values, initializing the camera's position. Scrub the Timeline to frame 50, changing the Crane Pitch value on the Camera Rig Crane to 40.0, and then click the Add Key button. If you move the Timeline back to 0 and scrub through to 50, you will see the crane move between the two keyframes (as seen below). Also included in the video (above), is how enabling the Lock Mount Pitch option affects the attached camera. With the option disabled, the camera maintains its set position. When enabling the option, the mounted camera follows the pitch of the crane and automatically changes as the crane moves. This option can be keyframed so that you can turn the option on/off through Sequencer based on your shot needs. With the Timeline at frame 50, select the Cine Camera Actor, and then press S to add another keyframe. Scrub the Timeline ahead to the end of the shot, change the Crane Yaw and Crane Arm Length to 75 and 600 respectively, then add keys for each. You should see the values shift between keyframes for the Crane Yaw and Crane Arm Length as you scrub through the Timeline. Also, similar to the Lock Mount Pitch, the Lock Mount Yaw option allows you to force the mounted camera to be locked to the yaw position of the crane. If you enable both Lock Mount Pitch and Lock Mount Yaw, the mounted camera will pitch and yaw in the same direction of the crane. 
Rotate the camera in the Level Viewport to frame up the two characters, then press S to add a keyframe for the position. In Sequencer, click the Camera Lock button on the Cine Camera Actor track. This will lock the viewport to the perspective of the camera and allow us to view what the shot will look like when using this camera. End Result You should have something similar to the following video, where the camera movement follows the path defined by your crane. In the video, we lock the viewport to our camera and turn on Game View (G key) to hide all editor icons to give us an idea of what the shot will look like. In this how-to, we added the Camera Rig Crane to the level, then added a Cine Camera Actor and manually attached it to the crane. There is an advanced method you can use when adding a Camera Rig Crane asset to a level that will automate several of the steps from this how-to, which will automatically add and attach a Cine Camera Actor to your Camera Rig Crane. It will also add both assets to an open Level Sequence, assign a Camera Cuts Track, and assign the camera to use for the shot. Please see the Workflow Shortcuts Sequencer documentation for the automated method and other Sequencer tips.
https://docs.unrealengine.com/latest/INT/Engine/Sequencer/HowTo/CameraRigCrane/
2017-12-11T02:04:54
CC-MAIN-2017-51
1512948512054.0
[]
docs.unrealengine.com
A Foundation Theme for Django Projects¶ This version supports Foundation 5¶ - We support the latest version of Foundation (5.2.1) - Updated the documentation - The project now supports the --template argument for the django-admin.py startproject command - Removed more Pinax 0.9 dependencies; however, the project should still be compatible with the other apps in the Pinax eco-system such as django-user-account Getting Started¶ Start by creating a new virtualenv for your project and install Django 1.6.2:
mkvirtualenv mysite
pip install Django==1.6.2
http://pinax-theme-foundation.readthedocs.io/en/latest/
2017-12-11T01:47:24
CC-MAIN-2017-51
1512948512054.0
[]
pinax-theme-foundation.readthedocs.io
What is InspectAll? Imagine capturing all of your inspections, audits, and forms with an App that automatically builds a powerful database you control. InspectAll gives you a framework built on industry best practices to take your process mobile and utilize powerful insights about the information coming in from the field. Here’s a short list of the types of information you can gather and track using InspectAll. - Customer Management - Facilities Maintenance - Plant Walkthroughs - Ergonomics & Workplace Conditions - OSHA Mock Audits - OSHA Inspection Procedures - Training Program Documentation - Risk Assessments - Job Safety / Hazard Analysis (JSA's, JHA's) - Accident / Incident Investigation - Personal Protective Equipment (PPE) - Work Zone Safety - Safety Checklists - Heavy Equipment Inspections - Lockout Tag-out - Material Handling & Storage
http://docs.inspectall.com/article/20-what-is-inspectall
2017-12-11T02:22:19
CC-MAIN-2017-51
1512948512054.0
[]
docs.inspectall.com
External Keypad support The EM2000 module supports both matrix and binary keypads. A typical matrix keypad is shown on the schematic diagram below: Due to the flexible scan and return line mapping provided by the keypad (kp.) object, you can assign any combination of GPIO lines to connect to your keypad. Up to 8 scan and 8 return lines can be assigned. On the EM2000 module, all scan lines must be configured as outputs, and all return lines as inputs. To build a keypad you will need to have at least one return line. A sensible count of scan lines, however, starts from two! Having a single scan line is like having no scan lines whatsoever; you might just as well ground this single scan line, i.e. always keep it active: Scan lines can optionally perform the second function of driving LEDs. One such LED can be connected to each scan line, preferably through a buffer, as shown on the diagram below. These LEDs can be used for any purpose you desire, and this purpose can be completely unrelated to the keypad itself. If the LEDs are connected as shown on the diagram, you will turn them ON by setting their corresponding control lines LOW. Binary keypads (i.e. "keypads that output binary key codes") do not require scanning; they contain a (typically microcontroller-based) circuit that performs the scanning and outputs encoded binary codes of pressed keys. Such keypads are sometimes called "encoded keypads": The EM2000 can work with binary keypads incorporating up to 8 data lines. For more information see the I/O (io.) and keypad (kp.) objects. They are documented in the "Programmable Hardware Manual".
https://docs.tibbo.com/phm/em2000_keypad
2020-11-23T19:21:34
CC-MAIN-2020-50
1606141164142.1
[array(['kp_standard_config.png', 'kp_standard_config'], dtype=object) array(['kp_no_scan.png', 'kp_no_scan'], dtype=object) array(['kp_add_led.png', 'kp_add_led'], dtype=object) array(['kp_binary.png', 'kp_binary'], dtype=object)]
docs.tibbo.com
POP3SecureSocket.TopLinesReceived From Xojo Documentation Event POP3SecureSocket.TopLinesReceived(Index as Integer,Data as EmailMessage) Supported for all project types and targets. Executes in response to a call to RetrieveLines. Index contains the index number of the partial message being retrieved and Data contains the requested lines of the message.
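A minimal handler, sketched here in a hypothetical POP3SecureSocket subclass after a call to RetrieveLines, might simply log what arrived; the DebugLog calls and the use of EmailMessage.BodyPlainText are illustrative and not part of the documented signature:
Sub TopLinesReceived(Index as Integer, Data as EmailMessage)
  ' Fires once the lines requested via RetrieveLines have arrived.
  System.DebugLog("Top lines received for message " + Str(Index))
  System.DebugLog(Data.BodyPlainText)
End Sub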
http://docs.xojo.com/POP3SecureSocket.TopLinesReceived
2020-11-23T20:03:04
CC-MAIN-2020-50
1606141164142.1
[]
docs.xojo.com
Circuitpython on STM32¶ This port brings the ST Microelectronics STM32 series of MCUs to Circuitpython. STM32 chips have a wide range of capability, from <$1 low power STM32F0s to dual-core STM32H7s running at 400+ MHz. Currently, only the F4, F7, and H7 families are supported, powered by the ARM Cortex M4 and M7 processors. Refer to the ST Microelectronics website for more information on features sorted by family and individual chip lines: st.com/en/microcontrollers-microprocessors/stm32-high-performance-mcus.html STM32 SoCs vary product-by-product in clock speed, peripheral capability, pin assignments, and their support within this port. Refer to mpconfigport.mk for a full list of enabled modules sorted by family. How this port is organized:¶ - boards/ contains the configuration files for each development board and breakout available on the port, as well as system files and both shared and SoC-specific linker files. Board configuration includes a pin mapping of the board, oscillator information, board-specific build flags, and setup for OLED or TFT screens where applicable. - common-hal/ contains the port-specific module implementations, used by shared-module and shared-bindings. - packages/ contains package-specific pin bindings (LQFP100, BGA216, etc) - peripherals/ contains peripheral setup files and peripheral mapping information, sorted by family and sub-variant. Most files in this directory can be generated with the python scripts in tools/. - st-driver/ submodule for ST HAL and LL files generated via CubeMX. Shared with TinyUSB. - supervisor/ contains port-specific implementations of internal flash, serial and USB, as well as the port.c file, which initializes the port at startup. - tools/ python scripts for generating peripheral and pin mapping files in peripherals/ and board/. At the root level, refer to mpconfigboard.h and mpconfigport.mk for port specific settings and a list of enabled modules. Build instructions¶ Ensure your clone of Circuitpython is ready to build by following the guide on the Adafruit Website. This includes installing the toolchain, synchronizing submodules, and running mpy-cross. Once the one-time build tasks are complete, you can build at any time by navigating to the port directory: $ cd ports/stm To build for a specific circuitpython board, run: $ make BOARD=feather_stm32f405_express You may also build with certain flags available in the makefile, depending on your board and development goals. The following flags would enable debug information and correct flash locations for a pre-flashed UF2 bootloader: $ make BOARD=feather_stm32f405_express DEBUG=1 UF2_BOOTLOADER=1 USB connection¶ Connect your development board of choice to the host PC via the USB cable. Note that for most ST development boards such as the Nucleo and Discovery series, you must use a secondary OTG USB connector to access circuitpython, as the primary USB connector will be connected to a built-in ST-Link debugger rather than the chip itself. In many cases, this ST-Link USB connector will still need to be connected to power for the chip to turn on - refer to your specific product manual for details. Flash the bootloader¶ Most ST development boards come with a built-in STLink programming and debugging probe accessible via USB. This programmer may show up as an MBED drive on the host PC, enabling simple drag and drop programming with a .bin file, or they may require a tool like OpenOCD or StLink-org/stlink to run flashing and debugging commands. 
Many hobbyist and 3rd party development boards also expose SWD pins. These can be used with a cheap stlink debugger or other common programmers. For non-ST products or users without a debugger, all STM32 boards in the high performance families (F4, F7 and H7) include a built-in DFU bootloader stored in ROM. This bootloader is accessed by ensuring the BOOT0 pin is held to a logic 1 and the BOOT1 pin is held to a logic 0 when the chip is reset (ST Appnote AN2606). Most chips hold BOOT low by default, so this can usually be achieved by running a jumper wire from 3.3V power to the BOOT0 pin, if it is exposed, or by flipping the appropriate switch or button as the chip is reset. Once the chip is started in DFU mode, BOOT0 no longer needs to be held high and can be released. An example is available in the Feather STM32F405 guide. Windows users will need to install stm32cubeprog, while Mac and Linux users will need to install dfu-util with brew install dfu-util or sudo apt-get install dfu-util. More details are available in the Feather F405 guide. Flashing the circuitpython image with DFU-Util¶ Ensure the board is in dfu mode by following the steps in the previous section. Then run: $ make BOARD=feather_stm32f405_express flash Alternatively, you can navigate to the build directory and run the raw dfu-util command: dfu-util -a 0 --dfuse-address 0x08000000 -D firmware.bin Accessing the board¶ Connecting the board to the PC via the USB cable will allow code to be uploaded to the CIRCUITPY volume. Circuitpython exposes a CDC virtual serial connection for REPL access and debugging. Connecting to it from OSX will look something like this: screen /dev/tty.usbmodem14111201 115200 You may also use a program like mu to assist with REPL access.
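Once the CIRCUITPY drive appears, a short code.py can confirm that the board is running. This is only a minimal sketch and assumes the board definition exposes an onboard LED as board.LED (use dir(board) in the REPL to check the actual pin names):
# code.py - minimal sanity check for a freshly flashed STM32 board
import time
import board
import digitalio

led = digitalio.DigitalInOut(board.LED)  # assumes the board defines board.LED
led.direction = digitalio.Direction.OUTPUT

while True:
    led.value = True   # LED on
    time.sleep(0.5)
    led.value = False  # LED off
    time.sleep(0.5)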
https://circuitpython.readthedocs.io/en/6.0.x/ports/stm/README.html
2020-11-23T18:51:58
CC-MAIN-2020-50
1606141164142.1
[]
circuitpython.readthedocs.io
Under Catalogues > Products you will get a list of your created products with the most important information. By clicking on the respective column header you can change the sorting of the products. Next to the individual column captions you will find a button which you can use to hide the column. In the overview you can find the most important information about your products at a glance. The standard overview is structured as follows: Active: Indicates whether the item is active and can be used in the shop. Name: The name of the product, which is used as a heading on the product detail page, for example. Product number: The unique product number. Price: Displays the price for the default customer group. In stock: Displays the current stock, plus colored information (0 = red; 1-25 = yellow; >25 = green). Manufacturer: Name of the product manufacturer. In the header line you will find a button on the right side to adjust the list settings. Here you can (de-)activate the compact mode. This removes the larger line spacing so that more lines can be displayed without scrolling. You can also hide and show columns here and set the order of the columns using the two buttons to the right of each column. On the right side you can use the button "..." to open the context menu for the respective product and get access to further functions. Edit: Click here to open the edit screen of the product. You can find further information on the individual functions of all fields in the article Add a new product. Delete: If you no longer need the product, you can delete it by clicking here. Please note that products that are already included in orders will continue to be listed as items in the order even after the product is deleted, but will refer to a non-existent record. Therefore we recommend not deleting such products, but deactivating them instead.
https://docs.shopware.com/en/shopware-6-en/catalogues/product-overview?version=1.0.0&category=shopware-6-en/catalogues
2020-11-23T19:52:49
CC-MAIN-2020-50
1606141164142.1
[]
docs.shopware.com
To download an AssetBundle from a remote server, you can use UnityWebRequest.GetAssetBundle. This function streams data into an internal buffer, which decodes and decompresses the AssetBundle’s data on a worker thread. The function’s arguments take several forms. In its simplest form, it takes only the URL from which the AssetBundle should be downloaded. You may optionally provide a checksum to verify the integrity of the downloaded data. Alternately, if you wish to use the AssetBundle caching system, you may provide either a version number or a Hash128 data structure. These are identical to the version numbers or Hash128 objects provided to the old system. The function creates a UnityWebRequest and sets the target URL to the supplied URL argument. It also sets the HTTP verb to GET, but sets no other flags or custom headers. It attaches a DownloadHandlerAssetBundle to the UnityWebRequest. This download handler has a special assetBundle property, which can be used to extract the AssetBundle once enough data has been downloaded and decoded to permit access to the resources inside the AssetBundle. If you provide a version number or Hash128 object as arguments, the function also passes those arguments to the DownloadHandlerAssetBundle. The download handler then employs the caching system. Alternatively, you can extract the downloaded AssetBundle with the DownloadHandlerAssetBundle.GetContent helper getter: IEnumerator GetTexture() { UnityWebRequest www = UnityWebRequest.GetAssetBundle(""); yield return www.Send(); AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(www); }
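For context, a complete download coroutine might look like the following sketch. The URL, asset name, and MonoBehaviour wrapper are placeholders, and the error check simply inspects www.error rather than assuming a particular error-flag API:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class AssetBundleDownloader : MonoBehaviour
{
    // Placeholder URL - replace with the address of your own bundle.
    const string bundleUrl = "https://example.com/bundles/mybundle";

    IEnumerator Start()
    {
        UnityWebRequest www = UnityWebRequest.GetAssetBundle(bundleUrl);
        yield return www.Send(); // SendWebRequest() in later Unity versions

        if (!string.IsNullOrEmpty(www.error))
        {
            Debug.LogError("Bundle download failed: " + www.error);
            yield break;
        }

        // Extract the AssetBundle from the download handler.
        AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(www);

        // Load an asset from the bundle (the asset name is a placeholder).
        GameObject prefab = bundle.LoadAsset<GameObject>("MyPrefab");
        if (prefab != null)
            Instantiate(prefab);

        bundle.Unload(false);
    }
}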
https://docs.unity3d.com/ru/2017.1/Manual/UnityWebRequest-DownloadingAssetBundle.html
2020-11-23T20:03:53
CC-MAIN-2020-50
1606141164142.1
[]
docs.unity3d.com
Block¶ Description¶ The block content type allows you to group an arbitrary amount of other content types. Each block can define multiple types with different content types included. These blocks can then be repeated and ordered by the content manager in the Sulu-Admin. A quite common use case is to combine a text editor with a media selection. This way a text can be directly linked to an image via the assignment to the same block. This approach has its biggest benefit over putting images into the text editor when used in combination with responsive design. When using multiple content types in a block the template developer has the freedom to place the image where and in which format it makes sense. In contrast, adding images to the text editor would make it quite hard to adapt the format and placement in the twig template. Example¶ Please note that the configuration of the block content type differs from the other content types. Instead of a property-tag a block-tag is used. The default-type-attribute is mandatory and describes which of the types is used by default. The other essential element is the types-tag, which contains multiple type-tags. A type defines some titles and its containing properties, whereby all the content types available in the Content Type Reference (except the block itself, since we do not support nesting) can be used. These types are offered to the content manager via dropdown. If collapsed, the system will show the content of three properties in the block by default, in order to give the content manager an idea of which block they are seeing. The sulu.block_preview tag can be used to manually choose which properties should be shown as a preview in collapsed blocks. These tags additionally take a priority attribute, which can alter the order of the property previews. The example only shows a single type, combining a media selection with a text editor as described in the description. <block name="blocks" default- <meta> <title lang="de">Inhalte</title> <title lang="en">Content</title> </meta> <types> <type name="editor_image"> <meta> <title lang="de">Editor mit Bild</title> <title lang="en">Editor with image</title> </meta> <properties> <property name="images" type="media_selection" colspan="3"> <meta> <title lang="de">Bilder</title> <title lang="en">Images</title> </meta> <tag name="sulu.block_preview" priority="512"/> </property> <property name="article" type="text_editor" colspan="9"> <meta> <title lang="de">Artikel</title> <title lang="en">Article</title> </meta> <tag name="sulu.block_preview" priority="1024"/> </property> </properties> </type> </types> </block> Twig¶ A reusable way of rendering blocks is to have a separate template file per type: {% for block in content.blocks %} {% include 'includes/blocks/' ~ block.type ~ '.html.twig' with { content: block, view: view.blocks[loop.index0], } %} {% endfor %} This way, it is possible to access the properties of the block type via the content and view variables in the rendered block template.
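As an illustration, a hypothetical includes/blocks/editor_image.html.twig for the editor_image type above could render the two properties of the block; how the selected media is resolved into image markup depends on your media setup, so that part is left as a placeholder:

{# includes/blocks/editor_image.html.twig - sketch for the "editor_image" type #}
<div class="block block--editor-image">
    {# "article" is the text_editor property defined in the block type #}
    <div class="block__text">
        {{ content.article|raw }}
    </div>
    {# "images" holds the media_selection; iterate it and output your own <img> markup here #}
    {% if content.images is defined %}
        <div class="block__images">
            {# render the selected media according to your media setup #}
        </div>
    {% endif %}
</div>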
https://docs.sulu.io/en/2.1/reference/content-types/block.html
2020-11-23T18:46:14
CC-MAIN-2020-50
1606141164142.1
[]
docs.sulu.io
lcd.Redbits R/O Property Details This read-only property tells you how finely the red "channel" can be adjusted. For example, if the high byte is equal to 5, then there are 32 levels for red. This property also tells you the bit position and length of the red field; in this example, the red field is the first field and occupies 5 bits (4-0).
https://docs.tibbo.com/taiko/lcd_redbits
2020-11-23T20:04:30
CC-MAIN-2020-50
1606141164142.1
[]
docs.tibbo.com
This section provides information about how to install and update the Open edX developer stack (Devstack). Before you install Devstack, make sure that you have met the installation prerequisites. To install Devstack, follow these steps. Decide which branch you will be working with. “master” is the latest code in the repositories, changed daily. Open edX releases are more stable, for example, Hawthorn. Clone the edx/devstack repository from GitHub. git clone https://github.com/edx/devstack.git Navigate to the devstack directory cd devstack If you are not using the master branch, check out the branch you want. git checkout open-release/hawthorn.master If you are not using the master branch, define an environment variable for the Open edX version you are using, such as hawthorn.master or zebrawood.rc1. Note that unlike a server install, the value of the OPENEDX_RELEASE variable should not use the open-release/ prefix. export OPENEDX_RELEASE=hawthorn.master Run make dev.checkout to check out the correct branch in the local repositories. make dev.checkout Clone the Open edX service repositories. The Docker Compose file mounts a host volume for each service’s executing code. The host directory defaults to be a sibling of the /devstack directory. For example, if you clone the edx/devstack repository to ~/workspace/devstack, host volumes will be expected in ~/workspace/course-discovery, ~/workspace/ecommerce, etc. You can clone these repositories with the following command. make dev.clone To customize where the local repositories are found, set the DEVSTACK_WORKSPACE environment variable. (macOS only) Share the cloned service directories in Docker, using Docker -> Preferences -> File Sharing in the Docker menu. Run the provision command to configure the various services with superusers (for development without the auth service) and tenants (for multi-tenancy). Note When you run the provision command, databases for ecommerce and edxapp will be dropped and recreated. Use the following default provision command. make dev.provision The default username and password for the superusers are both edx. You can access the services directly using Django admin at the /admin/ path, or log in using single sign-on. When you have completed these steps, see Starting the Open edX Developer Stack to begin using Devstack. For help with running Devstack, see Troubleshooting Devstack. EdX publishes new images for Open edX services frequently. After you have installed and started Devstack, you can update your Devstack installation to use the most up-to-date versions of the Devstack images by running the following sequence of commands. make down make pull make dev.up This stops any running Devstack containers, pulls the latest images, and then starts all of the Devstack containers.
https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-hawthorn.master/installation/install_devstack.html
2020-11-23T19:52:46
CC-MAIN-2020-50
1606141164142.1
[]
edx.readthedocs.io
A list of scheduled actions to be described. If this list is omitted, all scheduled actions are described. The list of requested scheduled actions cannot contain more than 50 items. If an auto scaling group name is provided, the results are limited to that group. If unknown scheduled actions are requested, they are ignored with no error.AWSSDK (Module: AWSSDK) Version: 1.5.60.0 (1.5.60.0)
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/P_Amazon_AutoScaling_Model_DescribeScheduledActionsRequest_ScheduledActionNames.htm
2020-11-23T20:10:54
CC-MAIN-2020-50
1606141164142.1
[]
docs.aws.amazon.com
ggez A Rust library to create a Good Game Easily. It is built on SDL2, and aims to implement an API quite similar to (a simplified version of) the Love2D game engine. This means it will contain basic and portable drawing and sound, resource loading and event handling. It's not meant to be everything to everyone, but rather a good base upon which to build. However, eventually there should be a ggez-goodies crate that implements higher-level systems atop this, such as a resource cache, basic GUI/debugger, scene manager, and more sophisticated drawing tools such as sprites, layered and tiled maps, etc. Features - Filesystem abstraction that lets you load resources from folders or zip files - Hardware-accelerated rendering of bitmaps - Playing and loading sounds through SDL2_mixer - TTF font rendering through SDL2_ttf, as well as (eventually) bitmap fonts. - Interface for handling keyboard and mouse events easily through callbacks - Config file for defining engine and game settings - Easy timing and time-tracking functions. Examples See example/imageview.rs. To run, you have to copy (or symlink) the resources directory to a place the running game can find it. Cargo does not have an easy way of doing this itself at the moment, so the procedure is (on Linux): cargo build --example imageview cp -R resources target/debug/ cargo run --example imageview Either way, if it can't find the resources it will give you an error along the lines of ResourceNotFound("'resources' directory not found! Should be in "/home/foo/src/ggez/target/debug/resources"). Just copy the resources directory to where the error says it's looking. Status - Need to add more tests, somehow - Need to figure out exiting cleanly. THIS IS SOLVED, but blocked by a bug in rust-sdl! Issue #530. - Do we want to include Love2D's graphics transform functions? ...probably not, honestly. Things to add atop it - Resource loader/cache - Scene stack - GUI - particle system (or put that in with it like LOVE?) (No, in LOVE it's part of the main engine 'cause you can't efficiently implement it as a module) - Input indirection layer and input state tracking - Sprites with ordering, animation, atlasing, tile mapping, etc. Future work It would be nice to have a full OpenGL-y backend like Love2D does, with things like shaders, render targets, etc. gfx might be the best option there, maaaaaaybe. Right now the API is mostly limited to Love2D 0.7 or so. Using OpenAL (through the ears crate perhaps?) for sound would get us positional audio too. Useful goodies - specs for entity-component system (alternatives: ecs or recs crates) - cgmath or vecmath for math operations? - physics/collision???
https://docs.rs/crate/ggez/0.1.0
2020-11-23T19:17:34
CC-MAIN-2020-50
1606141164142.1
[]
docs.rs