Dataset schema:
- content: string (lengths 0 to 557k)
- url: string (lengths 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (lengths 9 to 15)
- segment: string (lengths 13 to 17)
- image_urls: string (lengths 2 to 55.5k)
- netloc: string (lengths 7 to 77)
Date: Tue, 17 Nov 2009 20:35:15 +0200 From: Manolis Kiagias <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: where to find libintl.so.8 Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> Len Conrad wrote: > FreeBSD 6.2-RELEASE FreeBSD 6.2-RELEASE #0 > > portsnap'd today > > running ver 1.2.8 of > > rdiff-backup > > which gets: > > ImportError: Shared object "libintl.so.8" not found, required by "librsync.so.1" > > thanks > Len > This is installed by the devel/gettext port. It is probably installed in your machine (most ports depend on it) but something may have gone wrong during a portupgrade. /usr/ports/UPDATING states the following for gettext upgrades: As a result of the upgrade to gettext-0.17, the shared library version of libintl has changed, so you will need to rebuild all ports that depend on gettext: # portupgrade -rf gettext # portmaster -r gettext I suggest you try one of these commands. (Check with 'pkg_info -Ix gettext' first to see what gettext you are running)
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=510674+0+archive/2009/freebsd-questions/20091122.freebsd-questions
2021-09-17T05:13:49
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
InterSystems Documentation Website Cookies For information on how to manage your cookie preferences on InterSystems websites, go to Cookies Settings. This is subject to change without notice. Information gathered from third party services may be limited or prevented through browser configuration (examples include “Do Not Track” mode in certain browsers) and managing your privacy preferences on the third party sites (such as disabling “Promoted content”). If you would like your information to be suppressed from future communications or would like to provide/update your contact information, please contact our marketing privacy coordinator at [email protected] or call (617) 621-0600.
https://docs.intersystems.com/website-cookies.html
2021-09-17T03:50:21
CC-MAIN-2021-39
1631780054023.35
[]
docs.intersystems.com
Release Notes for MongoDB 1.4¶ Upgrading¶ We're pleased to announce the 1.4 release of MongoDB. 1.4 is a drop-in replacement for 1.2. To upgrade, you just need to shut down mongod, then restart with the new binaries. (Users upgrading from release 1.0 should review the 1.2 release notes, in particular the instructions for upgrading the DB format.) Release 1.4 includes the following improvements over release 1.2: Core Server Enhancements¶ - concurrency improvements - indexing memory improvements - background index creation - better detection of regular expressions so the index can be used in more cases Replication and Sharding¶ Deployment and Production¶ - configure "slow threshold" for profiling - ability to do fsync + lock for backing up raw files - option for separate directory per db (--directoryperdb) - get serverStatus via http - REST interface is off by default for security (--rest to enable) - can rotate logs with a db command, logRotate - enhancements to serverStatus command (db.serverStatus()) - counters and replication lag stats - new mongostat tool Query Language Improvements¶ Geo¶ - 2d geospatial search - geo $center and $box searches
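The notes above mention the serverStatus and logRotate database commands; the snippet below is an illustrative sketch (not part of the original release notes) showing how those commands can be issued from Python with a modern pymongo client, assuming a mongod instance is reachable on localhost.

```python
# Illustrative only: issues the serverStatus and logRotate admin commands
# mentioned in the release notes. Assumes a local mongod and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# serverStatus returns counters and (on replica sets) replication lag statistics.
status = client.admin.command("serverStatus")
print(status["host"], status["version"])

# logRotate rotates the server log, equivalent to db.adminCommand({logRotate: 1}).
client.admin.command("logRotate")
```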
https://docs.mongodb.com/v5.0/release-notes/1.4/
2021-09-17T03:45:30
CC-MAIN-2021-39
1631780054023.35
[]
docs.mongodb.com
Upgrade Guide - Scylla Enterprise 2021.x.y to 2021.x.z for Red Hat Enterprise 7, 8 or CentOS 7, 8¶ This document is a step by step procedure for upgrading from Scylla Enterprise 2021.x.y to 2021.x.z. Applicable versions¶ This guide covers upgrading Scylla Enterprise from the following versions: 2021.x.y to 2021.x.z, on the following platforms: Red Hat Enterprise Linux, version 7 and later CentOS, version 7 and later Packages are no longer provided for Fedora. Upgrade Procedure¶ A Scylla upgrade is a rolling procedure which does not require full cluster shutdown. Perform the following steps for each of the nodes in the cluster, serially (i.e. one at a time). Stop Scylla¶ sudo systemctl stop scylla-server Download and install the new release¶ Before upgrading, check what version you are running now using rpm -qa | grep scylla. Update the Scylla RPM repo to 2021.x, then install: sudo yum clean all sudo yum update scylla\* -y Start the node¶ sudo systemctl start scylla-server Rollback Procedure¶ For each node being rolled back, serially: Drain the node and stop Scylla¶ Downgrade to previous release¶ Install: sudo yum downgrade scylla\*-2021.x.y-\* -y Restore the configuration file¶ sudo rm -rf /etc/scylla/scylla.yaml sudo cp -a /etc/scylla/scylla.yaml.backup-2021.x.z /etc/scylla/scylla.yaml Start the node¶ sudo systemctl start scylla-server Validate¶ Check the upgrade instructions above for validation. Once you are sure the node rollback is successful, move to the next node in the cluster.
https://docs.scylladb.com/upgrade/upgrade-enterprise/upgrade-guide-from-2021.x.y-to-2021.x.z/upgrade-guide-from-2021.x.y-to-2021.x.z-rpm/
2021-09-17T03:32:05
CC-MAIN-2021-39
1631780054023.35
[]
docs.scylladb.com
Upgrade Guide - Scylla 4.4 to 4.5 for Ubuntu 20.04¶ Platform: Ubuntu 20.04. A Scylla upgrade is a rolling procedure which does not require full cluster shutdown. Check cluster schema¶ Make sure that all nodes have the schema synched prior to the upgrade; the upgrade will not survive a schema disagreement between nodes. If you are not running a 4.4.x version, stop right here! This guide only covers 4.4.x to 4.5.y upgrades. To upgrade: Update the Scylla deb repo to 4.5. Install the new packages: sudo apt-get clean all Start the node and validate that it is running 4.5. Scylla rollback is a rolling procedure which does not require full cluster shutdown. For each of the nodes you roll back to 4.4, you will: Drain the node and stop Scylla Retrieve the old Scylla packages Restore the configuration file Restore system tables Reload systemd configuration Update the deb repo to 4.4.
https://docs.scylladb.com/upgrade/upgrade-opensource/upgrade-guide-from-4.4-to-4.5/upgrade-guide-from-4.4-to-4.5-ubuntu-20-04/
2021-09-17T04:52:44
CC-MAIN-2021-39
1631780054023.35
[]
docs.scylladb.com
About the TOM Toolkit¶ What’s a TOM?¶ It stands for Target and Observation Manager, and its a software package designed to facilitate astronomical observing projects and collaborations. Though useful for a wide range of projects, TOM systems are particularly important for programs with a large number of potential targets and/or observations. TOM systems perform some or all of these functions: Harvest target alerts, or upload catalogs of targets of interest to the project science goals. Search and cross-match additional information on targets from catalogs and archives. Store information from the project’s own analysis of the targets, related data and observations. Provide informative displays of the targets, data and observing program. Provide flexible search capabilities on parameters that are relevant to the science. Provide tools to plan appropriate observations. Enable observations to be requested from telescope facilities. Receive information about the status of observation requests. Harvest data obtained as a result of their observing requests. Facilitate the sharing of information and data. Motivation for a TOM Toolkit¶ Many projects, from several branches of astronomy, have found it necessary to develop TOM systems. Current examples include the PTF Marshall and NASA’s ExoFOP, as well as those customized for the LCO Network: SNEx, NEO Exchange and RoboNet. These tools provide capabilities which enable the projects to identify and evaluate high priority targets in good time to plan and conduct suitable observations, and to analyze the results. These capabilities have proven to be essential for existing projects to keep track of their observing program and to achieve their scientific goals. They are likely to become increasingly vital as next generation surveys produce ever-larger and more rapidly-evolving target lists. However, designing the existing TOM systems required high levels of expertise in database and software development that are not common among astronomers. No two TOM systems are identical, as astronomers strongly prefer to directly control the science-specific aspects of their projects such as target selection, observation templates and analysis techniques. At the same time, while all of these systems are customized for the science goals of the projects they support, much of their underlying infrastructure and functions are very similar. What’s needed is a software package that lets astronomers easily build a TOM, customized to suit the needs of their project, without becoming an IT expert or software engineer. Financial Support¶ The TOM Toolkit has been made possible through generous financial support from the Heising-Simons Foundation and the Zegar Family Foundation.
https://tom-toolkit.readthedocs.io/en/stable/introduction/about.html
2021-09-17T04:18:03
CC-MAIN-2021-39
1631780054023.35
[array(['../_images/hs.jpg', '../_images/hs.jpg'], dtype=object) array(['../_images/zff.png', '../_images/zff.png'], dtype=object)]
tom-toolkit.readthedocs.io
7. Finite state machine¶ 7.1. Introduction¶ In previous chapters, we saw various examples of combinational and sequential circuits. In combinational circuits, the output depends on the current values of the inputs only; whereas in sequential circuits, the output depends on the current values of the inputs along with previously stored information. In other words, storage elements, e.g. flip-flops or registers, are required for sequential circuits. The information stored in these elements can be seen as the states of the system. If a system transits between a finite number of such internal states, then a finite state machine (FSM) can be used to design the system. In this chapter, various finite state machines are discussed along with examples. Further, please see the SystemVerilog designs in Chapter 10, which provides better ways of creating FSM designs as compared to Verilog. 7.2. Comparison: Mealy and Moore designs¶ An FSM design is known as a Moore design if the output of the system depends only on the states (see Fig. 7.1); whereas it is known as a Mealy design if the output depends on the states and the external inputs (see Fig. 7.2). Further, a system may contain both types of designs simultaneously. Note Following are the differences between Mealy and Moore designs, - In a Moore machine, the outputs depend on the states only, therefore it is a 'synchronous machine' and the output is available after 1 clock cycle, as shown in Fig. 7.3. Whereas, in a Mealy machine the output depends on the states along with the external inputs; the output is available as soon as the input changes, therefore it is an 'asynchronous machine' (see Fig. 7.16 for more details). - A Mealy machine requires fewer states than a Moore machine, as shown in Section 7.3.1. - A Moore machine should be preferred for designs where glitches (see Section 7.4) are not a problem in the system. - Mealy machines are good for synchronous systems which require a 'delay-free and glitch-free' system (see the example in Section 7.7.1), but careful design is required for asynchronous systems. Therefore, a Mealy machine can be complex as compared to a Moore machine. 7.3. Example: Rising edge detector¶ A rising edge detector generates a tick for the duration of one clock cycle, whenever the input signal changes from 0 to 1. In this section, state diagrams of the rising edge detector for Mealy and Moore designs are shown. Then the rising edge detector is implemented using Verilog code. Also, the outputs of these two designs are compared. 7.3.1. State diagrams: Mealy and Moore design¶ Fig. 7.2 and Fig. 7.1 are the state diagrams for the Mealy and Moore designs respectively. In Fig. 7.2, the output of the system is set to 1 whenever the system is in the state 'zero' and the value of the input signal 'level' is 1; i.e. the output depends on both the state and the input. Whereas in Fig. 7.1, the output is set to 1 whenever the system is in the state 'edge', i.e. the output depends only on the state of the system. 7.3.2. Implementation¶ Both the Mealy and Moore designs are implemented in Listing 7.1. The listing can be seen as two parts, i.e. the Mealy design (Lines 37-55) and the Moore design (Lines 57-80). Please read the comments for a complete understanding of the code. The simulation waveforms, i.e. Fig. 7.3, are discussed in the next section. 7.3.3. Outputs comparison¶ In Fig. 7.3, it can be seen that the output tick of the Mealy detector is generated as soon as 'level' goes to 1, whereas the Moore design generates the tick after 1 clock cycle. 
These two ticks are shown with the help of the two red cursors in the figure. Since the output of the Mealy design is immediately available, it is preferred for synchronous designs. Fig. 7.3 Simulation waveforms of rising edge detector in Listing 7.1 7.3.4. Visual verification¶ Listing 7.2 can be used to verify the results on the FPGA board. Here, a clock with 1 Hz frequency is used in line 19, which is defined in Listing 6.5. After loading the design on the FPGA board, we can observe on the LEDs that the output of the Moore design is displayed after the Mealy design, with a delay of 1 second. 7.4. Glitches¶ Glitches are short-duration pulses which are generated in combinational circuits. These are generated when more than two inputs change their values simultaneously. Glitches can be categorized as 'static glitches' and 'dynamic glitches'. Static glitches are further divided into two groups, i.e. 'static-0' and 'static-1'. A 'static-0' glitch is a glitch which occurs in a logic-0 signal, i.e. one short pulse, a 'high pulse (logic-1)', appears in the logic-0 signal (and then the signal settles down). A dynamic glitch is a glitch in which multiple short pulses appear before the signal settles down. Note Most of the time, glitches are not a problem in the design. Glitches create problems when they occur in outputs which are used as a clock for other circuits. In this case, the glitches will trigger the next circuits, which will result in incorrect outputs. In such cases, it is very important to remove these glitches. In this section, the glitches are shown for three cases. Since clocks are used in synchronous designs, Section 7.4.3 is of our main interest. To remove the glitch, we can add the prime implicant in the red part as well. This solution is good if only a few such gates are required; however, if the number of inputs whose values are changing simultaneously is very high, then this solution is not practical, as we would need to add a large number of gates. Fig. 7.6 Glitches (see disjoint lines in 'z') in design in Listing 7.3 7.4.2. Unfixable Glitch¶ Listing 7.4 is another example of glitches in the design, as shown in Fig. 7.7. Here, the glitches are continuous, i.e. they occur at every change in the signal 'din'. Such glitches are removed by using a D flip-flop, as shown in Section 7.4.3. Since the output of the Manchester code depends on both edges of the clock (i.e. half of the output changes on the +ve edge and the other half changes on the -ve edge), such glitches are unfixable, as in Verilog both edges cannot be connected to one D flip-flop. Fig. 7.7 Glitches in Listing 7.4 7.4.3. Combinational design in synchronous circuit¶ Combinational designs in sequential circuits were discussed in Fig. 4.1. The output of these combinational designs can depend on the states only, or on the states along with external inputs. The former is known as a Moore design and the latter is known as a Mealy design, as discussed in Section 7.2. Since sequential designs are sensitive to the edge of the clock, glitches can occur only at the edge of the clock. Hence, the glitches at the edge can be removed by sending the output signal through a D flip-flop, as shown in Fig. 7.8. Various Verilog templates for sequential designs are shown in Section 7.5 and Section 7.6. 7.5. Moore architecture and Verilog templates¶ Fig. 7.8 shows the different blocks for the sequential design. In this figure, we have three blocks, i.e. 'sequential logic', 'combinational logic' and 'glitch removal block'. 
In this section, we will define three process-statements to implement these blocks (see Listing 7.6). Further, the 'combinational logic block' contains two different logics, i.e. 'next-state' and 'output'. Therefore, this block can be implemented using two different blocks, which will result in four process-statements (see Listing 7.5). Moore and Mealy machines can be divided into three categories, i.e. 'regular', 'timed' and 'recursive'. The differences between these categories are shown in Fig. 7.9, Fig. 7.10 and Fig. 7.11 for the Moore machine. In this section, we will see different Verilog templates for these categories. Note that the design may be a combination of these three categories, and we need to select the correct template according to the need. Further, examples of these templates are shown in Section 7.7. 7.5.1. Regular machine¶ Please see Fig. 7.9 and note the following points about the regular Moore machine, - The output depends only on the states, therefore no 'if statement' is required in the process-statement. For example, in Lines 85-87 of Listing 7.5, the outputs (Lines 86-87) are defined inside the 's0' (Line 86). - The next-state depends on the current state and the current external inputs. For example, the 'state_next' at Line 49 of Listing 7.5 is defined inside an 'if statement' (Line 48) which depends on the current input. Further, this 'if statement' is defined inside the state 's0' (Line 47). Hence the next state depends on the current state and the current external input. Note In a regular Moore machine, - Outputs depend on the current states. - Next states depend on the current states and current external inputs. Listing 7.6 is the same as Listing 7.5, but the output-logic and next-state logic are combined in one process block. 7.5.2. Timed machine¶ If the state of the design changes after a certain duration (see Fig. 7.10), then we need to add a timer to the Verilog designs which are created in Listing 7.5 and Listing 7.6. For this, we need to add one more process-block which performs the following actions, - Zero the timer: the value of the timer is set to zero whenever the state of the system changes. - Stop the timer: the value of the timer is incremented till the predefined 'maximum value' is reached and then it stops incrementing. Further, its value should not be set to zero until the state changes. Note In a timed Moore machine, - Outputs depend on the current states. - Next states depend on time along with the current states and current external inputs. The template for the timed Moore machine is shown in Listing 7.7, which is exactly the same as Listing 7.6 except for the following changes, - Timer-related constants are added at Lines 22-27. - An 'always' block is added to stop and zero the timer (Lines 44-54). - Finally, timer-related conditions are included in the next-state logic, e.g. Lines 64 and 67 etc. 7.5.3. Recursive machine¶ In a recursive machine, the outputs are fed back as inputs to the system (see Fig. 7.11). Hence, we need an additional process-statement which can store the outputs which are fed back to the combinational block of the sequential design, as shown in Listing 7.8. The listing is the same as Listing 7.7 except that certain signals and a process block are defined to feed the output back to the combinational logic; i.e. Lines 29-31 contain the signals (feedback registers) which are required to feed back the outputs. Here, '_next' and '_reg' are used in these lines, where the 'next' value is fed back as 'reg' in the next clock cycle inside the 'always' statement which is defined in Lines 63-75. 
Lastly, the 'feedback registers' are also used to calculate the next state inside the 'if statements', e.g. Lines 91 and 96. Also, the values of the feedback registers are updated inside these 'if statements', e.g. Lines 93 and 102. Note In a recursive Moore machine, - Outputs depend on the current states. Also, values in the feedback registers are used as outputs. - Next states depend on the current states, current external inputs, current internal inputs (i.e. previous outputs fed back as inputs to the system) and time (optional). 7.6. Mealy architecture and Verilog templates¶ The templates for the Mealy architecture are similar to the Moore architecture. Minor changes are required, as the outputs depend on the current inputs as well, as discussed in this section. 7.6.1. Regular machine¶ In Mealy machines, the output is a function of the current inputs and states, therefore the output is also defined inside the if-statements (Lines 49-50 etc.). The rest of the code is the same as Listing 7.6. 7.6.2. Timed machine¶ Listing 7.10 contains timer-related changes to Listing 7.9. See the description of Listing 7.7 for more details. 7.6.3. Recursive machine¶ Listing 7.11 contains recursive-design related changes to Listing 7.10. See the description of Listing 7.8 for more details. 7.7. Examples¶ 7.7.1. Regular Machine: Glitch-free Mealy and Moore design¶ In this section, a non-overlapping sequence detector is implemented to show the differences between Mealy and Moore machines. Listing 7.12 implements a 'sequence detector' which detects the sequence '110'; the corresponding state diagrams are shown in Fig. 7.12 and Fig. 7.13. The RTL view generated by the listing is shown in Fig. 7.15, where two D-FFs are added to remove the glitches from the Moore and Mealy models. Also, in the figure, if we click on the state machines, then we can see the implemented state diagrams, e.g. if we click on 'state_reg_mealy' then the state diagram in Fig. 7.14 will be displayed, which is exactly the same as Fig. 7.13. Further, the testbench for the listing is shown in Listing 7.13, whose results are illustrated in Fig. 7.16. Please note the following points in Fig. 7.16, - Mealy machines are asynchronous, as the output changes as soon as the input changes. It does not wait for the next cycle. - If the output of the Mealy machine is delayed, then the glitch will be removed and the output will be the same as the Moore output (note that there is no glitch in this system; this example shows how to implement the D-FF to remove a glitch in the system, if one exists). - The glitch-free Moore output is delayed by one clock cycle. - If glitches are not a problem, then we should use the Moore machine, because it is synchronous in nature. But if glitches are a problem and we do not want to delay the output, then Mealy machines should be used. Fig. 7.14 State diagram generated by Quartus for Mealy machine in Listing 7.12 Fig. 7.15 RTL view generated by Listing 7.12 Fig. 7.16 Mealy and Moore machine output for Listing 7.13 7.7.2. Timed machine: programmable square wave¶ Listing 7.14 generates a square wave using a Moore machine, whose 'on' and 'off' times are programmable (see Lines 6 and 7) according to the state diagram in Fig. 7.17. The simulation waveforms of the listing are shown in Fig. 7.18. Fig. 7.18 Simulation waveform of Listing 7.14 7.7.3. Recursive Machine: Mod-m counter¶ Listing 7.15 implements the Mod-m counter using a Moore machine, whose state diagram is shown in Fig. 7.19. The machine is recursive because the output signal 'count_moore_reg' (Line 50) is used as an input to the system (Line 32). 
The simulation waveforms of the listing are shown in Fig. 7.20. Fig. 7.20 Simulation waveform of Listing 7.15 Note It is not good to implement every design using an FSM, e.g. Listing 7.15 can be easily implemented without an FSM, as shown in Listing 6.4. Please see Section 7.8 for understanding the correct usage of FSM design. 7.8. When to use FSM design¶ We saw in previous sections that, once we have the state diagram for the FSM design, the Verilog design is a straightforward process. But it is important to understand the correct conditions for using an FSM, otherwise the circuit will become unnecessarily complicated. - We should not use the FSM diagram if there is only 'one loop' with 'zero or one control input'. A 'counter' is a good example of this. A mod-10 counter has 10 states with no control input (i.e. it is a free-running counter). This can be easily implemented without using an FSM, as shown in Listing 6.3. If we implement it with an FSM, then we need 10 states; and the code and corresponding design will become very large. - If required, an FSM can be used for 'one loop' with 'two or more control inputs'. - FSM design should be used in cases where there is a very large number of loops (especially connected loops) along with two or more controlling inputs. 7.9. Conclusion¶ In this chapter, Mealy and Moore designs are discussed. Also, an 'edge detector' is implemented using Mealy and Moore designs. This example shows that the Mealy design requires fewer states than the Moore design. Further, the Mealy design generates the output tick as soon as the rising edge is detected; whereas the Moore design generates the output tick after a delay of one clock cycle. Therefore, Mealy designs are preferred for synchronous designs.
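The chapter's Verilog listings are referenced above but not reproduced; as a language-neutral illustration, the following short Python sketch models the Section 7.3 rising edge detector in both Mealy and Moore styles (state names follow the text's state diagrams) and prints the one-clock-cycle difference between their output ticks. It is a conceptual model only, not the book's Listing 7.1.

```python
# Conceptual model of the rising edge detector of Section 7.3 (not the Verilog of
# Listing 7.1). Each loop iteration represents one clock cycle; 'level' is the input.

def moore_step(state, level):
    """Moore: tick depends on the current state only ('zero', 'edge', 'one')."""
    tick = 1 if state == "edge" else 0
    if state == "zero":
        next_state = "edge" if level else "zero"
    else:  # "edge" or "one"
        next_state = "one" if level else "zero"
    return next_state, tick

def mealy_step(state, level):
    """Mealy: tick depends on the current state and the current input ('zero', 'one')."""
    tick = 1 if (state == "zero" and level) else 0
    next_state = "one" if level else "zero"
    return next_state, tick

level_samples = [0, 0, 1, 1, 1, 0, 1, 1, 0]
moore_state, mealy_state = "zero", "zero"
print("cycle level mealy_tick moore_tick")
for cycle, level in enumerate(level_samples):
    mealy_state, mealy_tick = mealy_step(mealy_state, level)
    moore_state, moore_tick = moore_step(moore_state, level)
    # The Mealy tick appears in the same cycle as the rising edge;
    # the Moore tick appears one clock cycle later.
    print(f"{cycle:5d} {level:5d} {mealy_tick:10d} {moore_tick:10d}")
```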
https://verilogguide.readthedocs.io/en/latest/verilog/fsm.html
2021-09-17T03:02:53
CC-MAIN-2021-39
1631780054023.35
[array(['../_images/edgeDetectorWave.jpg', '../_images/edgeDetectorWave.jpg'], dtype=object) array(['../_images/glitches.jpg', '../_images/glitches.jpg'], dtype=object) array(['../_images/manchesterGlitch.jpg', '../_images/manchesterGlitch.jpg'], dtype=object) array(['../_images/Altera_Mealy_seq_det_state.jpg', '../_images/Altera_Mealy_seq_det_state.jpg'], dtype=object) array(['../_images/Mealy_Moore_RTL_glitchfree.jpg', '../_images/Mealy_Moore_RTL_glitchfree.jpg'], dtype=object) array(['../_images/Mealy_MooreGlitchFree.jpg', '../_images/Mealy_MooreGlitchFree.jpg'], dtype=object) array(['../_images/square_wave_ex.jpg', '../_images/square_wave_ex.jpg'], dtype=object) array(['../_images/counterEx.jpg', '../_images/counterEx.jpg'], dtype=object) ]
verilogguide.readthedocs.io
CI/CD integrations - Jenkins, TFS, Azure DevOps. Git integrations - all standard git service providers. Bug report integrations - Jira. In order to integrate your CI/CD tool, you need to configure 'CloudBeat CLI' as a part of your build process. Here's a video on how to do it: To sync with your Git service provider you need to provide the following: URL - the address of your git project Branch name - for example: 'master' Name - your git username Password - your git password This configuration needs to be done when first creating a project. When a test case finishes with status failed, you may integrate it with a bug report system such as 'Jira'. You need to do the following: Go to your failed test results page by clicking it. Select a failure reason from the drop-down list on the top line of the test case. On the top line you will have a 'Jira' button - click it. Link the bug you opened in Jira in the 'Link' field. Click 'Add' and the dialog will close.
https://docs.cloudbeat.io/features/integrations
2021-09-17T04:49:59
CC-MAIN-2021-39
1631780054023.35
[]
docs.cloudbeat.io
VisualScript¶ Inherits: Script < Resource < Reference < Object A script implemented in the Visual Script programming environment. Description¶ A script implemented in the Visual Script programming environment. The script extends the functionality of all objects that instance it. Object.set_script extends an existing object, if that object's class matches one of the script's base classes. You are most likely to use this class via the Visual Script editor or when writing plugins for it. Tutorials¶ Signals¶ Emitted when the ports of a node are changed. Method Descriptions¶ Add a custom signal with the specified name to the VisualScript. Add a function with the specified name to the VisualScript. void add_node ( String func, int id, VisualScriptNode node, Vector2 position=Vector2( 0, 0 ) ) Add a node to a function of the VisualScript. Add a variable to the VisualScript, optionally giving it a default value or marking it as exported. void custom_signal_add_argument ( String name, Variant.Type type, String argname, int index=-1 ) Add an argument to a custom signal added with add_custom_signal. Get the count of a custom signal's arguments. Get the name of a custom signal's argument. Variant.Type custom_signal_get_argument_type ( String name, int argidx ) const Get the type of a custom signal's argument. Remove a specific custom signal's argument. Rename a custom signal's argument. void custom_signal_set_argument_type ( String name, int argidx, Variant.Type type ) Change the type of a custom signal's argument. Swap two of the arguments of a custom signal. Connect two data ports. The value of from_node's from_port would be fed into to_node's to_port. Disconnect two data ports previously connected with data_connect. Returns the id of a function's entry point node. Returns the position of the center of the screen for a given function. VisualScriptNode get_node ( String func, int id ) const Returns a node given its id and its function. Returns a node's position in pixels. Returns the default (initial) value of a variable. Returns whether a variable is exported. Dictionary get_variable_info ( String name ) const Returns the information for a given variable as a dictionary. The information includes its name, type, hint and usage. Returns whether a signal exists with the specified name. bool has_data_connection ( String func, int from_node, int from_port, int to_node, int to_port ) const Returns whether the specified data ports are connected. Returns whether a function exists with the specified name. Returns whether a node exists with the given id. Returns whether the specified sequence ports are connected. Returns whether a variable exists with the specified name. Remove a custom signal with the given name. Remove a specific function and its nodes from the script. Remove a specific node. Remove a variable with the given name. Change the name of a custom signal. Change the name of a function. Change the name of a variable. Connect two sequence ports. The execution will flow from of from_node's from_output into to_node. Unlike data_connect, there isn't a to_port, since the target node can have only one sequence port. Disconnect two sequence ports previously connected with sequence_connect. Position the center of the screen for a function. Set the base type of the script. Position a node on the screen. Change the default (initial) value of a variable. Change whether a variable is exported. 
void set_variable_info ( String name, Dictionary value ) Set a variable's info, using the same format as get_variable_info.
https://docs.godotengine.org/es/latest/classes/class_visualscript.html
2021-09-17T04:59:54
CC-MAIN-2021-39
1631780054023.35
[]
docs.godotengine.org
Using the Model Reconstruction Service¶ The Model Reconstruction submenu option under the Services main menu (Metabolomics category) opens the Reconstruct Metabolic Model input form (shown below). Note: You must be logged into PATRIC to use this service. The Model Reconstruction service is also available via the PATRIC Command Line Interface (CLI). Select a Genome¶ Select a public genome or a genome from the workspace. Optional Parameters¶ Media¶ Selection of a predefined media formulation from a dropdown list for this model to be gapfilled and simulated for flux-balance analysis (FBA), e.g., GMM - Glucose Minimal Media, NMS - Nitrate Mineral Salts Medium, LB - Luria-Bertani Medium. Output Results¶ The Metabolic Reconstruction Service generates several files that are deposited in the Private Workspace in the designated Output Folder. These include gf.0.gftbl - a Gapfill reactions table containing the list of the reactions added to the model by the gapfilling algorithm during reconstruction. gf.0.fbatbl - a FBA flux distribution for the gapfilling simulation. model_name.sbml - the model in Systems Biology Markup Language (SBML) format. model_name.rxntbl - list of all the reactions in the model. model_name.cpdtbl - list of all the compounds in the model. fba.0.futbl - FBA flux for all model reactions. fba.0.essentials - list of genes in the model characterized as essential by the FBA simulation in the specified media. ModelSEED Model Viewer¶ Clicking on the View icon at the upper right portion of the job result page will open a login screen for the ModelSEED. PATRIC or RAST credentials can be used for login. After login, the ModelSEED Model Viewer will be displayed, consisting of multiple tabs for Reactions, Compounds, Genes, Compartments of the model, and the weighted components of the Biomass. Each column in the table is sortable by clicking the column header. Information about the model is searchable by using the search bar.
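As a hedged aside (not part of the PATRIC documentation), the SBML file listed above can also be inspected programmatically; the sketch below assumes the COBRApy package is installed and that the service's model_name.sbml output has been downloaded to the working directory.

```python
# Illustrative only: load the SBML model produced by the service and summarize it.
# Assumes `pip install cobra` and a downloaded model_name.sbml file.
import cobra

model = cobra.io.read_sbml_model("model_name.sbml")
print(f"{len(model.reactions)} reactions, "
      f"{len(model.metabolites)} metabolites, "
      f"{len(model.genes)} genes")

# Run a quick flux-balance analysis on the loaded model.
solution = model.optimize()
print("FBA objective value:", solution.objective_value)
```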
https://docs.patricbrc.org/user_guides/services/model_reconstruction_service.html
2021-09-17T04:44:27
CC-MAIN-2021-39
1631780054023.35
[array(['../../_images/services_menu.png', 'Model Reconstruction Menu'], dtype=object) array(['../../_images/model_reconstruction_service_results_page.png', 'Metabolic Reconstruction Service Results Page'], dtype=object) array(['../../_images/model_reconstruction_view_icon.png', 'Model Reconstruction Service View Icon'], dtype=object) array(['../../_images/modelseed_login.png', 'ModelSEED Login'], dtype=object) array(['../../_images/modelseed_model_viewer.png', 'ModelSEED Model Viewer'], dtype=object) ]
docs.patricbrc.org
The At a Glance card can be found on all Divisions on the Details page, providing you with quick stats about a particular business. The first card within the At a Glance section shows the financial difference between the last 365 days and the previous 365 days. This gives the user a quick indication as to whether the customer's spending has increased or decreased over the past year. If you click onto the card it'll open the Sales Transactions page, detailing the products this business has purchased with you. Below the Last 365 Days card is the Sales Value, detailing the total value spent so far in the current calendar year. If you click on the card, it'll list all the Products purchased by this business in that calendar year, providing great insight into their purchasing behaviour. Underneath the Google Maps preview of the Division's location, you can then view how many active Opportunities, Problems and Contacts the Division has. If you were to click on any one of these cards, it'd take you to their individual pages e.g. if you click on the Active Problems card it will open the Problem page.
https://docs.prospect365.com/en/articles/2057705-at-a-glance-information-division-level
2021-09-17T03:34:52
CC-MAIN-2021-39
1631780054023.35
[]
docs.prospect365.com
This chapter provides a high-level introduction to Spring Integration's core concepts and components. It includes some programming tips to help you make the most of Spring Integration. Spring Integration is motivated by a set of goals and guided by a set of principles. Section 6.1.2, "Message Channel Implementations" has a detailed discussion of the variety of channel implementations available in Spring Integration. For annotated endpoints, bean names follow conventions such as someService (the id), someComponent.someMethod.serviceActivator, and someAdapter.source (as long as you use the convention of appending .source to the @Bean name). See Section 10.4 and Section B.1.1, "Annotation-driven Configuration with the @Publisher Annotation" for more information. The @GlobalChannelInterceptor annotation has been introduced to mark ChannelInterceptor beans for global channel interception. This annotation is an analogue of the <int:channel-interceptor> XML element. Example 5.1. pom.xml This section documents some of the ways to get the most from Spring Integration. The main components (see Section 5.3, "Main Components", earlier in this chapter) and their implementations (contracts) are: org.springframework.messaging.Message: see Chapter 7, Message; org.springframework.messaging.MessageChannel: see Section 6.1, "Message Channels"; org.springframework.integration.endpoint.AbstractEndpoint: see Section 6.2.
https://docs.spring.io/spring-integration/docs/5.1.2.RELEASE/reference/html/overview.html
2021-09-17T04:48:49
CC-MAIN-2021-39
1631780054023.35
[]
docs.spring.io
Every: When you create an actor property or flow property, if you use the Show method, you can choose from a list of aggregations. One of these aggregations is every, which produces a set. An actor property that uses the every aggregation on a column produces a set containing all values of that column for the corresponding actor.
https://docs.scuba.io/glossary/Every.1302331609.html
2021-09-17T02:52:57
CC-MAIN-2021-39
1631780054023.35
[]
docs.scuba.io
Change the visibility of properties This article explains how to change the display of properties that you've created or that have been shared with you: Hiding and unhiding properties There are two global visibility settings for properties: hidden or visible. Visible properties—can be viewed and searched for in the expression builder by all users. Properties can be selected to define other properties, as well as used in queries. Objects are visible by default. Hidden properties—can help keep drop-down lists manageable, and discourage other users from using specific properties. Hidden properties are not shown in drop-down menus in the expression builder. Hidden properties are still visible in object lists. All Scuba users can toggle a property to be visible or hidden, provided they can view the property (that is, provided they have created the property or someone shared it with them). You might want to hide properties that other users don't rely on, or that they shouldn't use, so that they can use only the clean version of the event property. Suppose you've created an Event Property A that is defined by a second object, Event Property B, which is in turn defined by a third object, Event Property C. In this case, you might want only Event Property A to be visible. To use an object in a query or to define an object but otherwise hide it from lists, first make the object visible. Then after you've used it, toggle it back to hidden. To hide a property and make it visible again, do the following: Click Data in the left menu. In the right pane, click the tab for the type of property you want to hide (event property, actor property, and so on). In the list of properties, select the property you want to hide. At the top of the page, click the eye icon and select Hidden from the drop-down list. The property will be hidden from every user's expression builder. Favoriting properties Personal favorite—can be applied by any user, and is visible only to that user. Personal favorites are identified by a gold star. An object's popularity rating is based on the number of times it has been marked as a favorite by users. Admin favorite—can be applied only by admin users, and is visible to all users. Admin favorites are identified by a gray star. Admin favorited properties can help guide users in their selection for queries. Favorited items appear at the top of lists, in alphabetical order. Personal favorites are shown before admin favorites. To favorite a property, do the following: In the left menu bar, click Data. At the top of the page, click the type of object you want to favorite (event property, actor property, flow, or measure). In the list on the right, click the star for the property you want to favorite. The star to the left of the property name appears gold, and the property will appear at the top of your lists, in alphabetical order. Your personal favorites (gold stars) appear before admin favorites (gray stars). What's Next You've successfully created properties and changed the visibility. Now learn how to:
https://docs.scuba.io/guides/Change-the-visibility-of-properties.1302496257.html
2021-09-17T03:21:59
CC-MAIN-2021-39
1631780054023.35
[]
docs.scuba.io
Delete an Agent Group Use this procedure to delete an agent group in Nessus Manager. To delete an agent group: In the top navigation bar, click Scans. The My Scans page appears. In the left navigation bar, click Agents. The Agents page appears. - Click the Groups tab. In the row for the agent group that you want to delete, click the button. A confirmation window appears. To confirm, click Delete. The manager deletes the agent group.
https://docs.tenable.com/nessus/8_4/Content/DeleteAnAgentGroup.htm
2021-09-17T03:59:29
CC-MAIN-2021-39
1631780054023.35
[]
docs.tenable.com
CPU Sprites in UE4 can cast shadows on the environment, which we can see in this example of puffy clouds. However, it should be noted that shadow casting such as this does not work for GPU particles. To set up a Particle System to cast shadows, there are a few things you must do: First, the Emitter itself must have its Cast Shadow property enabled. Secondly, the lights that are intended to affect the particles must have Cast Translucent Shadows enabled. And finally, the settings for the shadow and self-shadow behavior are located in the Material used in this particle effect, under the two Translucency groups in the material properties. This image shows these properties from the Material Editor.
https://docs.unrealengine.com/4.26/en-US/Resources/ContentExamples/EffectsGallery/1_H/
2021-09-17T05:18:57
CC-MAIN-2021-39
1631780054023.35
[array(['./../../../../../Images/Resources/ContentExamples/EffectsGallery/1_H/image038.jpg', 'image038.png'], dtype=object) array(['./../../../../../Images/Resources/ContentExamples/EffectsGallery/1_H/image040.jpg', 'image040.png'], dtype=object) array(['./../../../../../Images/Resources/ContentExamples/EffectsGallery/1_H/image042.jpg', 'image042.png'], dtype=object) array(['./../../../../../Images/Resources/ContentExamples/EffectsGallery/1_H/image044.jpg', 'image044.png'], dtype=object) ]
docs.unrealengine.com
The KB Admin site allows Knowledge Base administrators to view data about articles and authors, create new articles, manage existing articles, approve and manage article comments, answer end-user questions, manage images, and create and manage Knowledge Base article categories. The KB Admin site must be configured before it can be used. Read KBSA Configuration Overview for detailed information on configuring the KB Admin site. Access the lists, libraries, discussion boards and other site content used in the KB Admin site by selecting Site Actions > View All Site Content and selecting the desired item. Click the links below for more information on each section. - Workflow Configuration List: Create customized buttons for the ribbon for workflow approval processes. - Reporting: View data and statistics about Knowledge Base articles. - Authors: View data about Knowledge Base articles according to author. - Pictures: Upload article- or category-related images. - Pending Questions: Answer questions submitted by end users. - Create or Edit an Article: Create and edit articles. Manage article and category permissions. - Categories & Types: Create and edit article categories. - Tag Cloud: Create tags to be associated with articles. - Permissions: Manage user permissions.
https://docs.bamboosolutions.com/document/how_to_use_the_kb_admin_site/
2021-09-17T03:22:52
CC-MAIN-2021-39
1631780054023.35
[]
docs.bamboosolutions.com
Query results cache Hive filters and caches similar or identical queries in the query result cache. Caching repetitive queries can reduce the load substantially when hundreds or thousands of users of BI tools and web services query Hive.
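As an illustrative sketch only (not from the Cloudera page): assuming a HiveServer2 endpoint on localhost:10000, the PyHive client, and a hypothetical sales table, the snippet below toggles the Hive 3.x results-cache property for a session and repeats a query that can then be answered from the cache.

```python
# Illustrative sketch: enable the query results cache for this session and run a
# repeated query. Host, port, credentials, and the `sales` table are assumptions.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000, username="hive")
cur = conn.cursor()

# Session-level toggle for the results cache.
cur.execute("SET hive.query.results.cache.enabled=true")

for _ in range(2):
    # An identical, deterministic query; the second run can be served from the cache.
    cur.execute("SELECT count(*) FROM sales")
    print(cur.fetchall())
```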
https://docs.cloudera.com/runtime/7.2.11/hive-performance-tuning/topics/hive-query-results-cache.html
2021-09-17T04:11:07
CC-MAIN-2021-39
1631780054023.35
[]
docs.cloudera.com
Storage Before you deploy a PostgreSQL cluster with Cloud Native PostgreSQL, ensure that the storage you are using is recommended for database workloads. Our advice is to clearly set performance expectations by first benchmarking the storage using tools such as fio, and then the database using pgbench. Benchmarking Cloud Native PostgreSQL EDB maintains cnp-bench, an open source set of guidelines and Helm charts for benchmarking Cloud Native PostgreSQL. Workaround for volume expansion on AKS This paragraph covers an Azure issue with AKS storage classes, which are supposed to support online resizing but actually require the following workaround. Let's suppose you have a cluster with three instances; the example pod listing shows pods such as cluster-example-4 in the Running state.
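The page recommends benchmarking storage with fio and the database with pgbench; the following sketch (an assumption-laden illustration, not part of cnp-bench) drives a minimal run of both tools from Python via subprocess, assuming they are installed, that /var/lib/postgresql sits on the volume under test, and that a bench database is reachable with the given credentials.

```python
# Minimal, illustrative benchmark driver. All paths, sizes, and connection details
# are assumptions; adjust them for your environment before running.
import subprocess

# 1) Raw storage: a short sequential-write test with fio on the target directory.
subprocess.run([
    "fio", "--name=seqwrite", "--rw=write", "--bs=1M", "--size=1G",
    "--numjobs=1", "--directory=/var/lib/postgresql", "--group_reporting",
], check=True)

# 2) Database: initialize pgbench tables, then run a 60-second benchmark.
subprocess.run(["pgbench", "-i", "-s", "10", "-h", "localhost", "-U", "postgres", "bench"], check=True)
subprocess.run(["pgbench", "-c", "4", "-j", "2", "-T", "60", "-h", "localhost", "-U", "postgres", "bench"], check=True)
```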
https://docs.enterprisedb.io/cloud-native-postgresql/1.8.0/storage/
2021-09-17T04:39:17
CC-MAIN-2021-39
1631780054023.35
[]
docs.enterprisedb.io
Network Access Before removing the serial console or HDMI monitor and USB keyboard, verify network connectivity. The provided images default to obtaining network settings from a DHCP server. Verify Network Configuration Determine your IP address: $ ip addr Check that you have a default gateway: $ ip route Ping a known host: $ ping -c3 iot.fedoraproject.org Verify that sshd is running: $ systemctl is-active sshd View the default firewall configuration: $ sudo firewall-cmd --list-all Configure Remote Access Once the initial setup is complete, the serial console or HDMI monitor and USB keyboard are no longer required to access the device. The device can be left in a headless state and accessed remotely. The released images are configured to run an SSH server and accept outside connections. The default configuration allows root login if a password is set. This is a good reason to leave the root account locked. The default configuration also allows standard users to login and the user can then sudo if they were made an administrator as a member of the wheel group. $ ssh [email protected]
https://docs.fedoraproject.org/sk/iot/network-access/
2021-09-17T05:01:25
CC-MAIN-2021-39
1631780054023.35
[]
docs.fedoraproject.org
Syntax newrelic_background_job([bool $flag]) Manually specify that a transaction is a background job or a web transaction. Requirements Compatible with all agent versions. Description Tell the agent to treat this "web" transaction as a "non-web" transaction (the APM UI separates web and non-web transactions, for example in the Transactions page). Call as early as possible. This is most commonly used for cron jobs or other long-lived background tasks. However, this call is usually unnecessary, since the agent normally detects whether a transaction is a web or non-web transaction automatically. You can also reverse the functionality by setting the optional flag to false, which marks a "non-web" transaction as a "web" transaction. Parameters Examples Mark transaction as a background job function example() { if (extension_loaded('newrelic')) { // Ensure PHP agent is available newrelic_background_job(); } ... } Mark transaction as a web transaction function example() { if (extension_loaded('newrelic')) { // Ensure PHP agent is available newrelic_background_job(false); } ... }
https://docs.newrelic.com/docs/agents/php-agent/php-agent-api/newrelic_background_job/?q=
2021-09-17T05:18:56
CC-MAIN-2021-39
1631780054023.35
[]
docs.newrelic.com
Returns the versioning state of a bucket. To retrieve the versioning state of a bucket, you must be the bucket owner. This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket. The following operations are related to GetBucketVersioning: See also: AWS API Documentation See 'aws help' for descriptions of global parameters. get-bucket-versioning --bucket <value> [--expected-bucket-owner <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --bucket (string) The name of the bucket for which to get the versioning information. The following command retrieves the versioning configuration for a bucket named my-bucket: aws s3api get-bucket-versioning --bucket my-bucket Output: { "Status": "Enabled" } Status -> (string) The versioning state of the bucket. MFADelete -> (string) Specifies whether MFA delete is enabled in the bucket versioning configuration. This element is only returned if the bucket has been configured with MFA delete. If the bucket has never been so configured, this element is not returned.
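For comparison only (an illustrative addition, not part of the CLI reference), the same call can be made from Python with boto3, assuming credentials are already configured and you own the my-bucket bucket used in the example above.

```python
# Illustrative boto3 equivalent of `aws s3api get-bucket-versioning --bucket my-bucket`.
# Assumes credentials via the environment or ~/.aws/credentials.
import boto3

s3 = boto3.client("s3")
resp = s3.get_bucket_versioning(Bucket="my-bucket")

# "Status" is absent if versioning has never been configured on the bucket;
# "MFADelete" is only present when MFA delete has been configured.
print("Status:", resp.get("Status", "Not configured"))
print("MFADelete:", resp.get("MFADelete", "Not configured"))
```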
https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-versioning.html
2021-09-17T05:37:01
CC-MAIN-2021-39
1631780054023.35
[]
docs.aws.amazon.com
You will find many of our products complement each other quite nicely saving you the time and trouble to write custom code. With Bamboo at the heart of your SharePoint investment, you gain access to a huge catalog of enhancements, components, and accessories that add the critical functionality your business requires. The same components can be easily used in future applications and they all come from a single, trusted vendor, ensuring an easy purchase process and support you can count on. That’s the Bamboo Way!
https://docs.bamboosolutions.com/document/complementary_products_for_bamboo_select/
2021-09-17T04:20:57
CC-MAIN-2021-39
1631780054023.35
[]
docs.bamboosolutions.com
Date: Fri, 16 Mar 2018 09:25:37 -0400 From: Lowell Gilbert <[email protected]> To: Bob McDonald <[email protected]> Cc: [email protected] Subject: Re: Removing local_unbound Message-ID: <[email protected]> In-Reply-To: <CAM-YptdAXG-hLEGYewKMnpnLzC-4pX3BuZy1J8i4uny_=BMqEw@mail.gmail.com> (Bob McDonald's message of "Fri, 16 Mar 2018 08:42:59 -0400") References: <CAM-YptdAXG-hLEGYewKMnpnLzC-4pX3BuZy1J8i4uny_=BMqEw@mail.gmail.com> Bob McDonald <[email protected]> writes: > > Check "man 5 src.conf". Add > >> WITHOUT_UNBOUND=1 > >> to /etc/src.conf. > > > OK, is there any way to do it that doesn't require rebuilding the system? > > This will certainly work but if there are lots of systems, this could get > cumbersome. > > Is there something in the install that would leave these programs out? You don't really need to rebuild the system, just "make delete-old". Or you could make your own release without unbound. But really, it's probably not worth worrying about, any more than it is to have bsnmpd(1) on your system even if you don't use it.
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=240337+0+archive/2018/freebsd-questions/20180318.freebsd-questions
2021-09-17T05:27:13
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
VideoPlayer¶ Inherits: Control < CanvasItem < Node < Object Control for playing video streams. Description¶ Control node for playing video streams using VideoStream resources. Supported video formats are WebM ( .webm, VideoStreamWebm), Ogg Theora ( .ogv, VideoStreamTheora), and any format exposed via a GDNative plugin using VideoStreamGDNative. Note: Due to a bug, VideoPlayer does not support localization remapping yet. Warning: On HTML5, video playback will perform poorly due to missing architecture-specific assembly optimizations, especially for VP8/VP9. Note: Changing this value won't have any effect as seeking is not implemented yet, except in video formats implemented by a GDNative add-on.
https://docs.godotengine.org/en/3.3/classes/class_videoplayer.html
2021-09-17T03:22:06
CC-MAIN-2021-39
1631780054023.35
[]
docs.godotengine.org
Select Intellicus if Intellicus should authenticate the users of this organization. The user name and password will be stored in the Intellicus repository. When a user logs in, Intellicus will verify the credentials before allowing access. This is selected when Intellicus is deployed as stand-alone. Select External Application if users trying to access Intellicus will be authenticated by an external application. This is generally the case when Intellicus is integrated in an application that uses another application to authenticate its users. Select the Type and provide the Server IP and Port of the machine where the application responsible for authentication is available. The User DN of the user is required to authenticate to the external source. In this case, you need to map application users with Intellicus users. The supported types in External Authentication are Windows NT and LDAP. Figure 3: New Organization – Advanced tab Select Host Application if Intellicus is integrated in an application that will also take care of authenticating the users trying to access Intellicus. In this case, you need to map application users with Intellicus users. Select Call Back Mechanism when Intellicus is integrated in an application and Intellicus should call the host application's code to perform the authentication check. In this case, a function is called with the user credentials as arguments. This function carries out authentication and returns the result. For your authentication code, the selected call method is Local. Specify the Server IP and the Port of the call back server. Under the Java Class Implementer, specify the implementer class name that Intellicus should call. Intellicus' Global Filters feature allows you to set fields based on which every report related query can be filtered. This is to make sure that the users have access to the desired information only. To apply a security filter, check the Apply Security Filter checkbox. If Intellicus should apply the filter, select the Intellicus option. - Global Filter Column Name: This is a requirement on the database side. All filterable tables must contain a common named column for applying automatic global filtering. Specify that column in this box. - User Attribute: Select the user attribute that should be used for matching the secured records for a logged in user. - Ignore if Not Present: Check this checkbox to ignore filtering if the global filter column is not found in the database. When kept unchecked, if the global filter column is not found, Intellicus will generate an exception. Select the Call Back Mechanism button. Specify the respective values based on the requirements of the application taking care of the call back. Make password related settings here. To set a minimum password length, specify the number of characters that a password must have in the Minimum Password Length box. When a user is created in Intellicus, you need to provide a password. Check Reset Password on First Login to force the user to change the password on first logon. It is a good practice to change passwords at regular intervals. In Password Expires, specify the number of days after which the password should expire. If you don't want to force password change, check the Password Never Expires checkbox.
https://docs.intellicus.com/documentation/basic-configurations-must-read-19-0/user-and-access-rights-management-in-intellicus-19-0/organization/advanced-tab/
2021-09-17T02:57:54
CC-MAIN-2021-39
1631780054023.35
[array(['https://docs.intellicus.com/wp-content/uploads/2018/12/New-organization-Advanced-Tab.png', 'Advanced tab'], dtype=object) ]
docs.intellicus.com
This chapter has information and steps to get you started with installing and evaluating Intellicus in the quickest way. Downloading Intellicus To acquire the Intellicus setup, you can either contact our support team ([email protected]) or download the evaluation version from the web site. Before you install Here are a few points to be noted before starting the installation: Operating System The Intellicus server and portal can be deployed on most of the popular operating systems. Instructions provided in this manual are for the Windows platform. Server Port By default, the Intellicus report server listens at port 45450. If this port is busy on the machine where Intellicus is being installed for evaluation, you can specify a different server port at the time of installation. Intellicus Portal By default, Intellicus runs on the Jakarta tomcat web server at port 80. If this port is already used by another application, you can specify a different portal port for Intellicus at the time of installation. Note: You will receive an "HTTP 404 error: Page not found" message in the browser when the Intellicus portal does not start up because port 80 (or the one allocated to the web server) was used by another application. Installing Intellicus The Intellicus setup for Windows is made available as the 'Intellicus_win.exe' file. To start installation, you need to double-click this file. Figure 1: Welcome screen displayed upon running setup.exe Click Next on the screens to install Intellicus with the default configuration: - Installation folder: C:/Program Files/ - Server port: 45450 - Web Server: Jakarta tomcat - Portal port: 80 - Timeouts: 600 seconds Note: As part of the installation steps, you will be allowed to change the above information. In the last step of installation, setup asks if it should start the Intellicus Service and launch the portal. Figure 2: "Installation Complete" screen Launching Intellicus Portal Intellicus provides a browser based user interface. Specify the following in your browser's address bar to launch Intellicus when it is running on the local machine: http://localhost/intellicus When Intellicus is running on a remote machine (in the same network as the local host), the URL will be: http://(ip of the machine)/intellicus Below are the default credentials provided by Intellicus: - User Name: Admin - Password: Admin - Organization Name: Intellica (Click "More Option" if you wish to select any organization other than the default.) Intellicus' demo setup has three pre-defined user roles: - Administrator: A super admin user who can carry out all configuration and administrative activities of the application across all organizations - Designer: A user who can design standard, ad hoc and smart reports on permitted databases/data sources - End User: A business user who can view the already designed and saved reports to which he/she has been allowed access permissions To explore and evaluate all the features of Intellicus, log in using the Admin / Admin user name and password combination. The below screen enables you to log in to the application. Figure 3: Intellicus login page Creating the ROOT folder To launch Intellicus without using /intellicus in the URL (as specified above), follow the below steps to create the ROOT folder environment of Intellicus: - Stop the Report Server and the Web Server if running - Take a backup of Intellicus - Delete the '<INSTALL DIR>\jakarta\work' folder - Go to <Installed Path>\Intellicus\jakarta\webapps\ - Rename the "intellicus" folder as "ROOT" Note: "ROOT" is case sensitive. You need to provide this name exactly as it is. 
- Go to Tomcat server settings at <Installed Path>/Intellicus/jakarta/conf/ and edit the “server.xml” file - Comment as shown below: <!– This tag is added to avoid session persistence mechanism <Context path=”/intellicus” docBase=”intellicus”> <Manager pathname=””> </Manager> </Context>–> - Save the server.xml file - Start the Report Server and the Web Server - URL for the updated setup now is: <%> for example : Changing the port number Below is the information on how to change the port number when Jakarta Web Server is used. The port number information is stored in server.xml file. Figure 4: Location of server.xml To change the port number: - Open server.xml in a text editor. - Change the value of Connector port to a number that is free. - Save the changes. Figure 5: Changing the port number After successful login, the following Home Page appears: Figure 6: Homepage for the Admin user Figure 7: Homepage for the Designer user Figure 8: Homepage for the End user What to do next Intellicus demo installation includes a limited demo data which you can use to get a feel of Intellicus features. Explore what you can do on portal Listed below are some of the portal pages you may wish to explore. Figure 9: Intellicus Menu Query Objects /Data Sources: Navigate > Design > Query Object Query Objects form the business meta-layer for end user reporting. Query Objects hide the physical database details, SQL complexities, table names etc. from the end user. A query object contains details to fetch desired data from a data connection. A query object acts as a data source to reports, parameter and analytical objects. (For details, refer Working with Query Objects) Smart Report: Navigate > Analytics > Smart View A self-service tool to design reports with drag and drop actions for performing on-the-fly operations. Get contemporary style of web controls like dynamic data grids and various charting options for data depiction. (For details, refer Configuring Ad hoc Reporting, Working with Smart View)) High Speed Report (OLAP Reports): Navigate > Analytics > High Speed View Intellicus’ evolved multi-dimensional analytics to let you explore your data from all dimensions in a single screen to get a panoramic view of your business at the speed of your thought. (For details, refer Working with High Speed View)) Ad hoc Reports –customization and report design: Navigate > Administration > Configure > Ad hoc Wizard, Navigate > Design > Ad hoc Template, Navigate > Design > Ad hoc Report, Navigate > Analytics > Smart View Ad hoc reporting feature brings report design functionalities from designers to the users. Users can not only view report output, but also study and analyze the output by carrying out many analysis activities like sorting, grouping and filtering the output without returning to the designer screen. (For details, refer Configuring Ad hoc Reporting, Designing Ad hoc Reports) Running Reports: Explorer > expand category > right click the report > select the Run Report option Intellicus provide facility to run a report in different formats. Apart from this you can also schedule (using Quick Schedule and Batch Scheduling option) and view results later. (For details, refer Running Reports In Intellicus, Batch Scheduler) Dashboards: Navigate > Design > Dashboard Widget, Navigate > Analytics > Dashboard To visualize your business data at a glance, monitor overall performance and make informed business decisions. 
(For details, refer Creating Dashboard Widgets, Working with Dashboards) Batch Scheduler: Navigate > Schedule > Jobs Scheduling of reports is very helpful for better utilization of server and printer resources. (For details, refer Batch Scheduler) Configurations: Navigate > Administration > Configure You can configure properties related to Report Server and Client (portal) as well as configure email templates and customize configuration files. (For details, refer Configuring Intellicus) Favorites: Explorer > expand category > right click the object > select the Add to Favorites option. Standard Reports: Intellicus standard reporting enables you to create pixel-perfect, form style and printable format reports to support your organization’s operational tasks. An enterprise-grade scheduler and dispatcher of Intellicus helps you distribute the soft copies to thousands of users without manual intervention. You need to use Intellicus Studio to design and deploy standard reports. In Windows, click Windows Start > All apps > Intellicus > Studio to start the Studio application. (For details, refer Desktop Studio-A Tour)
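The port-change steps above edit the Connector entry in Jakarta Tomcat's server.xml. A minimal sketch of what that entry typically looks like is shown below; the port value 8080 and the other attributes are illustrative assumptions, not values taken from the Intellicus documentation.

<!-- server.xml (sketch): change the Connector port from 80 to a free port -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

After saving a change like this and restarting the web server, the portal URL normally needs the new port appended, e.g. http://(ip of the machine):8080/intellicus.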
https://docs.intellicus.com/documentation/installation-manuals-19-0/
2021-09-17T04:38:51
CC-MAIN-2021-39
1631780054023.35
[array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-1g.png', 'Welcome screen'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-2g.png', 'Installation Complete screen'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-3g.png', 'login page'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-4g.png', 'location of server.xml'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-5g.png', 'change the port number'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-6g.png', 'Homepage for Admin'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-7g.png', 'Homepage for Designer'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-8g.png', 'Homepage for End User'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-9g.png', 'menu'], dtype=object) ]
docs.intellicus.com
RSKcalculateCTlag.m

Input
- Required: RSK
- Optional:
  - pressureRange: [0, max(Pressure)] (default)
  - profile: [ ] (all profiles default)
  - direction: up, down or both
  - windowLength: 21 (default)

Output
- lag

This function estimates the optimal conductivity time shift relative to temperature in order to minimise salinity spiking. The algorithm works by computing the salinity for conductivity lags of -20 to 20 samples at 1 sample increments. The optimal lag is determined by constructing a high-pass filtered version of the salinity time series for every lag, and then computing the standard deviation of each. The optimal lag is the one that yields the smallest standard deviation.

The pressureRange argument allows for the possibility of estimating the lag on specific sections of the profile. This can be useful when unreliable measurements near the surface are found to impact the optimal lag, or when the profiling speed is highly variable.

The lag output by RSKcalculateCTlag is compatible with the lag input argument for RSKalignchannel when lagunits is specified as samples in RSKalignchannel.

Example:

rsk = RSKopen('file.rsk');
rsk = RSKreadprofiles(rsk, 'profile', 1:10, 'direction', 'down');

% 1. All downcast profiles with default smoothing
lag = RSKcalculateCTlag(rsk);

% 2. Specified profiles (first 4), reference salinity found with 13 pt boxcar.
lag = RSKcalculateCTlag(rsk, 'profile',1:4, 'windowLength',13);
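As a follow-on to the example above, the sketch below restricts the estimate to a pressure range and then feeds the lag into RSKalignchannel. The parameter names ('pressureRange', 'channel', 'lag', 'lagunits') are written to match the description above and may differ slightly between RSKtools versions, and the 1-50 dbar range is purely illustrative.

% Estimate the lag using only data between 1 and 50 dbar (illustrative range),
% then align conductivity to temperature using that lag, in samples.
lag = RSKcalculateCTlag(rsk, 'pressureRange', [1 50]);
rsk = RSKalignchannel(rsk, 'channel', 'Conductivity', 'lag', lag, ...
                      'lagunits', 'samples');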
https://docs.rbr-global.com/rsktools/process/post-processors/rskcalculatectlag-m
2021-09-17T04:39:41
CC-MAIN-2021-39
1631780054023.35
[]
docs.rbr-global.com
AnimationEngine The Animation Engine is an interactive web page available to subscribers only. It supports generation of animated codes and standard codes, with full access to all parameters, without having to code or learn a programming language. There is a link to it from the top-level API URL, api.acme.codes. This page can also be helpful to developers who are learning the API, as the calls used to create animated and standard codes are displayed as individual codes are made.
https://acme.readthedocs.io/en/latest/AnimationEngine.html
2021-09-17T03:25:19
CC-MAIN-2021-39
1631780054023.35
[array(['_images/AnimationEngine.png', '_images/AnimationEngine.png'], dtype=object) ]
acme.readthedocs.io
To create our first cross-tab, let us have a detailed look at the cross-tab illustrated in the 1st chapter. Figure 4: Cross Tab Arrangement Generally, a cross-tab data arrangement has following components: Row Headers (1): This contains field values that will be placed on 1st column of every row. Column Headers (2): This contains field values that will be placed on 1st row of every column. Summary fields (3): All the cells covered by row headers and column headers are summary fields. Intellicus supports more than one field for summary. That means, for each intersection of a row and column you can have more than one cells of summary. Column Summary (4) and row summary: Generally, last row contains summary (sum or any other function specified) of values in the respective column. Similarly, last column would contain summary of values in respective row. Tip: You can have multiple row-headers, column-headers as well as summaries. For example, row headers may have Region and Sales Executives; column headers may have product line and products. Placing the cross-tab control on report A report may have one or more section. The way Intellicus Report Server treats report components depends on the section where they are placed. For example, a report component placed on detail section will be rendered every time a new record is processed. That would not be the case with a report component placed on page header. Tip: A cross-tab processes all the records in a record-set before it is rendered. Safest location to place a cross-tab is report header. Do not place a cross-tab in detail section, since detail section is rendered for each record, which results in a wrongly rendered cross-tab. However, if you have no choice, then an unbound cross-tab can be placed anywhere required. Here are the steps to place a cross-tab control on report using tool bar button. On Desktop studio, when a report is open, - Expand the section where you want to place a cross-tab. - On toolbar, click . - On the section where you want to place the cross-tab, click and mark top right corner of cross-tab, drag the mouse towards bottom right until you get right size of the cross-tab. - In Create Bound Cross-tab report? Click Yes to create bound cross-tab and No to create unbound cross-tab. The cross-tab has been placed. Building the cross-tab You will get meaningful information on a cross-tab only if you have selected right fields for row header, column header and summary field. Assuming that you have already placed a cross-tab control on the report and arranged for a data source all the activities related to building a cross-tab is carried out on Cross-tab Properties dialog box. They are: Placing field for column header, row header and summary field. Setting group preferences when a numeric type or date type value is selected for column header or row header. Setting the right summary function for summary field. Selecting suitable look and feel for the cross-tab. To get cross-tab properties dialog box, - Place the mouse pointer anywhere on the cross-tab control and right-click the mouse. A context menu appears. - Click the option Properties. Crosstab Properties dialog box opens. Placing field for column header The figure below and boxes next to it shows how to select a field for column header. Figure 5: Placing a field for the column header Placing field for row header The figure below and boxes next to it shows how to select a field for row header. 
Figure 6: Placing a field for the row header Numeric field as row header or column header You need to take little care when setting up a numeric field as row header or column header. If you place a numeric field as it is in a row header, you will get one row for each of the value of that field. For example, if you have a numeric value for Zones, then this may be fine. If you are looking for departmental expenditure by expenditure range, then this method would not work. You have to set expenditure range, for example, 0 – 1000, 1001 – 2000, etc. After selecting the right numeric field for column header or row header, do this: Figure 7: Numeric field as row or column header Important: When you set Numeric Group By, the cross-tab will display the starting number of the group. It will not indicate starting number and ending number of group. For example, for a group, 0 – 5, it will not show “ 0 – 5” in respective header. It will show “0”. To get value of your preference in column header / row header which has numeric group, you need to use formula field (calculated field). If there are no records representing a range, for example there is no data for group 6 – 9, that row / column will not be placed on cross-tab. Date field as row header or column header A date type field can be set as row header or column header. If you place a date field as it is in a row header, you will get one row for each date. It is possible to group date records by: - Day (Monday, Tuesday, etc) - Month (Jan, Feb, March, etc) - Quarter (Jan – March, April – June, July – Sep, Oct – Dec) - Year (actual year number) Figure 8: Date type field for row or column header When date is grouped, the respective row header or column header will show the start date of the group. For example, quarter January 2006 – March 2006 will be represented by “1/1/2006”. To make the best use of this feature, we suggest using Date Group By along with the most appropriate Date Format. For example, if you have grouped by Month, use Date format as MMM. Placing the field as summary field Figure 9: Placing field for summary Selecting the right summary function For numeric fields, generally sum function is selected. When you want summary on fields having type other than numeric, you may choose count. Titles The title set on Settings tab, or on Format tab for a row / column will appear on top – left cell of the cross-tab: If title on Settings tab is specified, then it will appear on cross-tab. If this is not specified, in that case, If title is specified after selecting column header, then it will appear on cross-tab. If this is not specified, in that case, If title is specified after selecting row header, then it will appear on cross-tab. Setting up cross-tab title (Format tab) Figure 10: Title for row / column header Basics of Look and Feel Appearance of cross-tab makes a lot of difference. You can select specific colors for row header, column header as well as other components of a cross-tab. You can also select one of the pre-designed color-theme too. Figure 11: Setting look and feel for the cross-tab Example Date on Row header This imaginary example represents expenditure by branch, on lighting and heating for Months January, February and March. 
Figure 12: Expenditure by branch All you need to have to get this report, is a record-set having fields: - BranchCode - Date of Bill - BillAmount (or amount paid, or similar) General steps in studio / arrangement of fields on cross-tab would be: - Set the appropriate connection, create an SQL and refresh fields. - Place Cross-tab component (preferably on Report Header section). - Place Date of Bill on Row header. - Set Date Group By as Month. - Set Date Format as MMM and Alignment as Left. - Place BranchCode on column header. - Place BillAmount (or amount paid, etc.) in summary field.
https://docs.intellicus.com/documentation/using-intellicus-19-1/studio-reports-19-1/your-first-cross-tab-19-1/
2021-09-17T02:59:43
CC-MAIN-2021-39
1631780054023.35
[array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Cross-Tab-Arrangement.png', 'Cross Tab Arrangement'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Placing-a-field-for-the-column-header.png', 'Placing field for column header'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Placing-a-field-for-the-row-header.png', 'Placing field for row header'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Numeric-field-as-row-or-column-header.png', 'Numeric field as row or column header'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Date-type-field-for-row-or-column-header.png', 'Date field as row or column header'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Title-for-row-column-header.png', 'cross-tab title'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Setting-look-and-feel-for-the-cross-tab.png', 'look and feel for the cross-tab'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Expenditure-by-branch.png', 'Expenditure by branch'], dtype=object) ]
docs.intellicus.com
#include <usServiceException.h> A service exception used to indicate that a service problem occurred. A ServiceException object is created by the framework or a service implementation to denote an exception condition in the service. An enum type is used to identify the exception type for future extendability. This exception conforms to the general purpose exception chaining mechanism. Definition at line 50 of file usServiceException.h. Definition at line 54 of file usServiceException.h. Creates a ServiceException with the specified message, type and exception cause. Definition at line 94 of file usServiceException.h. References operator<<(), US_Core_EXPORT, US_END_NAMESPACE, and US_PREPEND_NAMESPACE. Returns the type for this exception, or UNSPECIFIED if the type was unspecified or unknown.
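As a small illustration of the class described here, the sketch below throws and catches a ServiceException. It assumes the us namespace, the constructor taking a message and a type (as noted above), the UNSPECIFIED enumerator mentioned in the last line, and the usual std::exception what() interface; none of these beyond what the page states are guaranteed.

#include <usServiceException.h>

#include <iostream>

// Hypothetical registration helper that signals a service problem.
void RegisterService(bool configValid)
{
  if (!configValid)
  {
    throw us::ServiceException("service configuration is invalid",
                               us::ServiceException::UNSPECIFIED);
  }
}

int main()
{
  try
  {
    RegisterService(false);
  }
  catch (const us::ServiceException& e)
  {
    std::cerr << "Service problem: " << e.what() << std::endl;
  }
  return 0;
}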
https://docs.mitk.org/nightly/classus_1_1ServiceException.html
2021-09-17T03:53:46
CC-MAIN-2021-39
1631780054023.35
[]
docs.mitk.org
32 Test It is possible to load this portion of the framework without loading the rest of the framework. Use (require framework/test). Currently, the test engine has primitives for pushing buttons, setting check-boxes and choices, sending keystrokes, selecting menu items and clicking the mouse. Many functions that are also useful in application testing, such as traversing a tree of panels, getting the text from a canvas, determining if a window is shown, and so on, exist in GRacket. 32.1 Actions and completeness The actions associated with a testing primitive may not have finished when the primitive returns to its caller. Some actions may yield control before they can complete. For example, selecting “Save As...” from the “File” menu opens a dialog box and will not complete until the “OK” or “Cancel” button is pushed. However, all testing functions wait at least a minimum interval before returning to give the action a chance to finish. This interval controls the speed at which the test suite runs, and gives some slack time for events to complete. The default interval is 100 milliseconds. The interval can be queried or set with test:run-interval. A primitive action will not return until the run-interval has expired and the action has finished, raised an error, or yielded. The number of incomplete actions is given by test:number-pending-actions. Note: Once a primitive action is started, it is not possible to undo it or kill its remaining effect. Thus, it is not possible to write a utility that flushes the incomplete actions and resets number-pending-actions to zero. However, actions which do not complete right away often provide a way to cancel themselves. For example, many dialog boxes have a “Cancel” button which will terminate the action with no further effect. But this is accomplished by sending an additional action (the button push), not by undoing the original action. 32.2 Errors Errors in the primitive actions (which necessarily run in the handler thread) are caught and reraised in the calling thread. However, the primitive actions can only guarantee that the action has started, and they may return before the action has completed. As a consequence, an action may raise an error long after the function that started it has returned. In this case, the error is saved and reraised at the first opportunity (the next primitive action). The test engine keeps a buffer for one error, saving only the first error. Any subsequent errors are discarded. Reraising an error empties the buffer, allowing the next error to be saved. The function test:reraise-error reraises any pending errors. 32.3 Technical Issues 32.3.1 Active Frame The Self Test primitive actions all implicitly apply to the top-most (active) frame. 32.3.2 Thread Issues The code started by the primitive actions must run in the handler thread of the eventspace where the event takes place. As a result, the test suite that invokes the primitive actions must not run in that handler thread (or else some actions will deadlock). See make-eventspace for more info. 32.3.3 Window Manager (Unix only) In order for the Self Tester to work correctly, the window manager must set the keyboard focus to follow the active frame. This is the default behavior in Microsoft Windows and MacOS, but not in X windows. In X windows, you must explicitly tell your window manager to set the keyboard focus to the top-most frame, regardless of the position of the actual mouse. 32.4 Test Functions Note: To send the “Enter” key, use #\return, not #\newline. 
Under Mac OS, 'right corresponds to holding down the command modifier key while clicking and 'middle cannot be generated. Under Windows, 'middle can only be generated if the user has a three button mouse.
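A minimal sketch of a test that drives a dialog with the primitives discussed above follows; test:keystroke and test:button-push are assumed names for the keystroke and button-push primitives, while test:run-interval and test:reraise-error are taken directly from the text.

#lang racket
(require framework/test)

;; Remember: this code must NOT run in the eventspace's handler thread.
(test:run-interval 200)        ; give each action a little more slack time

(test:keystroke #\return)      ; send the Enter key (#\return, not #\newline)
(test:button-push "OK")        ; push the button labelled "OK" in the active frame
(test:reraise-error)           ; surface any error saved by an earlier action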
https://docs.racket-lang.org/framework/Test.html
2018-10-15T10:50:57
CC-MAIN-2018-43
1539583509170.2
[]
docs.racket-lang.org
Download GitHub Source Repository The source repository is available via GitHub. The current stable release is version 1.6 which can be obtained from: Refer to NEWS for a list of the latest changes, and be sure to read the Installation section for how to compile and install it. To receive notifications when new versions are released, subscribe to the meep-announce mailing list: Precompiled Packages for Ubuntu Precompiled packages of Meep 1.3 are available for Ubuntu. We recommend Ubuntu as Meep and all of its dependencies can be installed using just one line: sudo apt-get install meep h5utils You can also install the parallel version of Meep which is based on Open MPI using: sudo apt-get install meep-mpi-default Newer packages for Ubuntu and Debian are currently being prepared. Amazon Web Services (AWS) The latest stable version of Meep preinstalled on Ubuntu 16.04 can be accessed for free on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) as an Amazon Machine Image (AMI) provided by Simpetus.
https://meep.readthedocs.io/en/latest/Download/
2018-10-15T11:39:58
CC-MAIN-2018-43
1539583509170.2
[]
meep.readthedocs.io
4 Package Metadata Package metadata, including dependencies on other packages, is reported by an "info.rkt" module within the package. This module must be implemented in the info language. For example, a basic "info.rkt" file might be The following "info.rkt" fields are used by the package manager: When a package is a single collection package, its "info.rkt" file may specify additional fields that are used for the Scribble documentation system or other tools. Many of these fields are described in Controlling raco setup with "info.rkt" Files. collection — either 'multi to implement a multi-collection package or a string or 'use-pkg-name to implement a single-collection package. If collection is defined as a string, then the string is used as the name of the collection implemented by the package. If collection is defined as 'use-pkg-name, then the package name is used as the package’s collection name. Beware that omitting collection or defining it as 'use-pkg-name means that a package’s content effectively changes with the package’s name. A package’s content should normally be independent of the package’s name, and so defining collection to a string is preferable for a single-collection package. version — a version string. The default version of a package is "0.0". deps — a list of dependencies, where each dependency has one of the following forms: A string for a package source. - A list of the formwhere each keyword-and-spec has a distinct keyword in the form A version-string specifies a lower bound on an acceptable version of the needed package. A platform-spec indicates that the dependency applies only for platforms with a matching result from (system-type) when platforms-spec is a symbol or (path->string (system-library-subpath #f)) when platform-spec is a string or regular expression. See also matching-platform?. For example, platform-specific binaries can be placed into their own packages, with one separate package and one dependency for each supported platform. - A list of the formwhich is deprecated and equivalent to Each element of the deps list determines a dependency on the package whose name is inferred from the package source (i.e., dependencies are on package names, not package sources), while the package source indicates where to get the package if needed to satisfy the dependency. Using the package name "racket" specifies a dependency on the Racket run-time system, which is potentially useful when a version is included in the dependency. For most purposes, it’s better to specify a versioned dependency on "base", instead. See also Package Dependency Checking. build-deps — like deps, but for dependencies that can be dropped in a binary package, which does not include sources; see Source, Binary, and Built Packages and Package Dependency Checking. The build-deps and deps lists are appended, while raco pkg create strips away build-deps when converting a package for --binary mode. implies — a list of strings and 'core. Each string refers to a package listed in deps and indicates that a dependency on the current package counts as a dependency on the named package; for example, the gui package is defined to ensure access to all of the libraries provided by gui-lib, so the "info.rkt" file of gui lists "gui-lib" in implies. Packages listed in implies list are treated specially by updating: implied packages are automatically updated whenever the implying package is updated. 
The special value 'core is intended for use by an appropriate base package to declare it as the representative of core Racket libraries. update-implies — a list of strings. Each string refers to a package listed in deps or build-deps and indicates that the implied packages are automatically updated whenever the implying package is updated. setup-collects — a list of path strings and/or lists of path strings, which are used as collection names to set up via raco setup after the package is installed, or 'all to indicate that all collections need to be setup. By default, only collections included in the package are set up (plus collections for global documentation indexes and links). distribution-preference — either 'source, 'built, or 'binary, indicating the most suitable distribution mode for the package (but not a guarantee that it will be distributed as such). Absence of this definition implies 'binary if the package has no ".rkt" or ".scrbl" files other than "info.rkt" files, and if it has any ".so", ".dll", ".dylib", or ".framework" files; otherwise, absence implies 'built. package-content-state — a list of two items; the first item is 'binary, 'binary-lib, or 'built, and the second item is either #f or a string to represent a Racket version for compiled content. This information is used by raco pkg install or raco pkg update with --source, --binary, or --binary-lib to ensure that the package content is consistent with the requested conversion; see also Source, Binary, and Built Packages. Absence of this definition is treated the same as (list 'source #f). Changed in version 6.1.0.5: Added update-implies. Changed in version 6.1.1.6: Added distribution-preference.
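The basic "info.rkt" example referenced at the top of this entry appears to have been lost during extraction; a minimal sketch consistent with the fields described above might look like this (the package and dependency names are placeholders):

#lang info
(define collection "my-pkg")               ; single-collection package
(define version "0.0")
(define deps '("base"))
(define build-deps '("racket-doc" "scribble-lib")) ; droppable in a binary package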
https://docs.racket-lang.org/pkg/metadata.html
2018-10-15T10:44:47
CC-MAIN-2018-43
1539583509170.2
[]
docs.racket-lang.org
What is a JT file? JT (Jupiter Tessellation) is an efficient, industry-focused and flexible ISO-standardized 3D data format developed by Siemens PLM Software. Mechanical CAD domains of Aerospace, automotive industry, and Heavy Equipment use JT as their most leading 3D visualization format. JT format is a scene graph that supports the attributes and nodes that are CAD specific. Sophisticated compression techniques are used to store facet data (triangles). This format is structured to support visual attributes, product and manufacturing information (PMI), and Metadata. There is a good support for asynchronous streaming of content. In heavy mechanical industry, professionals use JT file in their CAD solutions and product lifecycle management (PLM) software programs to examine the geometry of complicated goods. As JT supports nearly all important 3D CAD formats its assembly can deal with a variety of combination which is known as “multi-CAD”. This multi-CAD assembly is always well managed and up-to-date because synchronization among native CAD product description files with their associated JT files takes place automatically. JT files are originally lightweight, so considered to be suitable for internet collaborations. Companies Collaborate through sending 3D visualizations over the media much more easily as compared to “heavy” original CAD files. In addition, JT files ensures many security feature that make intellectual property sharing more secure. History Engineering Animation, Inc. and Hewlett Packard were the original designers of JT, who developed that format as the Direct Model toolkit. After EAI was acquired by UGS Corp. JT became a part of UGS’s suite. Early in 2007, UGS announced JT as their master 3D format. In the same year, Siemens AG purchased UGS and turn out to be Siemens PLM Software. Siemens uses JT as the common interoperability format and data archival format. In 2009, ISO accepted JT specification for publication as an ISO Publicly Available Specification (PAS).In the middle of 2010, ProSTEP iViP announced that at industrial level, JT and STEP AP 242 XML can be used together to achieve the maximum advantage in data exchange scenarios. In 2012, JT has been officially declared as an ISO-standardized (ISO 14306:2012 (ISO JT V1)) 3D visualization format. JT File Format All objects in the JT format are represented through an object identifier and references among objects are handled through referenced object’s identifier. The integrity of these object references can be maintained through pointers unswizzling/swizzling. A JT file is arranged as a series of blocks and Header block is always the first block of data in the file. A series of data segment and a TOC segment immediately follow the header block. The one Data segment (6 LSG Segment) possesses a reference compliant JT file always exists. TOC Segment contains the location information of all other Data Segments of that file. File Header File Header is the first block in the JT file’s data hierarchy. Versioning information and TOC location information is enclosed within the header which facilitates loaders in file reading. The file header contents are arranged as follows. TOC Segment The TOC Segment must exist within a file and contains identification and location information of all other data segments. The actual location of the TOC Segment is specified by the TOC Offset field of the file header. Each individually addressable Data Segment is represented by TOC entry in a TOC segment. 
Data Segment A JT file defines all stored data within Data Segments. Some Data Segments may compress all the data bytes of information remaining within the segment. Data segments have the following structure: The following table describes the different types of data segments: Compression Transmission and storage requirements of 3D models are demanding, so JT files may take advantage of compression. A JT data model may be composed of different files using different compression techniques, but the compression process is transparent to the JT data user. So far, the JT Open Toolkit (as a standard) and advanced compression are the two compression techniques used by JT files. The JT Open Toolkit employs a simple, lossless compression algorithm, while advanced compression employs a more refined, domain-specific compression technique that leads to lossy geometry compression. Client applications prefer advanced compression rather than standard compression, as advanced compression yields fairly high compression ratios. Backward compatibility with ordinary JT file viewing applications is maintained through the provision of standard compression. The compression form must be compatible with the JT file format version, which can be seen when a JT file is opened in a text editor, enclosed within the ASCII header information.
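The text above notes that a JT file's version information sits in an ASCII header that is visible even in a text editor. A minimal Python sketch of peeking at it is shown below; the fixed 80-byte length of the version field is an assumption about the common JT header layout, not something stated on this page.

def read_jt_version(path, length=80):
    """Return the ASCII version string from the start of a JT file (sketch)."""
    with open(path, "rb") as f:
        header = f.read(length)
    # Trim padding/NUL bytes and decode defensively.
    return header.split(b"\x00")[0].decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    print(read_jt_version("part.jt"))  # e.g. "Version 9.5 JT ..."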
https://docs.fileformat.com/3d/jt/
2021-05-06T04:42:36
CC-MAIN-2021-21
1620243988725.79
[array(['../JT-1.png', 'JT File Format'], dtype=object) array(['../JT-2.png', 'JT Data Segment alt text'], dtype=object)]
docs.fileformat.com
If you are using PostgreSQL, you have to create a database for Plume. service postgresql start su - postgres createuser -d -P plume createdb -O plume plume Before starting Plume, you’ll need to create a configuration file, called .env. This file should be in the same directory as the one in which you will start Plume ( ~/Plume, if you followed the previous instructions). If you are installing from source, you can use cp .env.sample .env to generate it. Here is a sample of what you should put inside for GNU/Linux and Mac OS X systems. # The address of the database # (replace USER, PASSWORD, PORT and DATABASE_NAME with your values) # # If you are using SQlite, use the full path of the database file (`plume.db` for instance) # Windows user's paths are backslashes, change them to forward slashes #DATABASE_URL=/etc/path/to/Plume/plume.db DATABASE_URL=postgres://USER:PASSWORD@IP:PORT/DATABASE_NAME # For PostgreSQL: migrations/postgres # For SQlite: migrations/sqlite MIGRATION_DIRECTORY=migrations/postgres # The domain on which your instance will be available BASE_URL=plu.me # Secret key used for private cookies and CSRF protection # You can generate one with `openssl rand -base64 32` ROCKET_SECRET_KEY= # Mail settings # If you don't want to setup a mail server and/or address for plume # and don't plan to use the "password reset" feature, # you can comment these lines. MAIL_SERVER=smtp.example.org MAIL_USER=example MAIL_PASSWORD=123456 MAIL_HELO_NAME=example.org [email protected] For more information about what you can put in your .env, see the documentation about environment variables. Now we need to run migrations. Migrations are scripts used to update the database. To run the migrations, you can do for GNU/Linux and Mac OS X: plm migration run If you are using Windows and DATABASE of sqlite, you will need to copy the sqlite3.dll from “C:\ProgramData\chocolatey\lib\SQLite\tools” to where plm.exe and plume.exe were compiled: copy "C:\ProgramData\chocolatey\lib\SQLite\tools\sqlite3.dll" "C:\Users\%USERNAME%\.cargo\bin\sqlite3.dll" Now you may run the migrations: plm migration run Migrations should be run after each update. When in doubt, run them. You will also need to initialise search index: plm search init After that, you’ll need to setup your instance, and the admin’s account. plm instance new plm users new --admin Note if you want to use LDAP: you should still create an administrator account, at least to give admin rights to your own LDAP account once it’s registered. On Windows, there might be an error creating the admin user. To get around this, you need to run: plm users new --admin -n "adminusername" -N "Human Readable Admin Name" -b "Biography of Admin here" -p hackmeplease For more information about these commands, and the arguments you can give them, check out their documentation. Now that Plume is configured, if you are in a production environment you probably want to configure your init system to make it easier to manage. Configure init system
https://docs.joinplu.me/installation/config/
2021-05-06T03:02:42
CC-MAIN-2021-21
1620243988725.79
[]
docs.joinplu.me
Do You Offer an Affiliate Program? MPG is a product that offers one of the most rewarding Affiliate Programs in the WordPress plugin developer community! The only requirement is to have a premium license of MPG. Once you are on a premium plan, you will see the “Affiliation” tab on the MPG plugin side menu where you may click to apply to our Affiliate Program. Program Summary - 20% commission when a customer purchases a new license. - Get commission for automated subscription renewals. - 30-day tracking cookie after the first visit to maximize earning potential. - $100 minimum payout amount. - Payouts are in USD and processed monthly via PayPal. - Since we hold 30 days for potential refund requests, we only pay commission for subscriptions with more than 30 days.
https://docs.mpgwp.com/article/19-do-you-offer-affiliate-program
2021-05-06T03:07:11
CC-MAIN-2021-21
1620243988725.79
[]
docs.mpgwp.com
Position of the transform relative to the parent transform. If the transform has no parent, it is the same as Transform.position. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Example() { // Move the object to the same position as the parent: transform.localPosition = new Vector3(0, 0, 0); // Get the y component of the position relative to the parent // and print it to the Console print(transform.localPosition.y); } } Note that the parent transform's world rotation and scale are applied to the local position when calculating the world position. This means that while 1 unit in Transform.position is always 1 unit, 1 unit in Transform.localPosition will get scaled by the scale of all ancestors.
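The note about parent scale is easy to demonstrate. In the sketch below (assuming this object's parent has a uniform scale of 2 and no rotation), a localPosition of 1 unit ends up 2 world units away from the parent:

using UnityEngine;

public class LocalVsWorld : MonoBehaviour
{
    void Start()
    {
        // Place this object 1 unit along the parent's local X axis.
        transform.localPosition = new Vector3(1, 0, 0);

        // With the parent scaled by 2 (and unrotated), the world position is
        // parent.position + (2, 0, 0) -- the local offset scaled by the parent.
        print(transform.position);
    }
}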
https://docs.unity3d.com/kr/2018.1/ScriptReference/Transform-localPosition.html
2021-05-06T04:45:50
CC-MAIN-2021-21
1620243988725.79
[]
docs.unity3d.com
Field descriptions for the IDisplayState interface:
- The index of the currently selected match
- An error message (used for bad regex syntax)
- What should we include when we search?
- Is the filters view open?
- Should the focus be forced into the input on the next render?
- The query constructed from the text and the case/regex flags
- Whether or not the replace entry row is visible
- Whether or not the replace input is currently focused
- The text in the replace entry
- Whether or not the search input is currently focused
- The text in the search entry
- The total number of matches found in the document
- Should the search string be treated as a RegExp?
- Should the search be case sensitive?
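Read together, these descriptions sketch the shape of the IDisplayState interface. The property names and types below are assumptions inferred from the descriptions, not copied from the JupyterLab source:

// Sketch only -- names/types inferred from the field descriptions above.
interface IDisplayState {
  caseSensitive: boolean;            // should the search be case sensitive?
  useRegex: boolean;                 // treat the search string as a RegExp?
  searchText: string;                // the text in the search entry
  replaceText: string;               // the text in the replace entry
  searchInputFocused: boolean;       // is the search input currently focused?
  replaceInputFocused: boolean;      // is the replace input currently focused?
  replaceEntryShown: boolean;        // is the replace entry row visible?
  filters: Record<string, boolean>;  // what should we include when we search?
  filtersOpen: boolean;              // is the filters view open?
  forceFocus: boolean;               // force focus into the input on next render?
  query: RegExp | null;              // built from the text and case/regex flags
  errorMessage: string;              // used for bad regex syntax
  currentIndex: number | null;       // index of the currently selected match
  totalMatches: number | null;       // total matches found in the document
}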
https://jupyterlab.readthedocs.io/en/stable/api/interfaces/documentsearch.idisplaystate.html
2021-05-06T04:36:00
CC-MAIN-2021-21
1620243988725.79
[]
jupyterlab.readthedocs.io
Learn how to setup a Bugsnag integration with Bitbucket Issues. The Bitbucket Issues integration allows an issue to be created in Bitbucket for errors that are reported to Bugsnag. In Bugsnag, set up the Bitbucket Issues integration from Project Settings > Issue tracker, then select Bitbucket Issues from the Available integrations section. Multiple Bitbucket integrations can be configured for Bugsnag projects which allows different issues to be created in different repositories. to allow Bugsnag to automatically create an issue. Filters can be configured to define which errors should automatically create an issue in Bitbucket Issues.. The Bitbucket Issues integration has two way sync capabilities which means that errors in Bugsnag can be kept in sync with the linked issue in Bitbucket. To enable two-way sync select Two-way sync and enable the automations you require. The first two settings define the state that the linked Bitbucket issue should be transitioned to when an error in Bugsnag is marked as fixed or when it is reopened. The last two settings define the behavior when a Bitbucket issue transitions to or away from certain states. With the default configuration a linked Bugsnag error will be marked as fixed when the Bitbucket issue is transitioned to either Resolved or Closed. Correspondingly when a Bitbucket issue is transitioned away from Resolved or Closed to any other state, the linked Bugsnag error will be reopened. Multiple states can be selected in both lists. If your server’s security policy denies access from external IP addresses and websites, you will need to allow access to Bugsnag’s IP addresses below.
https://docs.bugsnag.com/product/integrations/issue-tracker/bitbucket-issues/
2021-05-06T04:12:41
CC-MAIN-2021-21
1620243988725.79
[]
docs.bugsnag.com
Chattermill gets better with more of your colleagues on board 🤝 To invite a colleague to Chattermill, go to the widget in the bottom left hand corner, and select Team Management. Here you can enter the email addresses of the users you'd like to add, and select if you want them to be an admin or a normal user. Admins are able to invite other people to Chattermill, and can edit theme tags on comments. Your colleague will then receive an email inviting them to the project. If they don't have an existing Chattermill account, we will ask them to create one. Please note: invites are project specific. If your company has multiple Chattermill projects (e.g. a Main account and a Competitor account) the user will need to be invited to each one separately.
https://docs.chattermill.com/en/articles/2085181-how-to-invite-my-colleagues-to-chattermill
2021-05-06T02:44:32
CC-MAIN-2021-21
1620243988725.79
[]
docs.chattermill.com
Unable To Verify The First Certificate Error If your migration is failing with a "Fetch Failed Error: unable to verify the first certificate" message, then you may have a certificate error on the other server. First, if you have changed your certificate on your FileMaker Server, you will need to restart Otto, so make sure you do that first. What's Happening? Otto is reaching out to the server to get a copy or a clone of the file, and it can't make a secure connection because it can't verify that the SSL certificate is valid. Specifically, it can't verify the first certificate in the chain of certificates. What's the Cause? This is usually a problem with the Intermediate Bundle Cert that was installed along with the main cert on your FileMaker server. Make sure you have the correct files from your certificate authority and that the intermediate bundle is up to date and correct. Last Resort This is NOT RECOMMENDED as it does reduce your security a bit, but you can turn off this check in Otto by setting an ENV variable. Add a line to your .env file like this. NODE_TLS_REJECT_UNAUTHORIZED=0
https://docs.ottofms.info/article/683-unable-to-verify-the-first-certificate-error
2021-05-06T03:10:52
CC-MAIN-2021-21
1620243988725.79
[]
docs.ottofms.info
Dexter Industries BrickPi3¶ BrickPi3 is a LEGO MINDSTORMS compatible daughterboard for Raspberry Pi produced by Dexter Industries. Setup¶ Since the BrickPi3 does not have an EEPROM to electronically identify it, some manual configuration is required to enable it. This is done by editing the config.txt file in the boot partition: # uncomment if you are using BrickPi3 dtoverlay=brickpi3 It is also possible to stack more than one BrickPi3 on one Raspberry Pi. In order for this to work, the ID for each BrickPi3 needs to be configured in config.txt: # uncomment only if you are stacking multiple BrickPi3s # replace 0123... with actual id number of each BrickPi3 dtparam=id1=0123456789ABCDEF0123456789ABCDEF dtparam=id2=0123456789ABCDEF0123456789ABCDEF #dtparam=id3=0123456789ABCDEF0123456789ABCDEF #dtparam=id4=0123456789ABCDEF0123456789ABCDEF The ID can be found by first connecting each board individually and running ev3dev-sysinfo. Currently, ev3dev only supports stacking up to 4 BrickPi3s (although technically more is possible). Tip The ID configuration is not needed when using only one BrickPi3. Power¶ BrickPi3 is powered from 8 AA batteries (nominally 12V). It is possible to power the Raspberry Pi + BrickPi3 via USB. However, the motors will not move because the motor controller chips are connected directly to the battery voltage. Warning When powering BrickPi3 from USB, it will shutdown because of low battery. See. We are working on a workaround. More information is available in the hardware driver documentation. Sensors and Input Ports¶ A major difference compared to LEGO MINDSTORMS EV3 is that the BrickPi3 cannot automatically detect sensors. You must manually configure each input port to tell it what kind of sensor is attached. Many LEGO MINDSTORMS sensors will work with BrickPi3. There are some caveats though. - A small number of NXT/I2C sensors require 9V (battery voltage) on input port pin 1 (e.g. the LEGO NXT Ultrasonic sensor and the Codatex RFID sensor). BrickPi33 firmware). More information is available in the hardware driver documentation. Motors and Output Ports¶ A major difference compared to LEGO MINDSTORMS EV3 is that the BrickPi3 cannot automatically detect motors. By default, each output port is configured for NXT motors. Passive braking is not implemented in the BrickPi33 has one LED that can be used for status indication. More information is available in the hardware driver documentation There is an additional LED for power indication. It is not user controllable. Sound¶ BrickPi3 does not have any sound capabilities. However, the headphone jack on the Raspberry Pi can be used for sound with an external speaker. The sound driver is not enabled by default and must be turned on by editing config.txt. Display¶ BrickPi3 does not have a display. However, it is possible to use the HDMI (or NTSC) connector on the Raspberry Pi to attach a display. Bluetooth¶ BrickPi3 does not have Bluetooth capabilities. The built-in Bluetooth on Raspberry Pi 3 Model B and Raspberry Pi Zero W can be used. A USB Bluetooth dongle can be used if needed. Wi-Fi¶ BrickPi.
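The sound note above says the Raspberry Pi's audio driver has to be turned on by editing config.txt. On a stock Raspberry Pi that is normally the line below; this is an assumption here, so check the ev3dev sound documentation for the exact option:

# enable the Raspberry Pi on-board audio (headphone jack)
dtparam=audio=on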
http://docs.ev3dev.org/en/ev3dev-stretch/platforms/brickpi3.html
2021-05-06T04:05:04
CC-MAIN-2021-21
1620243988725.79
[]
docs.ev3dev.org
. InAppBilling InAppBilling now includes support for the Samsung Galaxy Store In-App Purchases. This builds upon the existing support for Google Play, Apple, Amazon and Huawei, so you have access to all the majors stores through one API. We have added support for Apple AppStore purchases from macOS. So you can now support macOS, iOS and tvOS all through this extension. Additionally Google Play Billing has been updated to the latest release (v3.0.2). Get the best in-app monetisation extension here. Push Notifications OneSignal has been updated to the latest SDK which brings a raft of improvements to the platform under the hood. This update includes the following versions: - iOS v3.1.1 - Android v4.1.0 Facebook has released a major update to their SDK (v9.0.0). You probably have received a message along the following lines: Your current SDK is deprecated and will no longer be supported by Facebook. We will continue to allow API calls from your SDK for a transition period of two years. Starting January 20, 2023 we will fail all calls from deprecated SDKs. We encourage you to upgrade to v9.0 as soon as possible to avoid disruption to your application and to access all the benefits of our newest SDK. This is a change as all previous versions of the SDK will stop working after the specified date. As part of this update Facebook has introduced the concept of a “limited login” that allows developers to signal that a login is limited in terms of tracking users. Unity Plugins Several of our extensions are now available as Unity Plugins in the Unity Asset Store. This makes them extremely easy to integrate into your Unity project. Currently we have the following plugins available: Check them out in the store. If there are any you think would be useful to your Unity project please let us know. Note: If you already have a subscription to the AIR version of these plugins then you have access to the Unity Plugin for free. As always, if you have any native development needs for AIR, Unity, Flutter or Haxe, please feel free to contact us at [email protected].
https://docs.airnativeextensions.com/news/2021-02/
2021-05-06T03:22:56
CC-MAIN-2021-21
1620243988725.79
[array(['/assets/images/airpackagemanager-765f6711e71d9c6e836ef114e816207f.png', None], dtype=object) array(['/assets/images/samsung-store-c54776645b6b31bdc6accb9469dd84a4.png', None], dtype=object) array(['/assets/images/onesignal-8a2164183fb5f4c2d6a36eae743503bb.png', None], dtype=object) array(['/assets/images/facebook-9383166add6e07a21a86a9ebfd5ac5a4.png', None], dtype=object) array(['/assets/images/unity-e1662b27d71619e2df216ff02bb0844d.png', None], dtype=object) ]
docs.airnativeextensions.com
MARCO!!!! … POLO!!! That’s right, the official name of Venice’s airport is Marco Polo, named after the famous Venetian native son known for being a well-traveled explorer and trader. Luckily for us, we don’t have to take years long journeys to get from one place to the other now, we can jet in and out from his airport. Flying into Venice through his airport will afford you one of the most marvelous views of your life. If you’re sure to catch a seat on the right side of the plane, towards the front, you may catch a glimpse of one of the most beautiful cities in the world from a birds eye vantage point. It’s not a huge airport, one terminal split into only about 35 gates, but it understandably serves a lot of tourists heading on their Italian holidays. Lots of european airlines, and SkyTeam airlines. The airport itself sits about 5 miles from Venice’s city center, and can be reached by ACTV bus, Aliguna ferry or water taxi. Careful, water taxis alone can be very expensive (but may be a little faster). Remember: ✈️ = o sole mio ✈️✈️✈️✈️✈️ = volare Convenience to the city: ✈️✈️ (mainly because of the large sea in between airport and city, understandable though) Ease of navigating through terminals: ✈️✈️✈️ (only one terminal) Dining: ✈️✈️ (all italian, but extra points for a gelateria on site) Bathrooms: ✈️✈️✈️ (fairly clean) Charging stations/wifi: ✈️✈️✈️ (free wifi available) Amenities: ✈️✈️✈️✈️ (Two lounges. For being a smaller airport, there’s a lot of designer shops: Bvlgari, Versace, Ferragamo, Valentino — hey, you’re in Italy, what do you expect?)
https://traveling-docs.com/2018/03/21/know-before-you-go-venice-vce/
2021-05-06T04:05:46
CC-MAIN-2021-21
1620243988725.79
[array(['https://travelingdocs.files.wordpress.com/2018/03/img_1280.jpg?w=676', 'IMG_1280.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/03/img_1288.jpg?w=676', 'IMG_1288.jpg'], dtype=object) ]
traveling-docs.com
GutenGeek allows you to add new font icons (Fontastic, IcoMoon, Font Awesome, …) with ease. Access GutenGeek > Custom Icons and add a new one. Important: Please upload the SVG package of the font icon. Once done, you can see a list of icons from the imported font icon package. How to use a Font Icon? The font icon will be added as a content element that you can add to any block. You can configure the style of the font icon: size, background, border.
https://docs.gutengeek.com/faq/import-font-icons/
2021-05-06T03:14:14
CC-MAIN-2021-21
1620243988725.79
[array(['/wp-content/uploads/docs/import-font-icons.png', None], dtype=object) array(['/wp-content/uploads/docs/custom-font-icon-import.png', None], dtype=object) array(['/wp-content/uploads/docs/add-font-icon-block.png', None], dtype=object) array(['/wp-content/uploads/docs/icon-settings.png', None], dtype=object)]
docs.gutengeek.com
Static XBPS In rare cases, it is possible to break the system sufficiently that XBPS can no longer function. This usually happens while trying to do unsupported things with libc, but can also happen when an update contains a corrupt glibc archive or otherwise fails to unpack and configure fully. Another issue that can present itself is in systems with an XBPS version before 0.54 (released June 2019). These systems will be impossible to update from the official repositories using the regular update procedure, due to a change in the compression format used for repository data, which was made in March 2020. In these cases it is possible to recover your system with a separate, statically compiled copy of XBPS. Obtaining static XBPS Statically compiled versions of XBPS are available on all mirrors in the static/ directory. The link below points to the static copies on the primary mirror in Germany: Download and unpack the latest version, or the version that matches the broken copy on your system (with a preference for the latest copy). Using static XBPS The tools in the static set are identical to the normal ones found on most systems. The only distinction is that these tools are statically linked to the musl C library, and should work on systems where nothing else does. On systems where the platform can no longer boot, it is recommended to chroot in with Void installation media and use the static tools from there, as it is unlikely that even a shell will work correctly on the target system. When using static XBPS with a glibc installation, the XBPS_ARCH environment variable needs to be set.
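For example, a recovery run on a glibc system might look like the sketch below. The binary name, unpack location, and x86_64 architecture are illustrative; -r (root directory) and -Su (sync and update) are the usual xbps-install options:

# run from the directory where the static tarball was unpacked
XBPS_ARCH=x86_64 ./usr/bin/xbps-install.static -r / -Su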
https://docs.voidlinux.org/xbps/troubleshooting/static.html
2021-05-06T02:48:42
CC-MAIN-2021-21
1620243988725.79
[]
docs.voidlinux.org
As a system administrator, you can create global tenant roles and publish them to one or more organizations in your cloud. You can edit and delete existing global tenant roles. You can unpublish global tenant roles from individual organizations in your cloud. After the initial VMware Cloud Director installation and setup, the system contains a set of predefined global tenant roles that are published to all organizations. See Predefined Roles and Their Rights.
https://docs.vmware.com/en/VMware-Cloud-Director/10.2/VMware-Cloud-Director-Service-Provider-Admin-Portal-Guide/GUID-778EEADA-D287-4634-BC6A-7157FAFA60BE.html
2021-05-06T05:01:59
CC-MAIN-2021-21
1620243988725.79
[]
docs.vmware.com
Once a SD-WAN Edge is provisioned with Analytics, the Analytics functionality collects data (application-specific Analytics or application and branch Analytics). The collected Analytics data are then sent directly from the SD-WAN Edge to the Cloud Analytics Engine. Operator Super User, Operator Standard Admin, Enterprise Super User, Enterprise Standard admin, Partner Super User, and Partner Standard Admin can view the Analytics data for a specific customer in the Analytics portal (). To view the Analytics data, perform the following steps. Prerequisites - Ensure that all the necessary system properties to enable Analytics are properly set in the SD-WAN Orchestrator. For more information, see Enable VMware Edge Network Intelligence on a VMware SD-WAN Orchestrator. - Ensure that you have access to the Analytics portal to view the Analytics data. Procedure - In the Enterprise portal, click Open New Orchestrator UI. - Click Launch New Orchestrator UI in the pop-up window. The UI opens in a new tab displaying the monitoring options. - In the Monitor Customers tab, click on the Customer name link for which you want to view the Analytics data. - For a selected customer, to view Application Analytics data, click Application Analytics. - To view Branch Analytics data, click Branch Analytics.When the Analytics menu is clicked, the Analytics portal will be opened in a new browser tab, where you can view the Analytics data (Application and Branch) of all the Edges configured for a selected customer. Note that the Browser settings may prevent this action as popups. You need to allow it when browser shows notification.
https://docs.vmware.com/en/VMware-SD-WAN/4.2/VMware-SD-WAN-Edge-Network-Intelligence-Configuration-Guide/GUID-03DDB5EE-9408-45DC-8223-5C983DD06BF7.html
2021-05-06T04:34:34
CC-MAIN-2021-21
1620243988725.79
[]
docs.vmware.com
Visionect Software Suite¶ The Visionect Software Suite is at the heart of our technology. The Suite acts as a server for thin clients and hosts an entity (session) for each Visionect client device. It makes sure that e-paper displays connected to client devices display the correct content. Around that we’ve built a host of features that enable you to deploy, control, manage and integrate the Software Suite in your solutions. Note The Visionect Software Suite was previously known as Visionect Server. - Components - Using the Visionect Software Suite - Display tiling - Remote firmware upgrades - API - Networking and Security - Requirements - Installation - Running Visionect Software Suite in production
http://docs.visionect.com/VisionectSoftwareSuite/index.html
2018-08-14T13:55:06
CC-MAIN-2018-34
1534221209040.29
[]
docs.visionect.com
Sometimes a single instruction has multiple alternative sets of possible operands. On the 68000, for example, a logical-or instruction can combine a register or an immediate value into memory, or it can combine any kind of operand into a register, but it cannot combine one memory location into another. So the first alternative for the 68000’s logical-or could be written as "+m" (output) : "ir" (input). The second could be "+r" (output): "irm" (input). However, the fact that two memory locations cannot be used in a single instruction prevents simply using "+rm" (output) : "irm" (input). Using multi-alternatives, this might be written as "+m,r" (output) : "ir,irm" (input). This describes all the available alternatives to the compiler, allowing it to choose the most efficient one for the current conditions. There is no way within the template to determine which alternative was chosen. However, you may be able to wrap your asm statements with builtins such as __builtin_constant_p to achieve the desired results. © Free Software Foundation Licensed under the GNU Free Documentation License, Version 1.3.
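Putting the constraint strings above into an actual asm statement gives something like the sketch below; the or.l mnemonic and operand order are illustrative m68k syntax, while the constraint strings are exactly the ones discussed:

long logical_or(long dst, long src)
{
    /* Two alternatives: (memory dst, reg/imm src) or (register dst, any src). */
    asm ("or.l %1,%0"
         : "+m,r" (dst)      /* output: memory in alternative 1, register in alternative 2 */
         : "ir,irm" (src));  /* input constraints matching each alternative */
    return dst;
}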
http://docs.w3cub.com/gcc~7/multi_002dalternative/
2018-08-14T13:14:30
CC-MAIN-2018-34
1534221209040.29
[]
docs.w3cub.com
Transaction Support With Amazon Cloud Directory, it’s often necessary to add new objects or add relationships between new objects and existing objects to reflect changes in a real-world hierarchy. Batch operations can make directory tasks like these easier to manage by providing the following benefits: Batch operations can minimize the number of round trips required to write and read objects to and from your directory, improving the overall performance of your application. Batch write provides the SQL database-equivalent transaction semantics. All operations successfully complete, or if any operation has a failure then none of them are applied. Using batch reference you can create an object and use a reference to the new object for further action such as adding it to a relationship, reducing overhead of using a read operation before a write operation. BatchWrite Use BatchWrite operations to perform multiple write operations on a directory. All operations in batch write are executed sequentially. It works similar to SQL database transactions. If one of the operation inside batch write fails, the entire batch write has no effect on the directory. If a batch write fails, a batch write exception occurs. The exception contains the index of the operation that failed along with exception type and message. This information can help you identify the root cause for the failure. The following API operations are supported as part of batch write: Batch Reference Name Batch reference names are supported only for batch writes when you need to refer to an object as part of the intermediate batch operation. For example, suppose that as part of a given batch write, 10 different objects are being detached and are attached to a different part of the directory. Without batch reference, you would have to read all 10 object references and provide them as input during reattachment as part of the batch write. You can use a batch reference to identify the detached resource during attachment. A batch reference can be any regular string prefixed with the number sign / hashtag symbol (#). For example, in the following code sample, an object with link name "this-is-a-typo" is being detached from root with a batch reference name "ref" . Later the same object is attached to the root with the link name as "correct-link-name" . The object is identified with the child reference set to batch reference. Without the batch reference, you would initially need to get the objectIdentifier that is being detached and provide that in the child reference during attachment. You can use a batch reference name to avoid this extra read. BatchDetachObject batchDetach = new BatchDetachObject() .withBatchReferenceName("ref") .withLinkName("this-is-a-typo") .withParentReference(new ObjectReference().withSelector("/")); BatchAttachObject batchAttach = new BatchAttachObject() .withParentReference(new ObjectReference().withSelector("/")) .withChildReference(new ObjectReference().withSelector("#ref")) .withLinkName("correct-link-name"); BatchWriteRequest batchWrite = new BatchWriteRequest() .withDirectoryArn(directoryArn) .withOperations(new ArrayList(Arrays.asList(batchDetach, batchAttach))); BatchRead Use BatchRead operations to perform multiple read operations on a directory. For example, in the following code sample, children of object with reference “/managers” is being read along with attributes of object with reference “/managers/bob” in a single batch read. 
BatchListObjectChildren listObjectChildrenRequest = new BatchListObjectChildren() .withObjectReference(new ObjectReference().withSelector("/managers")); BatchListObjectAttributes listObjectAttributesRequest = new BatchListObjectAttributes() .withObjectReference(new ObjectReference().withSelector("/managers/bob")); BatchReadRequest batchRead = new BatchReadRequest() .withConsistencyLevel(ConsistencyLevel.SERIALIZABLE) .withDirectoryArn(directoryArn) .withOperations(new ArrayList(Arrays.asList(listObjectChildrenRequest, listObjectAttributesRequest))); BatchReadResult result = cloudDirectoryClient.batchRead(batchRead); BatchRead supports the following API operations: Limits on Batch operations Each request to the server (including batched requests) has a maximum number of resources that can be operated on, regardless of the number of operations in the request. This allows you to compose batch requests with high flexibility as long as you stay within the resource maximums. For more information on resource maximums, see Amazon Cloud Directory Limits. Limits are calculated by summing the writes or reads for each single operation inside the Batch. For example, the read operation limit is currently 200 objects per API call. Let’s say you want to compose a batch that adds 9 ListObjectChildren API calls and each call requires reading 20 objects . Since the total number of read objects (9 x 20 = 180) does not exceed 200, the batch operation would succeed. The same concept applies with calculating write operations. For example, the write operation limit is currently 20. If you set up your batch to add 2 UpdateObjectAttributes API calls with 9 write operations each, this would also succeed. In either case, should the batch operation exceed the limit, then the operation will fail and a LimitExceededException will be thrown. The correct way to calculate the number of objects that are included within a batch is to include both the actual node or leaf_node objects and if using a path based approach to iterate your directory tree, you also need to include each path that is iterated on, within the batch. For example, as shown in the following illustration of a basic directory tree, to read an attribute value for the object 003, the total read count of objects would be three. The traversing of reads down the tree works like this: 1. Read object 001 object to determine the path to object 003 2. Go down Path 2 3. Read object 003 Similarly, for the number of attributes we need to count the number of attributes in objects 001 and 003 to ensure we don’t hit the limit. Exception handling Batch operations in Cloud Directory can sometimes fail. In these cases, it is important to know how to handle such failures. The method you use to resolve failures differs. Related Cloud Directory Blog Articles
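As a hedged sketch of the failure handling described above (the variable names reuse the snippets on this page, and it assumes the AWS SDK for Java BatchWriteException exposes the failed operation's index and type as getters):

try {
    cloudDirectoryClient.batchWrite(batchWrite);
} catch (BatchWriteException e) {
    // None of the batched operations were applied; the exception reports
    // which operation failed and why, so the batch can be corrected and retried.
    System.err.println("Failed operation index: " + e.getIndex());
    System.err.println("Failure type: " + e.getType());
    System.err.println("Message: " + e.getMessage());
}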
https://docs.aws.amazon.com/clouddirectory/latest/developerguide/transaction_support.html
2018-08-14T14:56:47
CC-MAIN-2018-34
1534221209040.29
[array(['images/limits.png', None], dtype=object)]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Constructs AmazonKinesisAnalyticsClient with AWS Access Key ID, AWS Secret Key and an AmazonKinesisAnalyticsClient Configuration object. Namespace: Amazon.KinesisAnalytics Assembly: AWSSDK.KinesisAnalytics.dll Version: 3.x.y.z Parameters: the AWS Access Key ID, the AWS Secret Access Key, and the AmazonKinesisAnalyticsClient Configuration object.
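A minimal construction sketch in C# (the region, the literal key placeholders, and the RegionEndpoint property are assumptions for illustration; real code should prefer credential profiles or providers over hard-coded keys):

using Amazon;
using Amazon.KinesisAnalytics;

var config = new AmazonKinesisAnalyticsConfig
{
    RegionEndpoint = RegionEndpoint.USEast1  // assumed region, adjust as needed
};

var client = new AmazonKinesisAnalyticsClient(
    "YOUR_ACCESS_KEY_ID",      // AWS Access Key ID
    "YOUR_SECRET_ACCESS_KEY",  // AWS Secret Access Key
    config);                   // AmazonKinesisAnalyticsClient Configuration object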
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/KinesisAnalytics/MKinesisAnalyticsctorStringStringKinesisAnalyticsConfig.html
2018-08-14T14:52:15
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Use the Cloud Operations Dashboard The Cloud Operations Dashboard breaks down cloud service requests from your end users and cloud stacks that you offer in the Cloud User Portal. Before you beginRole required: sn_cmp.cloud_operator or sn_cmp.cloud_admin Procedure Navigate to Cloud Management > Cloud Admin Portal > Operate > Cloud Operations Dashboard. The Cloud Operation Dashboard appears, showing you Cloud Service Requests and Stacks. Requests are also broken down by requester in the bar chart below. Stacks are broken down by datacenters. Figure 1. An example Cloud Operations Dashboard Do any of the following to obtain the tag data you want on the report: GoalDo this See data grouped by another attribute Select a value in the Group by choice list for either chart. See updated data Point to the top of any of the charts until the refresh icon () appears, and then click the icon. Save an image of a chart Point to any of the charts until the options icon () appears, and then select Save as PNG or Save as JPEG. Related TasksUse Cloud Root Cause Analysis reportsRelated ConceptsThe Cloud API TrailThe Cloud Orchestration TrailRelated ReferenceSchemas of Cloud Management tables
https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/cloud-management-v2/task/use-cloud-operations-dashboard.html
2018-08-14T13:54:16
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Testimonials¶¶¶ Hypothesis has been brilliant for expanding the coverage of our test cases, and also for making them much easier to read and understand, so we’re sure we’re testing the things we want in the way we want. Seth Morton¶! Sixty North¶. mulkieran¶ Just found out about this excellent QuickCheck for Python implementation and ran up a few tests for my bytesize package last night. Refuted a few hypotheses in the process. Looking forward to using it with a bunch of other projects as well. Adam Johnson¶). Josh Bronson¶ Adopting Hypothesis improved bidict’s test coverage and significantly increased our ability to make changes to the code with confidence that correct behavior would be preserved. Thank you, David, for the great testing tool. Cory Benfield¶. Jon Moore¶. Russel Winder¶¶. Cody Kochmann¶ Hypothesis is being used as the engine for random object generation with my open source function fuzzer battle_tested which maps all behaviors of a function allowing you to minimize the chance of unexpected crashes when running code in production. With how efficient Hypothesis is at generating the edge cases that cause unexpected behavior occur, battle_tested is able to map out the entire behavior of most functions in less than a few seconds. Hypothesis truly is a masterpiece. I can’t thank you enough for building it. Merchise Autrement¶ Just minutes after our first use of hypothesis we uncovered a subtle bug in one of our most used library. Since then, we have increasingly used hypothesis to improve the quality of our testing in libraries and applications as well. Florian Kromer¶ At Roboception GmbH I use Hypothesis to implement fully automated stateless and stateful reliability tests for the 3D sensor rc_visard and robotic software components . Thank you very much for creating the (probably) most powerful property-based testing framework. Reposit Power¶ With a micro-service architecture, testing between services is made easy using Hypothesis in integration testing. Ensuring everything is running smoothly is vital to help maintain a secure network of Virtual Power Plants. It allows us to find potential bugs and edge cases with relative ease and minimal overhead. As our architecture relies on services communicating effectively, Hypothesis allows us to strictly test for the kind of data which moves around our services, particularly our backend Python applications. Your name goes here¶.
https://hypothesis.readthedocs.io/en/latest/endorsements.html
2018-08-14T13:27:32
CC-MAIN-2018-34
1534221209040.29
[]
hypothesis.readthedocs.io
Multiple User Environments When more than one Composer user attempts to log into the same Workspace, the following message appears: Workspace in use or cannot be created, choose a different one. Whenever Composer uses a Workspace, it locks the Workspace so other Composer instances cannot access it. A Workspace is meant to be a "private" development area, until the developer decides to share it with the team. It is not possible to share a single Workspace among multiple users, so you need to set up (private) workspaces for each developer. To merge the work of different developers together, use source control, which could be SVN (Subversion), Git, or something else. This is the best way to manage a Composer Project with multiple users working simultaneously on it, and it prevents the developers from interfering with each other's work. You can consider the Subversion plugin in Composer as a connector to source control such as SVN. To install the Subversion plugin, see Software Updates Functionality (Plugins). Continuing with this example, once the Subversion plugin is installed, the Project can be shared using source control. When you right-click a Project, you will find all the relevant options under the Team menu. The first time, a Project needs to be shared with source control; after that, additional options appear on the Team menu.
https://docs.genesys.com/Documentation/Composer/latest/Help/MultipleUserEnvironments
2018-08-14T13:34:03
CC-MAIN-2018-34
1534221209040.29
[]
docs.genesys.com
When configuring the LANSA Web system, the following options are available with Configure System, Data/Application Server: or with Configure System, Web Server: The Reset button will retrieve default values from the host system. This option is enabled if the host system to which you are connected supports retrieving default values. Note that you need to select OK to save the settings after you have retrieved them. Also note that Job Queues and Libraries are NOT replaced with default values. IBM i Servers: If you are using a single-tier IBM i installation, you will need to specify the options you wish to use for the Data/Application Server and the Web Server System (IBM i only). If you are using a multi-tier IBM i installation with an IBM i CGI-based Web Server, you will need to specify the options you wish to use for the Web Server System (IBM i only). You will need to connect to the relevant IBM i to set up each of the systems. The Web Server setting can also be changed using the W3@P2901 program. Other Servers: If you are using a Windows Multi-Tier installation, then you will only need to specify the options you wish to use for the Windows Data/Application Server. The Web Server settings for the parameters used by the LANSA Web Server Extension are configured using the 1.2.2 LANSA Web Server Extension section.
https://docs.lansa.com/14/en/lansa085/content/lansa/jmp_0280.htm
2018-08-14T14:08:37
CC-MAIN-2018-34
1534221209040.29
[]
docs.lansa.com
Family Member Has Status Flag¶ This Condition located on the Family category tab in Search Builder will allow you to create a search for people in your database based on one set of criteria, and then add this Condition to find anyone in their family that has one or more Status Flags that you select from the drop down list. You can also find family members without specific Status Flags as well. Use Case Create a search for all Primary Adult church members that are enrolled in a Life Group, but who have a family member that does not have the Status Flag of Active Attender. Remember, each church will have different Status Flags. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-FamHasStatusFlag.html
2018-08-14T14:08:44
CC-MAIN-2018-34
1534221209040.29
[]
docs.touchpointsoftware.com
Word Highlighting PHP editor emphasizes occurrences of a symbol which has the keyboard cursor in. This feature performs full contextual match, it means not only words with the same content are highlighted, but the highlighted word must refer to the exact same symbol in actual context. Using Word Highlighting By navigating the keyboard cursor over a symbol, waiting for about 500ms, the symbol occurrences are highlighted. This feature is used to quickly check occurrences of a symbol under the keyboard cursor within the editor window. Usually it is used to visually check where is e.g. a local variable used. Note: The feature supports the same matches as Find All References.
https://docs.devsense.com/en/editor/word-highlighting
2018-08-14T13:55:53
CC-MAIN-2018-34
1534221209040.29
[array(['https://docs.devsense.com/content_docs/editor/imgs/word-highlight.png', 'Word Highlighting'], dtype=object) ]
docs.devsense.com
Performance Analytics release notes ServiceNow® Performance Analytics product enhancements and updates in the Kingston release. Activation informationPlatform feature is active by default. Performance Analytics requires a separate subscription. New in the Kingston release Widget on-click behavior Configure widgets to redirect users to a specific URL when clicked on. External data sources for Performance Analytics Collect scores and breakdowns from external JDBC data sources. Text analytics Reveal any patterns that exist in user-entered text fields. Automated scores migration Use an automated process to migrate existing scores to an improved table structure. Scheduled export of an indicator to PDF Schedule an indicator to automate its distribution. Admin Console Set up and manage Performance Analytics and Reporting in a single page. Find, create, and modify dashboards, reports, indicators, and other configuration records. Troubleshoot errors, access usage information, modify advanced configurations, and more. Analytics Diagnostics Identify and diagnose configuration issues using predefined scripts that examine the database for invalid records and provide suggestions to resolve issues. Changed in this release Naming: Performance Analytics Premium is called Performance Analytics. Performance Analytics for Incident Management is called Complimentary Performance Analytics for Incident Management. Content Packs are called Out-of-the-box Performance Analytics Solutions. License activation: System administrators can now activate the plugin for the licensed version of Performance Analytics. The specific plugin that you activate depends on the product area you have purchased a subscription for. For more information, see Performance Analytics: Getting Started in the Performance Analytics Community pages. Targets and thresholds: Displayed when you export a scorecard to PDF. You can set a default global threshold color: with the property com.snc.pa.default_chart_threshold_color. Additional properties: appear on the Performance Analytics properties page. You can search in choice lists: on the indicator and breakdown wizards. The scorecard REST API: returns the indicator Unit. Spotlight: Existing Spotlight records are now deleted from the Spotlight table when the Spotlight evaluation job runs. If you have upgraded ServiceNow, the records created under the earlier release are deleted when the Spotlight evaluation job first runs. Data collection periods can be configured by an administrator. Collections are no longer limited to daily frequency. Related ConceptsUpgrade your instanceRelated ReferenceUpgrade planning checklist (Kingston)
https://docs.servicenow.com/bundle/kingston-release-notes/page/release-notes/performance-analytics-reporting/performance-analytics-rn.html
2018-08-14T13:53:03
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Lanes, markers, and panels Lanes, markers, and panels are the fundamental elements of a timeline visualization. appear in the 3D view only. Note: Lanes and markers are available in the 3D view only. A panel in the 2D view always represents a single record, while panels in 3D view may represent one or more records. Lanes A lane is a channel in which activities are grouped. A visualization can display up to eight lanes at a time. While viewing a visualization, you can use the Settings pane to show or hide individual lanes.Note: The number of items displayed in a lane depends on the Max items per lane and Max items per lane 2d settings on Timeline Visualization form. Markers Markers are horizontal lines that cross all lanes and identify a transition to the next month. Panels Panels in both 2D and 3D views are color coded according to values that the administrator selects during the initial setup. In 2D view, panels are grouped by month and stacked in chronological order, from the earliest date to the latest date. By default, the 2D view opens with the current month displayed on the left side of the visualization. You can print visualizations from the 2D view using the browser's print option. In 3D view, panels are grouped in lanes and ordered by date, from earliest to latest. The date that appears on the panel determines its placement in 2D and 3D view. The date displayed is based on a value the timeline administrator selects during initial setup. Panels appear in the CIO Roadmap according to the planned completion date for the project. In 3D view, projects with the same planned date of completion are consolidated into a single panel. In 2D view, projects with the same planned date of completion are displayed as individual panels. Panel headers in the CIO Roadmap are color coded based on project state. However, in 3D view, if a panel represents more than one project, the panel header is colored black. The Settings pane contains a key showing each available project state and the corresponding color. To view additional information about a panel: Click a panel for a single record while in 2D or 3D view to open a summary window that contains additional information. Click the heading in the summary window to open the full record. Click a panel that represents multiple records to open a list of those records. Click a record number to open the full record. The timeline administrator can configure the information that appears in summary windows. Related TasksView timeline visualizationRelated ConceptsUse the slider and slider trackRelated ReferencePersonalize Timeline VisualizationsWork with timeline visualizations
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/use/timeline-visualization/concept/c_Lanes.html
2018-08-14T13:53:01
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
A number of standard filter configurations are created and defined by default within the static properties file for the Tungsten Replicator configuration. Filters can be enabled through tpm to update the filter configuration --repl-svc-extractor-filters Apply the filter during the extraction stage, i.e. when the information is extracted from the binary log and written to the internal queue ( binlog-to-q). Apply the filter between the internal queue and when the transactions are written to the THL. ( q-to-thl). --repl-svc-applier-filters Apply the filter between reading from the internal queue and applying to the destination database ( q-to-dbms). Properties and options for an individual filter can be specified by setting the corresponding property value on the tpm command-line. For example, to ignore a database schema on a slave, the replicate filter can be enabled, and the replicator.filter.replicate.ignore specifies the name of the schemas to be ignored. To ignore the schema contacts: shell> ./tools/tpm update alpha --hosts=host1,host2,host3 \ --repl-svc-applier-filters=replicate \ --property=replicator.filter.replicate.ignore=contacts A bad filter configuration will not stop the replicator from starting, but the replicator will be placed into the OFFLINE state. To disable a previously enabled filter, empty the filter specification and (optionally) unset the corresponding property or properties. For example: shell> ./tools/tpm update alpha --hosts=host1,host2,host3 \ --repl-svc-applier-filters= \ --remove-property=replicator.filter.replicate.ignore Multiple filters can be applied on any stage, and the filters will be processes and called within the order defined within the configuration. For example, the following configuration: shell> ./tools/tpm update alpha --hosts=host1,host2,host3 \ --repl-svc-applier-filters=enumtostring,settostring,pkey \ --remove-property=replicator.filter.replicate.ignore The filters are called in order: The order and sequence can be important if operations are being performed on the data and they are relied on later in the stage. For example, if data is being filtered by a value that exists in a SET column within the source data, the settostring filter must be defined before the data is filtered, otherwise the actual string value will not be identified. In some cases, the filter order and sequence can also introduce errors. For example, when using the pkey filter and the optimizeupdates filters together, pkey may remove KEY information from the THL before optimizeupdates attempts to optimize the ROW event, causing the filter to raise a failure condition. The currently active filters can be determined by using the trepctl status -name stages command: shell> trepctl status -name stagesProcessing status command (stages)... ... NAME VALUE ---- ----- applier.class : com.continuent.tungsten.replicator.applier.MySQLDrizzleApplier applier.name : dbms blockCommitRowCount: 10 committedMinSeqno : 3600 extractor.class : com.continuent.tungsten.replicator.thl.THLParallelQueueExtractor extractor.name : parallel-q-extractor filter.0.class : com.continuent.tungsten.replicator.filter.MySQLSessionSupportFilter filter.0.name : mysqlsessions filter.1.class : com.continuent.tungsten.replicator.filter.PrimaryKeyFilter filter.1.name : pkey filter.2.class : com.continuent.tungsten.replicator.filter.BidiRemoteSlaveFilter filter.2.name : bidiSlave name : q-to-dbms processedMinSeqno : -1 taskCount : 5 Finished status command (stages)... 
The above output is from a standard slave replication installation showing the default filters enabled. The filter order can be determined by the number against each filter definition.
http://docs.continuent.com/tungsten-replicator-6.0/filters-disenabling.html
2018-08-14T13:42:04
CC-MAIN-2018-34
1534221209040.29
[]
docs.continuent.com
Welcome to JCMsuite’s Matlab® guide! [1]¶ This document shows how to use JCMsuite together with the high-level scripting language Matlab. This makes it possible to set up complicated problems in a scripting-like manner and allows for a simple and flexible way to define parameter-dependent problems and to run parameter scans. The major concept to combine JCMsuite with Matlab is called Embedded Scripting. In a nutshell, the structure of the solver input is still based on .jcm files. As an extension, these input files are allowed to contain placeholders (keys) for keyword substitution. Additionally, it is allowed to embed Matlab code snippets into the input files for looping over .jcm blocks, or to dynamically update the key-value specification. A step-by-step tutorial is given here. Table Of Contents
https://docs.jcmwave.com/JCMsuite/html/MatlabInterface/index.html?version=3.12.9
2018-08-14T14:13:31
CC-MAIN-2018-34
1534221209040.29
[]
docs.jcmwave.com
Use Upgrade Advisor to Prepare for Upgrades SQL Server Upgrade Advisor helps you prepare for upgrades to SQL Server 2014. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Output is in the form of an XML report. View the XML report with the Upgrade Advisor report viewer. To launch Upgrade Advisor, on the Start menu point to Microsoft SQL Server 2014, and then click SQL Server 2014 Upgrade Advisor. For more information, see the Upgrade Advisor documentation included in the Upgrade Advisor download and the SQL Server 2014 Release Notes. See Also Work with Multiple Versions and Instances of SQL Server Supported Version and Edition Upgrades Backward Compatibility
https://docs.microsoft.com/en-us/sql/sql-server/install/use-upgrade-advisor-to-prepare-for-upgrades?view=sql-server-2014
2018-08-14T14:01:38
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
In addition to the default admission controllers, you can use admission webhooks as part of the admission chain. Admission webhooks call webhook servers to either mutate pods upon creation, such as to inject labels, or to validate specific aspects of the pod configuration during the admission process. Admission webhooks intercept requests to the master API prior to the persistence of a resource, but after the request is authenticated and authorized. In OpenShift Container Platform you can use admission webhook objects that call webhook servers during the API admission chain. There are two types of admission webhook objects you can configure: Mutating admission webhooks allow for the use of mutating webhooks to modify resource content before it is persisted. Validating admission webhooks allow for the use of validating webhooks to enforce custom admission policies. Configuring the webhooks and external webhook servers is beyond the scope of this document. However, the webhooks must adhere to an interface in order to work properly with OpenShift Container Platform. When an object is instantiated, OpenShift Container Platform makes an API call to admit the object. During the admission process, a mutating admission controller can invoke webhooks to perform tasks, such as injecting affinity labels. At the end of the admissions process, a validating admission controller can invoke webhooks to make sure the object is configured properly, such as verifying affinity labels. If the validation passes, OpenShift Container Platform schedules the object as configured. When the API request comes in, the mutating or validating admission controller uses the list of external webhooks in the configuration and calls them in parallel: If all of the webhooks approve the request, the admission chain continues. If any of the webhooks deny the request, the admission request is denied, and the reason for doing so is based on the first webhook denial reason. If more than one webhook denies the admission request, only the first will be returned to the user. If there is an error encountered when calling a webhook, that request is ignored and is be used to approve/deny the admission request. The communication between the admission controller and the webhook server needs to be secured using TLS. Generate a CA certificate and use the certificate to sign the server certificate used by your webhook server. The PEM-formatted CA certificate is supplied to the admission controller using a mechanism, such as Service Serving Certificate Secrets. The following diagram illustrates this process with two admission webhooks that call multiple webhooks. A simple example use case for admission webhooks is syntactical validation of resources. For example, you have an infrastructure that requires all pods to have a common set of labels, and you do not want any pod to be persisted if the pod does not have those labels. You could write a webhook to inject these labels and another webhook to verify that the labels are present. The OpenShift Container Platform will then schedule pod that have the labels and pass validation and reject pods that do not pass due to missing labels. Some common use-cases include: Mutating resources to inject side-car containers into pods. Restricting projects to block some resources from a project. Custom resource validation to perform complex validation on dependent fields. Cluster administrators can include mutating admission webhooks or validating admission webhooks in the admission chain of the API server. 
Mutating admission webhooks are invoked during the mutation phase of the admission process, which allows modification of the resource content before it is persisted. One example of a mutating admission webhook is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification. apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration (1) metadata: name: <controller_name> (2) webhooks: - name: <webhook_name> (3) clientConfig: (4) service: namespace: (5) name: (6) path: <webhook_url> (7) caBundle: <cert> (8) rules: (9) - operations: (10) - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> (11) Validating admission webhooks are invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a validation admission, by ensuring that all nodeSelector fields are constrained by the node selector restrictions on the project. apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration (1) metadata: name: <controller_name> (2) webhooks: - name: <webhook_name> (3) clientConfig: (4) service: namespace: default (5) name: kubernetes (6) path: <webhook_url> (7) caBundle: <cert> (8) rules: (9) - operations: (10) - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> (11) First deploy the external webhook server and ensure it is working properly. Otherwise, depending whether the webhook is configured as fail open or fail closed, operations will be unconditionally accepted or rejected. Configure a mutating or validating admission webhook object in a YAML file. Run the following command to create the object: oc create -f <file-name>.yaml After you create the admission webhook object, OpenShift Container Platform takes a few seconds to honor the new configuration. 
Create a front-end service for the admission webhook: apiVersion: v1 kind: Service metadata: labels: role: webhook (1) name: <name> spec: selector: role: webhook (1) Run the following command to create the object: oc create -f <file-name>.yaml Add the admission webhook name to pods you want controlled by the webhook: apiVersion: v1 kind: Pod metadata: labels: role: webhook (1) name: <name> spec: containers: - name: <name> image: myrepo/myimage:latest imagePullPolicy: <policy> ports: - containerPort: 8000 The following is an example admission webhook that will not allow namespace creation if the namespace is reserved: apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io webhooks: - name: namespacereservations.admission.online.openshift.io clientConfig: service: namespace: default name: webhooks path: /apis/admission.online.openshift.io/v1beta1/namespacereservations caBundle: KUBE_CA_HERE rules: - operations: - CREATE apiGroups: - "" apiVersions: - "b1" resources: - namespaces failurePolicy: Ignore The following is an example pod that will be evaluated by the admission webhook named webhook: apiVersion: v1 kind: Pod metadata: labels: role: webhook name: webhook spec: containers: - name: webhook image: myrepo/myimage:latest imagePullPolicy: IfNotPresent ports: - containerPort: 8000 The following is the front-end service for the webhook: apiVersion: v1 kind: Service metadata: labels: role: webhook name: webhook spec: ports: - port: 443 targetPort: 8000 selector: role: webhook
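Once the configurations and the front-end service are created, a quick way to confirm that the cluster has registered them might look like the following (a hedged sketch: it assumes the admissionregistration.k8s.io/v1beta1 resource kinds shown above are queryable with oc on this cluster):

# List the registered webhook configurations.
oc get mutatingwebhookconfigurations
oc get validatingwebhookconfigurations

# Inspect a single configuration, e.g. the validating example above.
oc describe validatingwebhookconfiguration namespacereservations.admission.online.openshift.io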
https://docs.openshift.com/container-platform/3.9/architecture/additional_concepts/dynamic_admission_controllers.html
2018-08-14T13:16:08
CC-MAIN-2018-34
1534221209040.29
[]
docs.openshift.com
help topic. Navigate to the Template Screen - In vManage NMS, select the Configuration ► Templates screen. - In the Device tab, click Create Template. - From the Create Template drop-down, select From Feature Template. - From the Device Model drop-down, select the type of device for which you are creating the template. - parameters. NTP Server Configuration The following parameters are required (unless otherwise indicated) to configure an NTP. Select the Server tab and click the plus sign (+): To add another NTP server, click the plus sign (+). You can configure up to four NTP servers. The Viptela software uses the server at the highest stratum level. To delete an NTP server, click the trash icon to the right of the entry. CLI equivalent: system ntp server (dns-server-address | ip-address) key key-id prefer source-interface interface-name version number vpn vpn-id Configure NTP Authentication To configure authentication keys used to authenticate the NTP servers, select the Authentication tab and click the plus sign (+): CLI equivalent: system ntp keys authentication key-id md5 md5-key trusted key-id Release Information Introduced in vManage NMS in Release 15.2.
https://sdwan-docs.cisco.com/Product_Documentation/vManage_Help/Release_17.1/Configuration/Templates/NTP
2018-08-14T13:36:12
CC-MAIN-2018-34
1534221209040.29
[]
sdwan-docs.cisco.com
2009.2 Search - Searching version 2009.2 only: title:2009.2 <terms_or_phrase_here> Enter terms (i.e. title:2009.2 "webworks api"): For more help on finding pages see FindPage Post Release Changes Pages that have been modified since the 2009.2 release. Discussions Discussion pages for the 2009.2 release. - ePublisher/2009.2/Help/01.Welcome_to_ePublisher/3.4.Preface (DiscussionButtonEnabled.PNG) - ePublisher/2009.2/Help/01.Welcome_to_ePublisher/3.4.Preface/Discussion - ePublisher/2009.2/Help/02.Designing_Templates_and_Stationery/3.088.Designing_Stationery/Discussion - ePublisher/2009.2/Help/02.Designing_Templates_and_Stationery/3.122.Designing_Stationery/Discussion - ePublisher/2009.2/Help/02.Designing_Templates_and_Stationery/4.05.Customizing_Stationery/Discussion - ePublisher/2009.2/Help/02.Designing_Templates_and_Stationery/4.06.Customizing_Stationery/Discussion - ePublisher/2009.2/Help/04.Reference_Information/2.006.ePublisher_Window_Descriptions/Discussion - ePublisher/2009.2/Help/04.Reference_Information/2.031.ePublisher_Window_Descriptions/Discussion - ePublisher/2009.2/Help/04.Reference_Information/2.037.ePublisher_Window_Descriptions/Discussion - ePublisher/2009.2/Help/04.Reference_Information/2.038.ePublisher_Window_Descriptions/Discussion Modifying or Contributing Pages If you are a contributer to the ePublisher/2009.2 documentation, please make sure the following wiki markup appears at the bottom of each new or modified wiki page. ---- . CategoryPostReleaseChange Unresolved Post Release Changes If a new, modified page has not been addressed in the next or future release, please make sure the following wiki markup appears at the bottom of that page to ensure that this information can eventually be resolved. ---- . CategoryPostReleaseChange CategoryUnresolved Unresolved Discussion Pages If a discussion page has not been addressed in the next or future release, please make sure the following wiki markup appears at the bottom of that page to ensure that this discussion can eventually be resolved. ---- . CategoryUnresolved Other Information Resources In addition to this Wiki, WebWorks provides the following additional information resources: WebWorks Technical Assistance:
http://docs.webworks.com/ePublisher/2009.2
2018-08-14T13:27:23
CC-MAIN-2018-34
1534221209040.29
[]
docs.webworks.com
Chat routing Your Business Edition Premise system comes pre-configured with the tools and data needed for chat routing. The Business Edition Premise chat routing application delivers chat interactions to the AG_Voice_Sample Agent Group. Important: You cannot install chat as a standalone application. Chat can only be activated if the email routing application is active. Acknowledgements The Chat application sends acknowledgements informing customers.
https://docs.genesys.com/Documentation/G1/latest/admin/chat
2018-08-14T13:36:28
CC-MAIN-2018-34
1534221209040.29
[]
docs.genesys.com
Campaign Callbacks Summary Report This page describes how you can use the Campaign Callbacks Summary Report to view a breakdown of campaign callbacks, including information about how many were scheduled, completed, or missed. Tip: How do I generate a report? Understanding the Campaign Callbacks Summary The Main tab of this report summarizes the total number of callbacks processed by the contact center, breaking them down into the total number scheduled, missed, and completed for each day of the reporting period and distinguishing personal callbacks from nonpersonal ones. The report design internally filters the dataset to return Outbound voice-only interactions. You can specify the Dates, Campaigns, and Tenants on which to report. The following tables explain the prompts you can select when you generate the report, and the measures that are represented in the report:
https://docs.genesys.com/Documentation/PSAAS/latest/RPRT/HRCmpgnClbkReport
2018-08-14T13:36:30
CC-MAIN-2018-34
1534221209040.29
[]
docs.genesys.com
Creating Threads The CreateThread function creates a new thread for a process. The creating thread must specify the starting address of the code that the new thread is to execute. Typically, the starting address is the name of a function defined in the program code (for more information, see ThreadProc). This function takes a single parameter and returns a DWORD value. A process can have multiple threads simultaneously executing the same function. The following is a simple example that demonstrates how to create a new thread that executes the locally defined function, MyThreadFunction. The calling thread uses the WaitForMultipleObjects function to persist until all worker threads have terminated. The calling thread blocks while it is waiting; to continue processing, a calling thread would use WaitForSingleObject and wait for each worker thread to signal its wait object. Note that if you were to close the handle to a worker thread before it terminated, this does not terminate the worker thread. However, the handle will be unavailable for use in subsequent function calls. #include <windows.h> #include <tchar.h> #include <strsafe.h> #define MAX_THREADS 3 #define BUF_SIZE 255 DWORD WINAPI MyThreadFunction( LPVOID lpParam ); void ErrorHandler(LPTSTR lpszFunction); // Sample custom data structure for threads to use. // This is passed by void pointer so it can be any data type // that can be passed using a single void pointer (LPVOID). typedef struct MyData { int val1; int val2; } MYDATA, *PMYDATA; int _tmain() { PMYDATA pDataArray[MAX_THREADS]; DWORD dwThreadIdArray[MAX_THREADS]; HANDLE hThreadArray[MAX_THREADS]; // Create MAX_THREADS worker threads. for( int i=0; i<MAX_THREADS; i++ ) { // Allocate memory for thread data. pDataArray[i] = (PMYDATA) HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(MYDATA)); if( pDataArray[i] == NULL ) { // If the array allocation fails, the system is out of memory // so there is no point in trying to print an error message. // Just terminate execution. ExitProcess(2); } // Generate unique data for each thread to work with. pDataArray[i]->val1 = i; pDataArray[i]->val2 = i+100; // Create the thread to begin execution on its own. hThreadArray[i] = CreateThread( NULL, // default security attributes 0, // use default stack size MyThreadFunction, // thread function name pDataArray[i], // argument to thread function 0, // use default creation flags &dwThreadIdArray[i]); // returns the thread identifier // Check the return value for success. // If CreateThread fails, terminate execution. // This will automatically clean up threads and memory. if (hThreadArray[i] == NULL) { ErrorHandler(TEXT("CreateThread")); ExitProcess(3); } } // End of main thread creation loop. // Wait until all threads have terminated. WaitForMultipleObjects(MAX_THREADS, hThreadArray, TRUE, INFINITE); // Close all thread handles and free memory allocations. for(int i=0; i<MAX_THREADS; i++) { CloseHandle(hThreadArray[i]); if(pDataArray[i] != NULL) { HeapFree(GetProcessHeap(), 0, pDataArray[i]); pDataArray[i] = NULL; // Ensure address is not reused. } } return 0; } DWORD WINAPI MyThreadFunction( LPVOID lpParam ) { HANDLE hStdout; PMYDATA pDataArray; TCHAR msgBuf[BUF_SIZE]; size_t cchStringSize; DWORD dwChars; // Make sure there is a console to receive output results. hStdout = GetStdHandle(STD_OUTPUT_HANDLE); if( hStdout == INVALID_HANDLE_VALUE ) return 1; // Cast the parameter to the correct data type. 
// The pointer is known to be valid because
// it was checked for NULL before the thread was created.
pDataArray = (PMYDATA)lpParam;
// Print the parameter values using thread-safe functions.
StringCchPrintf(msgBuf, BUF_SIZE, TEXT("Parameters = %d, %d\n"), pDataArray->val1, pDataArray->val2);
StringCchLength(msgBuf, BUF_SIZE, &cchStringSize);
WriteConsole(hStdout, msgBuf, (DWORD)cchStringSize, &dwChars, NULL);
return 0;
}
void ErrorHandler(LPTSTR lpszFunction)
{
    // Retrieve the system error message for the last-error code.
    LPVOID lpMsgBuf;
    LPVOID lpDisplayBuf;
    DWORD dw = GetLastError();
    FormatMessage(
        FORMAT_MESSAGE_ALLOCATE_BUFFER |
        FORMAT_MESSAGE_FROM_SYSTEM |
        FORMAT_MESSAGE_IGNORE_INSERTS,
        NULL,
        dw,
        MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
        (LPTSTR) &lpMsgBuf,
        0, NULL );
    // Display the error message to the user.
    lpDisplayBuf = (LPVOID)LocalAlloc(LMEM_ZEROINIT,
        (lstrlen((LPCTSTR)lpMsgBuf) + lstrlen((LPCTSTR)lpszFunction) + 40) * sizeof(TCHAR));
    StringCchPrintf((LPTSTR)lpDisplayBuf,
        LocalSize(lpDisplayBuf) / sizeof(TCHAR),
        TEXT("%s failed with error %d: %s"),
        lpszFunction, dw, lpMsgBuf);
    MessageBox(NULL, (LPCTSTR)lpDisplayBuf, TEXT("Error"), MB_OK);
    // Free error-handling buffer allocations.
    LocalFree(lpMsgBuf);
    LocalFree(lpDisplayBuf);
}
The MyThreadFunction function avoids the use of the C run-time library (CRT), as many of its functions are not thread-safe, particularly if you are not using the multithreaded CRT. If you would like to use the CRT in a ThreadProc function, use the _beginthreadex function instead. For more information about synchronization, see Synchronizing Execution of Multiple Threads. The creating thread can use the arguments to CreateThread to specify the following:
- The initial stack size of the new thread. The thread's stack is allocated automatically in the memory space of the process; the system increases the stack as needed and frees it when the thread terminates. For more information, see Thread Stack Size.
Related topics
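Because the text above recommends _beginthreadex when CRT functions are called from the thread, here is a minimal hedged sketch of that variant (the thread function and its argument are placeholders, and error handling is reduced to a single check):

#include <process.h>

unsigned __stdcall MyCrtThreadFunction(void* pArg)
{
    // CRT functions such as printf are safe to call here.
    return 0;
}

// ... inside the creating thread ...
unsigned threadId;
uintptr_t hThread = _beginthreadex(
    NULL,                 // default security attributes
    0,                    // default stack size
    MyCrtThreadFunction,  // thread function (unsigned __stdcall)
    NULL,                 // argument passed to the thread function
    0,                    // run immediately
    &threadId);           // receives the thread identifier
if (hThread != 0)
{
    WaitForSingleObject((HANDLE)hThread, INFINITE);
    CloseHandle((HANDLE)hThread);  // handle from _beginthreadex must be closed explicitly
}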
https://docs.microsoft.com/en-us/windows/desktop/ProcThread/creating-threads
2018-08-14T13:54:00
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Step 5: Add Access Information to the Stack Configuration and Deployment Attributes The appsetup.rb recipe depends on data from the AWS OpsWorks Stacks stack configuration and deployment attributes, which are installed on each instance and contain detailed information about the stack and any deployed apps. The object's deploy attributes have the following structure, which is displayed for convenience as JSON: { ... "deploy": { " app1": { "application" : " short_name", ... } " app2": { ... } ... } } The deploy node contains an attribute for each deployed app that is named with the app's short name. Each app attribute contains a set of attributes that define the app's configuration, such as the document root and app type. For a list of the deploy attributes, see deploy Attributes. You can represent stack configuration and deployment attribute values in your recipes by using Chef attribute syntax. For example, [:deploy][:app1][:application] represents the app1 app's short name. The custom recipes depend on several stack configuration and deployment attributes that represent database and Amazon S3 access information: The database connection attributes, such as [:deploy][:database][:host], are defined by AWS OpsWorks Stacks when it creates the MySQL layer. The table name attribute, [:photoapp][:dbtable], is defined in the custom cookbook's attributes file, and is set to foto. You must define the bucket name attribute, [:photobucket], by using custom JSON to add the attribute to the stack configuration and deployment attributes. To define the Amazon S3 bucket name attribute On the AWS OpsWorks Stacks Stack page, choose Stack Settings and then Edit. In the Configuration Management section, add access information to the Custom Chef JSON box. It should look something like the following: { "photobucket" : " yourbucketname" } Replace yourbucketnamewith the bucket name that you recorded in Step 1: Create an Amazon S3 Bucket. AWS OpsWorks Stacks merges the custom JSON into the stack configuration and deployment attributes before it installs them on the stack's instances; appsetup.rb can then obtain the bucket name from the [:photobucket] attribute. If you want to change the bucket, you don't need to touch the recipe; you can just override the attribute to provide a new bucket name.
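As a hedged sketch of how appsetup.rb might read these values (only the attribute paths come from this page; the additional database keys and the iteration pattern are assumptions based on typical OpsWorks deploy attributes):

# Bucket name supplied through the custom JSON shown above.
bucket = node[:photobucket]

# Table name defined in the custom cookbook's attributes file.
table = node[:photoapp][:dbtable]

# Database connection attributes defined by AWS OpsWorks Stacks for the MySQL layer.
node[:deploy].each do |app_name, deploy|
  db_host     = deploy[:database][:host]
  db_user     = deploy[:database][:username]   # assumed key
  db_password = deploy[:database][:password]   # assumed key
  # ... write the app's configuration file using these values ...
end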
https://docs.aws.amazon.com/opsworks/latest/userguide/using-s3-json.html
2018-08-14T14:47:20
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Contents Examples of Screening Rules This section provides examples of screening rules. Credit Card Number To find text that includes a typical credit card number, you need to match a sequence of four groups of four digits, each group separated by - (hyphen): Or if you want to allow for the possibility that some people will omit the hyphens, use ? to make the hyphen optional: You could also use the repetition notation to shorten each \d\d\d\d to \d{4}. North American Phone Number North American phone numbers consist of ten digits, grouped into two groups of three and one of four. There are a number of ways for the groups to be separated: The following regular expression matches all of the above: The table "Phone Number Regular Expression" analyzes this regular expression. Telltale Words To screen for interactions from dissatisfied customers, you might try a regular expression like the following: The first part of this expression matches not followed by zero or more words followed by pleased or satisfied; for example, not very pleased, not satisfied, not at all satisfied (but it also matches strings like can not believe how pleased I am). The rest matches the single words "unhappy" and "complain."
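Based only on the descriptions above, illustrative patterns (assumptions for this page, not the exact expressions from the original article) might look like the following:

Credit card number, four groups of four digits separated by hyphens:
\d\d\d\d-\d\d\d\d-\d\d\d\d-\d\d\d\d
With optional hyphens and the {4} repetition notation:
\d{4}-?\d{4}-?\d{4}-?\d{4}

North American phone number, three digits plus three digits plus four digits with assorted separators:
\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}

Telltale words, "not" followed by zero or more words and then "pleased" or "satisfied", plus the single words "unhappy" and "complain":
not(\s+\w+)*\s+(pleased|satisfied)|unhappy|complain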
https://docs.genesys.com/Documentation/PSAAS/latest/Administrator/exmplSRule
2018-08-14T13:37:57
CC-MAIN-2018-34
1534221209040.29
[]
docs.genesys.com
spipedEstimated reading time: 5 minutes Spiped is a utility for creating symmetrically encrypted and authenticated pipes between sockets. GitHub repo: Library reference This content is imported from the official Docker Library docs, and is provided by the original uploader. You can view the Docker Store page for this image at Supported) Supported architectures: (more info) amd64, arm32v5, arm32v6, arm32v7, arm64v8, i386, ppc64le, s390x spiped/ directory. As for any pre-built image usage, it is the image user’s responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.library, sample, spiped
https://docs.docker.com/samples/library/spiped/
2018-08-14T14:08:14
CC-MAIN-2018-34
1534221209040.29
[]
docs.docker.com
BITS Upload Protocol This section describes the network protocol for BITS upload and upload-reply job types. The BITS upload protocol is layered on top of HTTP 1.1 and uses many of the existing HTTP headers and defines new headers. The BITS upload protocol supports a single upload file per session. BITS uses packets to describe client requests and server responses. The BITS-Packet-Type header specifies the type of packet being sent. Each packet contains specific headers. BITS uses the BITS_POST verb to identify BITS upload packets. Response packets always use the Ack packet type which stands for acknowledge. The Ack packet is sent in the context of the previous request; there can be only one outstanding request at one time. For upload-reply jobs, BITS uses this protocol to upload the file but does not use this protocol to send the reply to the client. Instead, the BITS server sends the location of the reply file to the client and the client creates a BITS download job to download the reply file. Use the upload protocol to replace the BITS client or server software with your own implementation. For a list of request packets sent by the BITS client, see BITS Request Packets. For a list of response packets sent by the BITS server, see BITS Response Packets. The client determines how it responds to errors or unexpected packets from the BITS server. If the server receives a packet that it does not expect, the server should send an Ack packet with a 400 (Bad Request) return code.
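As a hedged illustration of the packet shape described above (the URL, host, GUID placeholders, and the BITS-Supported-Protocols and BITS-Session-Id headers are assumptions about the Create-Session exchange, not details taken from this page), a client request and the server's Ack response might look like:

BITS_POST /uploads/report.dat HTTP/1.1
Host: uploadserver.example.com
BITS-Packet-Type: Create-Session
BITS-Supported-Protocols: {placeholder-upload-protocol-guid}
Content-Length: 0

HTTP/1.1 200 OK
BITS-Packet-Type: Ack
BITS-Session-Id: {placeholder-session-guid}
Content-Length: 0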
https://docs.microsoft.com/en-us/windows/desktop/Bits/bits-upload-protocol
2018-08-14T14:02:38
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
If you are using deployment servers or testing with CI servers, you'll need a way to download your private modules to those servers. To do this, you can set up an .npmrc file which will authenticate your server with npm. One of the things that has changed in npm is that we now use auth tokens to authenticate in the CLI. To generate an auth token, you can log in on any machine. You'll end up with a line in your .npmrc file that looks like this: //registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000 The token is not derived from your password, but changing your password will invalidate all tokens. The token will be valid until the password is changed. You can also invalidate a single token by logging out on a machine that is logged in with that token. To make this more secure when pushing it up to the server, you can set this token as an environment variable on the server. For example, in Heroku you would do this: heroku config:set NPM_TOKEN=00000000-0000-0000-0000-000000000000 --app=application_name You will also need to add this to your environment variables on your development machine. In OSX or Linux, you would add this line to your ~/.profile: export NPM_TOKEN="00000000-0000-0000-0000-000000000000" and then refresh your environment variables: source ~/.profile .npmrc Then you can check in the .npmrc file, replacing your token with the environment variable. //registry.npmjs.org/:_authToken=${NPM_TOKEN} © npm, Inc. and Contributors Licensed under the npm License. npm is a trademark of npm, Inc.
http://docs.w3cub.com/npm/private-modules/ci-server-config/
2018-08-14T13:14:46
CC-MAIN-2018-34
1534221209040.29
[]
docs.w3cub.com
DbDependency represents a dependency based on the query result of a SQL statement. If the query result changes, the dependency is considered as changed. The query is specified via the $sql property. For more details and usage information on Cache, see the guide article on caching. The application component ID of the DB connection. public string $db = 'db' The parameters (name => value) to be bound to the SQL statement specified by $sql. public array $params = [] The SQL query whose result is used to determine if the dependency has been changed. Only the first row of the query result will be used. public string $sql = null Generates the data needed to determine if dependency has been changed. This method returns the value of the global state. © 2008–2017 by Yii Software LLC Licensed under the three clause BSD license.
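A minimal usage sketch in PHP (the cache call, the query, and the Post model are illustrative assumptions; only the DbDependency properties come from this page):

use yii\caching\DbDependency;

// Invalidate the cached value whenever the newest post changes.
$dependency = new DbDependency([
    'sql' => 'SELECT MAX(updated_at) FROM post',
]);

$posts = Yii::$app->cache->getOrSet('latest-posts', function () {
    return Post::find()->orderBy(['updated_at' => SORT_DESC])->limit(10)->all();
}, 3600, $dependency);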
http://docs.w3cub.com/yii~2.0/yii-caching-dbdependency/
2018-08-14T13:15:07
CC-MAIN-2018-34
1534221209040.29
[]
docs.w3cub.com
public interface BeanFactoryLocator Defines a contract for the lookup, use, and release of a BeanFactory, or a BeanFactory subclass such as an ApplicationContext. Where this interface is implemented as a singleton class such as SingletonBeanFactoryLocator, the Spring team strongly suggests that it be used sparingly and with caution. By far the vast majority of the code inside an application is best written in a Dependency Injection style, where that code is served out of a BeanFactory/ApplicationContext container, and has its own dependencies supplied by the container when it is created. However, even such a singleton implementation sometimes has its use in the small glue layers of code that is sometimes needed to tie other code together. For example, third party code may try to construct new objects directly, without the ability to force it to get these objects out of a BeanFactory. If the object constructed by the third party code is just a small stub or proxy, which then uses an implementation of this class to get a BeanFactory from which it gets the real object, to which it delegates, then proper Dependency Injection has been achieved. As another example, in a complex J2EE app with multiple layers, with each layer having its own ApplicationContext definition (in a hierarchy), a class like SingletonBeanFactoryLocator may be used to demand load these contexts. See Also: BeanFactory, DefaultLocatorFactory, ApplicationContext
BeanFactoryReference useBeanFactory(String factoryKey) throws BeansException
Use the BeanFactory (or derived interface such as ApplicationContext) specified by the factoryKey parameter. The definition is possibly loaded/created as needed.
Parameters: factoryKey - a resource name specifying which BeanFactory the BeanFactoryLocator must return for usage. The actual meaning of the resource name is specific to the implementation of BeanFactoryLocator.
Returns: a BeanFactory instance, wrapped as a BeanFactoryReference object
Throws: BeansException - if there is an error loading or accessing the BeanFactory
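A hedged usage sketch (the factory key, bean name, and MyService type are placeholders; SingletonBeanFactoryLocator appears here only because the text above names it as a typical implementation):

BeanFactoryLocator locator = SingletonBeanFactoryLocator.getInstance();
BeanFactoryReference ref = locator.useBeanFactory("businessContext"); // placeholder key
try {
    BeanFactory factory = ref.getFactory();
    MyService service = (MyService) factory.getBean("myService"); // placeholder bean
    // ... delegate to the service ...
} finally {
    ref.release(); // release the reference when finished
}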
http://docs.spring.io/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/beans/factory/access/BeanFactoryLocator.html
2014-08-20T09:07:29
CC-MAIN-2014-35
1408500801235.4
[]
docs.spring.io
- Move back, move forward or refresh a webpage - Return to the browser home page - Close the browser - Playing media files, viewing pictures and downloading files - Copying and sendingise: Browser - Troubleshooting: Browser security - About TLS - Browser security options - Manage browser security - Add a trusted content server - Add or change a website that is associated with a certificate Previous topic: Turn on geolocation in the browser Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/37644/CX_Browser_security_32785_11.jsp
2014-08-20T08:51:50
CC-MAIN-2014-35
1408500801235.4
[]
docs.blackberry.com
These instructions show you how to stress test cometd from jetty 6.1.9 running on unix. The same basic steps apply to running on windows or mac and I'd be happy to add detailed instructions if somebody wants to contribute them.

The basic steps are:

The main change needed to the operating system is that it needs to be able to support the number of connections (== file descriptors) needed for the test on both the server machine and the test client machines.

For a linux system, the file descriptor limit is changed in the /etc/security/limit.conf file. Add the following two lines (or change any existing nofile lines):

* hard nofile 40000
* hard nofile 40000

There are many other things that can be tuned in the server stack, and the zeus ZXTM documentation gives a good overview.

Jetty installation is trivial. See Downloading Jetty, Installing Jetty-6.1.x and Running Jetty-6.1.x.

For the purposes of cometd testing, the standard configuration of jetty (etc/jetty.xml) needs to be edited to change the connector configuration. The relevant updated section is:

<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="host"><SystemProperty name="jetty.host" /></Set>
      <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
      <Set name="maxIdleTime">300000</Set>
      <Set name="Acceptors">2</Set>
      <Set name="statsOn">false</Set>
      <Set name="confidentialPort">8443</Set>
      <Set name="lowResourcesConnections">25000</Set>
      <Set name="lowResourcesMaxIdleTime">5000</Set>
    </New>
  </Arg>
</Call>

Jetty comes with cometd installed in $JETTY_HOME/webapps/cometd.war. To run the server with the additional memory needed for the test, use:

java -Xmx2048m -jar start.jar etc/jetty.xml

You should now be able to point a browser at the server at:

http://yourServerIpAddress:8080/

Specifically try out the cometd chat room with your browser to confirm that it is working.

The jetty cometd bayeux test client generates load simulating users in a chat room. To run the client:

cd $JETTY_HOME/contrib/cometd/client
bin/run.sh

The client has a basic text UI that operates in two phases: 1) global configuration, 2) test runs. An example global configuration phase looks like:

# bin/run.sh
2008-04-06 13:43:57.545::INFO: Logging to STDERR via org.mortbay.log.StdErrLog
server[localhost]: 192.126.8.11
port[8080]:
context[/cometd]:
base[/chat/demo]:
rooms [100]: 10
rooms per client [1]:
max Latency [5000]:

The Enter key can be used to accept the default value, or a new value typed and then press Enter. The parameters and their meaning are:

After the global configuration, the test client loops through individual test cycles. Again Enter may be used to accept the default value.

Two iterations of the test cycle are below:

clients [100]: 100
clients = 0010
clients = 0020
clients = 0030
clients = 0040
clients = 0050
clients = 0060
clients = 0070
clients = 0080
clients = 0090
clients = 0100
Clients: 100 subscribed:100
publish [1000]:
publish size [50]:
pause [100]:
batch [10]:
0011111111221111111111111111100000000000000000000000000000000000000000000000000000000000000000000000
Got:10000 of 10000
Got 10000 at 901/s, latency min/ave/max = 2/41/922ms
--
clients [100]:
Clients: 100 subscribed:100
publish [1000]:
publish size [50]:
pause [100]:
batch [10]:
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Got:10000 of 10000
Got 10000 at 972/s, latency min/ave/max = 3/26/172ms
--

The parameters that may be set are:

While the test is executing, a series of digits is output to show progress. The digits represent the current average latency in units of 100ms. So a 0 represents <100ms latency from the time the message was published by the client to when it has been received on the client, 1 represents a latency >=100ms and <200ms, etc.

At the end of the test cycle the summary is printed showing the total messages received, the message rate and the min/ave/max latency.

Before producing numbers for interpretation, it is important to run a number of trials and to allow the system to "warm up". During the initial runs, the java JIT compiler will optimize the code and object pools will be populated with reusable objects. Thus the first runs at a given number.

It is also important to use long runs for producing results, so that:

Typically it is best to start with short low volume test cycles and to gradually reduce the pause or increase the batch to determine approximate maximum message rates. Then the test duration can be extended by increasing the number of messages published or the number of clients (which also increases the message rate as there will be more users per room).

A normal run should report no exceptions or timeouts. For a single server and single test client with 1 room per simulated client, the expected number of messages should always be received. If the server is running clustered, then as this demo has no cluster support, the messages will be reduced by a factor equal to the number of servers. Similarly if multiple clients are used, each test client will see messages published from the other test client, so the number of messages received will be in excess.

The cookie used for affinity will need to be set by the balancer (the test client will handle set cookies).

If you are testing a load balancer, then you should start with a cluster of 1, so that you can verify that no messages are being lost. Then increase the cluster size and be content that you will not have exact message counts and must adjust by the number of nodes.
http://docs.codehaus.org/exportword?pageId=77333175
2014-08-20T09:04:41
CC-MAIN-2014-35
1408500801235.4
[]
docs.codehaus.org
Continuous integration

There are currently two builds configured: "Continuous Builder", which runs on every check-in, and "Full Build", which runs nightly. They are currently identical, but eventually the nightly build will also run functional tests.

Code Coverage

Code coverage data is generated with each run of both builders. It can be viewed by clicking the "Code Coverage" tab of the project, or by going into the artifacts tab and downloading the source and coverage zips, which can then be used with NCoverExplorer to explore code coverage down to the level of individual lines of code.
http://docs.orchardproject.net/Documentation/Continuous-integration
2014-08-20T08:47:32
CC-MAIN-2014-35
1408500801235.4
[]
docs.orchardproject.net
Welcome to Aspose.Drawing for .NET. Aspose.Drawing is a .NET graphics API that provides 2D drawing capabilities identical to GDI+ in your .NET applications. The drawing engine supports rendering vector graphics (such as lines, curves, and figures) and text (in a variety of fonts, sizes, and styles) onto raster images in all commonly used graphics file formats. The project is based on managed .NET Core code and does not have dependencies on native code and libraries, with the rendering algorithms working the same way on all supported platforms.
https://docs.aspose.com/display/drawingproductfamily/Home
2020-08-03T15:15:45
CC-MAIN-2020-34
1596439735812.88
[]
docs.aspose.com
When you analyse a website using Sitemap Creator, a link map of the scanned site is built and can be saved in the project. Many of the advanced features of Sitemap Creator require a link map to be present. If you disable the saving of link map data, some functionality may be impaired.

To toggle the saving of the link map in a project file:
- From the Project Properties dialog, select the Links category
- Check or uncheck the Save link information in project field
- Optionally, check or uncheck the Include headers option to save HTTP request and response headers

Saving of headers may cause project files to be much larger, and performance of open/save operations may be affected.

To toggle the clearing of link maps before analysing a web site:
- From the Project Properties dialog, click the Links category
- Check or uncheck the Clear link information before scan field

See Also: Customizing Projects - Specifying the web site - Configuring site maps - Specifying default documents - Replacing the host name - Crawling additional root URL's - Canonical URL's - Fixing sites using mixed prefixes - Advanced Project Customization
https://docs.cyotek.com/cyosmc/1.2/advlinkmaps.html
2020-08-03T14:10:44
CC-MAIN-2020-34
1596439735812.88
[]
docs.cyotek.com
Objective 7: Act Local Most organizations have a complex app environment, with multiple internal and external apps for different purposes (e.g. staging environments, employee benefits portals, etc.). InsightAppSec can help protect your public apps, as well as the apps internal to your network. This requires the installation of the local scan engine to your network. This guide can help you set up an on-premises scan engine, and begin scanning your internal apps in no time.
https://docs.rapid7.com/insightappsec/objective-7-act-local/
2020-08-03T15:46:54
CC-MAIN-2020-34
1596439735812.88
[]
docs.rapid7.com
Start by going to your Microsoft Teams dashboard and click the "Apps" button in the left side menu: Locate and click on the "Incoming Webhook" badge: On the pop-up window, click on the "Add to a team" button: Select the Team and Channel where the notifications will be sent, and click on the "Set up a connector" button: In the next step, give a friendly name to your new integration, optionally upload an avatar, and then click on the "Create" button. If you wish to upload our logo as the avatar for this integration, you can find it here: You will then be presented with your Incoming Webhook URL, which you should copy, as you will need it in the next steps on our platform: Now head over to the HetrixTools platform and access your Contact Lists from your client area: Either create a new Contact List or edit an existing one, and locate the Microsoft Teams section of the Contact List. Here, paste the Webhook URL that you obtained earlier, and save the Contact List: And that's all; you will now receive our notifications in your Microsoft Teams chat.
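If you would like to verify the webhook before a real alert fires, a small script along the lines of the hedged Python sketch below can post a test message to the copied URL. The URL and payload fields are placeholders, and the exact card format that HetrixTools sends may differ.

# Hypothetical sketch: post a simple test message to the Incoming Webhook URL
# copied in the steps above. The URL below is a placeholder, not a real webhook.
import requests

webhook_url = "https://outlook.office.com/webhook/xxxx"  # replace with your copied URL

payload = {
    "title": "HetrixTools alert (example)",
    "text": "Uptime monitor example.com is DOWN.",
}

response = requests.post(webhook_url, json=payload, timeout=10)
response.raise_for_status()          # connector webhooks usually reply with "1" on success
print(response.status_code, response.text)

If the message appears in the selected channel, the integration is wired up correctly and real notifications should arrive the same way.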
https://docs.hetrixtools.com/microsoft-teams-integration/
2020-08-03T14:43:19
CC-MAIN-2020-34
1596439735812.88
[array(['https://i1.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r14.png?w=980&ssl=1', None], dtype=object) array(['https://i1.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r7.png?w=980&ssl=1', None], dtype=object) array(['https://i0.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r8.png?w=980&ssl=1', None], dtype=object) array(['https://i1.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r9.png?w=980&ssl=1', None], dtype=object) array(['https://i2.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r10.png?w=980&ssl=1', None], dtype=object) array(['https://i0.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r11.png?w=980&ssl=1', None], dtype=object) array(['https://i2.wp.com/docs.hetrixtools.com/wp-content/uploads/2019/08/a1.png?w=980&ssl=1', None], dtype=object) array(['https://i1.wp.com/docs.hetrixtools.com/wp-content/uploads/2020/03/r12.png?w=980&ssl=1', None], dtype=object) ]
docs.hetrixtools.com
Environment Overview

After logging into KintoHub, all users begin at the Environment Overview page. This page provides a holistic view of your services in a single environment. You can imagine combining your web, backend APIs, jobs, databases, and all the other services that make up your app within a single environment.

Environment Overview Page

To get to the environment overview, you need to log in to KintoHub and you will automatically be placed on the last Environment page you visited. If you have not created an environment yet, you will be prompted to create one.

Create New Environment

This feature is only available for Pay-As-You-Go users. Free users are limited to 1 environment.
- Login to your account
- Click on the dropdown at the top left next to your environment name
- Click Create Environment
- Select your region of choice
- Create a name for your environment
- Start adding services to your environment

Regions

Every KintoHub environment is hosted in a single cloud region. Regions are specific to the cloud host provider and its regions. No matter the region you choose, you will be charged the same price as specified in Billing.

Google Cloud Regions
- Google Cloud - North America - is hosted in us-west1, or The Dalles, Oregon, USA
- Google Cloud Europe - is hosted in europe-west1, or St. Ghislain, Belgium
- Google Cloud Asia - is hosted in asia-east1, or Changhua County, Taiwan

Amazon Cloud Regions

Azure Cloud Regions

Service List

Under the services tab, you will be able to see all of your services and their information.
- Service Type - Displayed on the top left with an icon identifier on the far left.
- Service Name - Name of your service; defaults to your repository name.
- Service Sub-Type - Relevant information such as Dockerfile or Database type information of your service.
- Last Updated - Shows when your service was last updated.
- Status - A service can be Healthy or Unhealthy at any given time. When your service is unhealthy, it means that it is currently not accessible and needs your attention!
https://docs.kintohub.com/anatomy/environment/
2020-08-03T14:43:22
CC-MAIN-2020-34
1596439735812.88
[]
docs.kintohub.com
Bug Check 0x28: CORRUPT_ACCESS_TOKEN The CORRUPT_ACCESS_TOKEN bug check has a value of 0x00000028.
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x28--corrupt-access-token
2020-08-03T16:48:04
CC-MAIN-2020-34
1596439735812.88
[]
docs.microsoft.com
Accessibility

Requirements intended to be used as the basis for an accessible ICT procurement toolkit. The present requirements will primarily be useful for public procurers to identify the requirements for their purchases, and also for manufacturers to employ it within their design, build and quality control procedures.
- Requirement: NLX0050
- Source: EN 301 549
- Category: Web pages
- Type: Recommendation (Mandatory after implementation in 2018 of Dutch legislation: 'Wet Digitale Overheid' and 'AMvB: Besluit digitale toegankelijkheid')
- Compliant: Yes
- Description: Functional accessibility requirements applicable to ICT products and services, together with a description of the test procedures and evaluation methodology for each accessibility requirement, in a form that is suitable for use in public procurement within Europe, in support of Mandate 376. Incorporates WCAG2 for web pages.
- Implications: The API Discovery user interface which publishes the NLX API's must comply with "WCAG 2.0 Success Criterion 1.1.1 Non-text content" as specified in the EN 301 549 v1.1.2 standard.
https://docs.nlx.io/compliancy/accessibility/
2020-08-03T15:45:46
CC-MAIN-2020-34
1596439735812.88
[]
docs.nlx.io
Attack Policy

This panel allows you to select the list of attack types, attack locations, such as directory, file, or path, and other properties. The following elements are available on this panel:

Attack Policy Template

This dropdown list contains the available attack policies and a Load button to load the selected template. The predefined templates include one that makes an inventory of all the web pages in your app and helps you understand its topology, plus the following:
- OWASP 2017 - Enables attacks related to the most critical web application security risks listed in the OWASP 2017 report.
- Passive analysis - Enables only passive analysis modules.
- SQL Injection - Includes all SQL Injection related attacks.
- XSS - Enables all attacks related to Cross-site Scripting.
- SQL Injection and XSS - In many real-life scenarios, attackers use a combination of SQL Injection and Cross-site scripting vulnerabilities to gather sensitive data from your application. This template gives you an overall picture of such vulnerabilities in your application.

Note: To change the current policy, you must switch the template from the "Attack Policy Template" dropdown and click the Load button to apply its settings. You can save currently selected modules as a custom policy using the Save button. The custom policies are located in the AppSpider data folder under the "AttackPolicy" directory. Each policy is located in a separate file. If a custom policy name matches a scanner default policy name, then the custom policy overwrites the scanner default policy. The interface merges default scanner policies with user policies in the Attack policy template list.

Attack Policy tab

The attack policy tab lets you configure the following properties:

Module Policy tab

An attack module contains the details of a vulnerability and the logic used by AppSpider to test for that vulnerability. The "Module Policy" table lists all the attack modules, and displays the following information:
- Module Name - Identifies the vulnerability AppSpider will detect, such as SQL Injection or File Traversal.
- Type - Whether the module is an active or passive attack. Passive - These attacks do not send active payloads to your website; examples include misconfigured headers or missing tags such as "autocomplete". Active - These modify query strings, POST and GET requests to search for vulnerabilities that require active manipulation of your site. Examples include SQL Injection and OS Commanding.
- Severity - The default severities can be modified depending on their impact on your requirements, such as enhanced compliance needs.
- Max Findings - The count of findings we will report before we ignore subsequent findings. These are useful in the case of site-wide findings. If your application has misconfigured headers, for example, this can affect the entire site. Reporting this vulnerability for each page of your site will just increase alert fatigue. This example is usually remediated in one spot: the server configuration.
- Description - A brief synopsis of the module's goals.

Attack locations tab

This screen contains the list of attacks with options to select the attack location, such as Directory, File, Query, and Post. You can use this screen to prevent attacks from being run on unselected locations. For example, if a new comment on your application triggers a notification email, you may decide to prevent attacks from running on 'Post' locations.
https://docs.rapid7.com/appspider/attack-policy/
2020-08-03T15:15:32
CC-MAIN-2020-34
1596439735812.88
[array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46/appspider/images/Attack Policy.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46/appspider/images/Attack locations.png', None], dtype=object) ]
docs.rapid7.com
This page describes common configuration, tuning, and environment requirements for the machine that hosts the Controller. These considerations apply whether the machine runs Linux or Windows or is a virtual machine. For specific considerations for your operating system type, see the related pages links.

Time Synchronization Service

A time synchronization service, such as the Network Time Protocol daemon (ntpd), should be enabled on the Controller host machine.

MySQL Conflict

Certain Linux installation types include MySQL as a bundled package. No MySQL instances other than the one included with the Controller should run on the Controller host. Verify that no such MySQL processes are running.

Virtual Memory Space

The virtual memory size (swap space on Linux or Pagefile space on Windows) should be at least 10 GB on the target system, and ideally 20 GB. Verify the size of virtual memory on your system and modify it if it is less than 10 GB. Refer to documentation for your operating system for instructions on modifying the swap space or Pagefile size.

Disk Space

In addition to the minimum disk space required to install the Controller for your profile size, the Enterprise Console writes temporary files to the system temporary directory, typically /tmp on Linux or c:\tmp on Windows. The Enterprise Console requires 1024 MB of free temp space on the controller host. On Windows, in case of an error due to not meeting the above requirement, you can set the temporary directory environment variable to a directory with sufficient space for the duration of the installation process. You can restore the setting to the original temp directory when installation is complete.

Network Ports

Review the ports that the Controller uses to communicate with agents and the rest of the AppDynamics platform. For more information, see Port Settings. Note that on Linux systems, port numbers below 1024 may be considered privileged ports that require root access to open. The default Controller listen ports are not configured for numbers under 1024, but if you intend to set them to a number below 1024 (such as 80 for the primary HTTP port), you need to run the Enterprise Console as the root user.
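As a quick pre-flight check against the 1024 MB free temp space requirement quoted above, a small script such as the following hedged Python sketch can be run on the host. It is illustrative only and not part of the Enterprise Console; the threshold is simply the figure stated on this page.

# Hedged helper (not an AppDynamics tool): sanity-check the host against the
# documented minimum free space in the system temporary directory.
import shutil
import tempfile

def check_temp_space(min_mb=1024):
    tmp_dir = tempfile.gettempdir()              # /tmp on Linux, %TEMP% on Windows
    free_mb = shutil.disk_usage(tmp_dir).free // (1024 * 1024)
    status = "OK" if free_mb >= min_mb else "TOO LOW"
    print(f"{tmp_dir}: {free_mb} MB free ({status}, minimum {min_mb} MB)")
    return free_mb >= min_mb

if __name__ == "__main__":
    check_temp_space()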
https://docs.appdynamics.com/display/PRO44/Prepare+the+Controller+Host
2020-08-03T15:21:35
CC-MAIN-2020-34
1596439735812.88
[]
docs.appdynamics.com
Managing rule sets

Rule sets logically group one or more rules. Rule sets can be used in span actions (for example, Deploy to Active) to make configuration changes and can be enabled for real-time auditing. When BMC Network Automation is installed for the first time, a sample set of industry-standard, security-related rule sets and associated rules is imported into your database. The sample rules help you get started defining and organizing your own rules for your organization. If you are upgrading BMC Network Automation, the sample rules are not imported but are available in the server installation directory (BCAN_HOME\public\bmc\bca-networks\rules). At any time, you can import the sample rules using the Rule Import task. The following table contains conceptual information and tasks that describe how to manage rule sets and provides links to applicable topics.

Related topics: About defining and organizing rules; Adding a rule import task
https://docs.bmc.com/docs/NetworkAutomation/89/administering/performing-network-administration-tasks/managing-rule-sets
2020-08-03T15:44:30
CC-MAIN-2020-34
1596439735812.88
[]
docs.bmc.com
Workflow Error Handling

Task Retries

When an error is raised from the workflow itself, the workflow execution will fail: it will end with failed status, and should have an error message under its error field. There is no built-in retry mechanism for the entire workflow. However, there is a retry mechanism for task execution within a workflow. Two types of errors can occur during task execution: Recoverable and NonRecoverable. By default, all errors originating from tasks are Recoverable. The maximum number of retries for workflow operations is 60, with retries occurring at 15 second intervals for a maximum of 15 minutes. If a NonRecoverable error occurs, the workflow execution will fail, similarly to the way described for when an error is raised from the workflow itself. If a Recoverable error occurs, the task execution might be attempted again from its start. This depends on the configuration of the task_retries and max_retries parameters, which determine how many retry attempts will be given by default to any failed task execution. The task_retries and max_retries parameters can be set in one of the following ways:
- If the operation max_retries parameter has been set for a certain operation, it will be used.
- When installing the manager, the task_retries parameter is a configuration parameter in the mgmtworker.workflows section of the config.yaml file.
- task_retries, task_retry_interval and subgraph_retries can also all be set using the CLI (cfy config).

If the parameter is not set, it will default to the value of -1, which means maximum retries (i.e. 60). In addition to the task_retries parameter, there is also the task_retry_interval parameter, which determines the minimum amount of wait time (in seconds) after a task execution fails before it is retried. It can be set in the very same way task_retries and max_retries are set. If it isn't set, it will default to the value of 15.

Lifecycle Retries (Experimental)

In addition to task retries, there is a mechanism that allows retrying a group of operations. This mechanism is used by the built-in install, scale and heal workflows. By default it is turned off. To enable it, set the subgraph_retries parameter in the mgmtworker.workflows section of the config.yaml file to a positive value (or -1 for infinite subgraph retries). The parameter is named subgraph_retries because the mechanism is implemented using the subgraphs feature of the workflow framework. The following example demonstrates how this feature is used by the aforementioned built-in workflows. Consider the case where some cloudify.nodes.Compute node template is used in a blueprint to create a VM. The sequence of operations used to create, configure and start the VM will most likely be mapped using the node type's cloudify.interfaces.lifecycle interface (create, configure and start operations, respectively), mapping the operations to some IaaS plugin implementation. The create operation may be implemented in such a way that it makes an API call to the relevant IaaS to create the VM. The start operation may be implemented in such a way that it waits for the VM to be in some started state and have a private IP assigned to it. In such an implementation, it is possible that the API call to create the VM was successful but the VM itself started in some malformed manner (e.g. no IP was assigned to it). The task retries mechanism alone may not be sufficient to fix this problem, as simply retrying the start operation will not change the VM's corrupted state. A possible solution in this case is to run the stop and delete operations of the cloudify.interfaces.lifecycle interface and then re-run create, configure and start again, in the hope that the new VM will be created in a valid state. This is exactly what the lifecycle retry mechanism does. Once the number of attempts to execute a lifecycle operation (start in the example above) exceeds 1 + task_retries, the lifecycle retry mechanism kicks in. If subgraph_retries is set to a positive number (or -1 for infinity), a lifecycle retry is performed, which in essence means: run "uninstall" on the relevant node instance and then run "install" on it. Similarly to the task_retries parameters, the subgraph_retries parameter affects the number of lifecycle retries attempted before failing the entire workflow.
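To picture the task-retry semantics described above, here is a small, purely illustrative Python sketch (it is not Cloudify's implementation): recoverable errors are retried up to task_retries times with task_retry_interval seconds between attempts, while non-recoverable errors fail the run immediately.

# Illustrative sketch only: a generic retry loop that mimics the documented
# semantics of task_retries and task_retry_interval. It is not Cloudify code.
import time

class NonRecoverableError(Exception):
    """Errors that should fail immediately, without retries."""

def run_with_retries(task, task_retries=60, task_retry_interval=15):
    attempt = 0
    while True:
        try:
            return task()
        except NonRecoverableError:
            raise                                  # fail the workflow right away
        except Exception as exc:                   # treated as Recoverable by default
            if attempt >= task_retries:
                raise
            attempt += 1
            print(f"attempt {attempt} failed ({exc}); retrying in {task_retry_interval}s")
            time.sleep(task_retry_interval)

# Example usage with a task that fails twice before succeeding.
calls = {"n": 0}
def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient IaaS error")
    return "started"

print(run_with_retries(flaky_task, task_retries=5, task_retry_interval=0))

The lifecycle retry mechanism then sits one level above this loop: only once the per-task retries are exhausted does it re-run the whole uninstall/install sequence for the node instance.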
https://docs.cloudify.co/latest/working_with/workflows/error-handling/
2020-08-03T15:38:20
CC-MAIN-2020-34
1596439735812.88
[]
docs.cloudify.co
Debugging

This section includes documentation on how to debug Flatpak apps.

Running debugging tools

Because Flatpak runs each application inside a sandbox, debugging tools can't be used in the usual way, and must instead be run from inside the sandbox. To get a shell inside an application's sandbox, it can be run with the --command option:

$ flatpak run --command=sh --devel <application-id>

This creates a sandbox for the application with the given ID and, instead of running the application, runs a shell inside the sandbox. From the shell prompt, it is then possible to run the application. This can also be done using any debugging tools that you want to use. For example, to run the application with gdb:

$ gdb /app/bin/<application-binary>

This works because the --devel option tells Flatpak to use the SDK as the runtime, which includes debugging tools like gdb. The --devel option also adjusts the sandbox setup to enable debugging.

Note: The Freedesktop SDK (on which many others are based) includes a range of debugging tools, such as gdb, strace, nm, dbus-send, dconf, and many others.

gdb is much more useful when it has access to debug information for the application and the runtime it is using. Flatpak splits this information off into debug extensions, which you should install before debugging an application:

$ flatpak install <runtime-id>.Debug

When the --devel option is used, Flatpak will automatically use any matching debug extensions that it finds. It is also possible to get a shell inside an application sandbox without having to install it. This is done using flatpak-builder's --run option:

$ flatpak-builder --run <build-dir> <manifest> sh

This sets up a sandbox that is populated with the build results found in the build directory, and runs a shell inside it.

Creating a .Debug extension

Like many other packaging systems, Flatpak separates bulky debug information from regular content and ships it separately, in what is called a .Debug extension. When an application is built, flatpak-builder automatically creates a .Debug extension. This can be disabled with the no-debuginfo option.

Overriding sandbox permissions

It is sometimes useful to have extra permissions in a sandbox when debugging. This can be achieved using the various sandbox options that are accepted by the run command. For example:

$ flatpak run --devel --command=sh --system-talk-name=org.freedesktop.login1 <application-id>

This command runs a shell in the sandbox for the given application, granting it system bus access to the bus name owned by logind.

Inspecting portal permissions

Flatpak has a number of commands that allow you to manage portal permissions for applications. To see all portal permissions of an application, use:

$ flatpak permission-show <application-id>

To reset all portal permissions of an application, use:

$ flatpak permission-reset <application-id>
https://docs.flatpak.org/en/latest/debugging.html
2020-08-03T15:00:11
CC-MAIN-2020-34
1596439735812.88
[]
docs.flatpak.org
Performing an advanced search

Advanced Search enables you to define and execute a multi-level search and save it for reuse later.

Select inside the search box, and then select Advanced Search. Select the content to search for: Documents, Documents and Emails, Matters, Folders. Using Search Scope, select the desired database. If only one database is integrated with iManage Work, you will not see a drop-down list. From the criteria drop-down lists, select the filters and values to search for. In the text fields, enter the text you are looking for. The following figure shows the results for a document search where: the document must include briefing only in the title, and the keyword appendix only must appear in the title, description, content, or anywhere else. (Figure: Advanced Search)

Optional: Select Add Additional Criteria to add more levels of search criteria and narrow your search. Select the remove icon next to a search criteria level to remove it. Select Clear to return all drop-down lists to their default state and to clear all text fields. Select Show Criteria or Hide Criteria to display or hide all search criteria fields. Select All to search for terms in a different language, or All to display matching documents by all authors in the search results. If you have multiple databases integrated with iManage Work, the documents that you have authored are displayed below Personalized.

In the search results list, click the required document and then use the toolbar icons to: display the Properties, Versions, and Preview tabs for the document; open the document right-click menu that contains options for the various document tasks that you can perform; or return to the iManage Work home page.

Saving the Advanced Search

Select Save as Search Folder. Navigate to the location where you want to save the search. In the New Search Folder Properties pane, specify the following properties for the folder: Folder name (required), Description, Default Security. To change the default security setting, select View Security Details, clear the Inherit Security From Parent Folder check box, select Private, Public, or View from the drop-down list, and select Add Users/Groups to give access to the desired users and groups. Select Save.
https://docs.imanage.com/work-web-help/10.2.5/en-US/Performing_an_advanced_search.html
2020-08-03T15:43:54
CC-MAIN-2020-34
1596439735812.88
[]
docs.imanage.com
Identity service overview

The Identity service contains these components:
- Server: A centralized server provides authentication and authorization services using a RESTful interface.
- Drivers: Drivers or a service back end are integrated with the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).
- Modules: Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.
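As a client-side illustration of talking to the centralized server, the hedged Python sketch below uses the keystoneauth1 library to request a token. The endpoint URL, credentials, and domain names are placeholders that you would replace with values from your own deployment.

# Hedged example: obtain a token from the Identity service with keystoneauth1.
# All values below are placeholders for a typical single-domain deployment.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(
    auth_url="http://controller:5000/v3",   # Identity API endpoint (placeholder)
    username="demo",
    password="demo-password",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)

sess = session.Session(auth=auth)
print("token:", sess.get_token())            # subsequent authenticated requests reuse this session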
https://docs.openstack.org/keystone/latest/install/get-started-rdo.html
2020-08-03T14:35:29
CC-MAIN-2020-34
1596439735812.88
[]
docs.openstack.org
Getting Started - 2 minutes to read In this section, three eXpressApp Framework (XAF) tutorials are available: a basic tutorial, a comprehensive tutorial, and the mobile application tutorial. You can also start by watching the Gentle Introduction to XAF webinar video, and then proceed to the tutorials. Basic Tutorial (SimpleProjectManager Application). Comprehensive Tutorial (MainDemo Application). Building a CRM application using XAF with Dave and Adam (Video).
https://docs.devexpress.com/eXpressAppFramework/113577/getting-started?p=netstandard
2020-08-03T14:48:24
CC-MAIN-2020-34
1596439735812.88
[array(['/eXpressAppFramework/images/gs_gallery1_img4121952.png?p=netstandard', 'GS_gallery1_img4'], dtype=object) array(['/eXpressAppFramework/images/gs_galery2_img2121940.png?p=netstandard', 'GS_galery2_img4'], dtype=object) ]
docs.devexpress.com
The Create User function is for administrators to establish new users in the enterprise directory and the SecureAuth IdP environment.

1. Create a New Realm for the Create User function.
2. The SecureAuth IdP directory Service Account must have the write privileges to add users. This is not required, as a company may wish to have users create their own accounts.

Click Save once the configurations have been completed and before leaving the Data page to avoid losing changes.

2. Select Create User from the Authenticated User Redirect dropdown in the Post Authentication tab in the Web Admin.
3. An unalterable URL will be auto-populated in the Redirect To field, which will append to the domain name and realm number in the address bar (Authorized/CreateUser.aspx).
4. A customized post authentication page can be uploaded, but it is not required.
5. Select the type of User ID that will be asserted to the Create User function from the User ID Mapping dropdown. This is typically the Authenticated User ID. No configuration is required for the Name ID Format and Encode to Base64 fields.
6. Select Hide, Show, or Required for each SecureAuth Field (corresponding to the Profile Properties in the Data tab) to elect what will appear and what can be modified on the Create User page. Hide will not show the SecureAuth Field on the page. Show will show the SecureAuth Field on the page, and the administrator can edit the information. Required will show the SecureAuth Field on the page, and the administrator must edit the information.
7. Select Enter Manually to create a specific password or Generate Automatically to create a random password from the Password field.
8. Choose whether to Mask Password and whether the user Must Change Password after the account is created.
9. Select the KBQ Count.
10. Provide the Group List to assign the new user into the appropriate groups, separated by commas.
11. Choose whether to send an Email Notification to the user when the account is created.

Click Save once the configurations have been completed and before leaving the Post Authentication page to avoid losing changes.

12. Click View and Configure FormsAuth keys / SSO token to configure the token/cookie settings and to configure this realm for Single Sign-on (SSO). These are optional configurations.
https://docs.secureauth.com/plugins/viewsource/viewpagesrc.action?pageId=25723701
2020-08-03T15:56:06
CC-MAIN-2020-34
1596439735812.88
[]
docs.secureauth.com
Function: a!cardLayout()

Displays any arrangement of layouts and components within a card on an interface. Can be styled or linked.

The following patterns include usage of the Card Layout:

Alert Banner Patterns (Choice Components): The alert banners pattern is good for creating a visual cue of different types of alerts about information on a page.

Icon Navigation Pattern (Conditional Display, Formatting): The icon navigation pattern displays a vertical navigation pane where users can click on an icon to view its associated interface.

KPI Patterns (Formatting): The Key Performance Indicator (KPI) patterns provide a common style and format for displaying important performance measures.

Navigation Pattern (Looping): Use the navigation pattern as a way to structure a group of pages with icon and text based left navigation. When an icon and text are selected, the corresponding page is displayed.
https://docs.appian.com/suite/help/20.2/card_layout.html
2020-08-03T15:27:58
CC-MAIN-2020-34
1596439735812.88
[]
docs.appian.com
The Knowledge Module (KM) for UNIX contains the knowledge that PATROL uses during system monitoring, analysis, and management activities. This knowledge can include command descriptions, applications, parameters, recovery actions, and other information useful in administering UNIX systems. The KM parameters allow you to perform system performance analysis quickly and easily because they can provide a detailed statement of all system activity through time. You can clearly identify peaks, troughs, and trends in the performance of system resources. By enabling you to detect problems, optimize systems, analyze trends, plan for capacity, and manage multiple hosts simultaneously, the KM helps you ensure that your UNIX systems run efficiently 24 hours a day. The components and products that are included in version 9.13.00 of PATROL for UNIX and Linux are shown below. Note: The following topics briefly describe each of the components and products that support the BMC PATROL for UNIX and Linux knowledge module.
https://docs.bmc.com/docs/display/public/unixlinux913/Components
2020-08-03T15:55:15
CC-MAIN-2020-34
1596439735812.88
[]
docs.bmc.com
Understand Azure Synapse Analytics

Azure Synapse Analytics provides a unified environment by combining the enterprise data warehouse of SQL, the Big Data analytics capabilities of Spark, and data integration technologies to ease the movement of data between both, and from external data sources. Using Azure Synapse Analytics, you are able to ingest, prepare, manage, and serve data for immediate BI and machine learning needs more easily. Using SQL Analytics and SQL Pools in Azure Synapse Analytics, you will be able to work with modern data warehouse use cases to create databases and tables. In addition, you can load data using PolyBase and query a data warehouse using the Massively Parallel Processing architecture. In fact, you can use any existing code that you have created for Azure SQL Data Warehouse with Azure Synapse Analytics seamlessly. Azure Synapse Analytics stores the incoming data into relational tables with columnar storage. This format significantly reduces the data storage costs and improves the query performance. Once information is stored in Azure Synapse Analytics, you can run analytics at massive scale. Compared to traditional database systems, queries on Azure Synapse Analytics finish in a fraction of the time. Within this platform, SQL Analytics offers T-SQL for batch, streaming, and interactive processing of data.

Data Warehousing with Azure Synapse Analytics: SQL Pools using Data Warehouse Units

With Azure Synapse Analytics, CPU, memory, and IO are bundled into units of compute scale called SQL pools. A SQL pool is a normalized measure of compute resources and performance, and its size is determined by Data Warehousing Units (DWU). By changing your service level, you alter the number of DWUs that are allocated to the system.

Benefits of data warehousing with Azure Synapse Analytics

Azure Synapse Analytics is a key component required for creating end-to-end relational big data solutions in the cloud today. It allows data to be ingested from a variety of data sources, and leverages a scale-out architecture to distribute computational processing of data across a large cluster of nodes, which can:
- Independently size compute power irrespective of the storage needs.
- Grow or shrink compute power without moving data.
- Pause compute capacity while leaving data intact, so you only pay for storage.
- Resume compute capacity during operational hours.

The Azure Synapse Analytics capabilities are made possible due to the decoupling of computation and storage using the Massively Parallel Processing architecture.
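To make the serving side concrete, the hedged Python sketch below connects to a dedicated SQL pool with pyodbc and runs a simple query. The server, database, and credential values are placeholders, and the ODBC driver name may differ on your machine.

# Hypothetical connection details for a dedicated SQL pool; replace the placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"   # placeholder server name
    "DATABASE=mysqlpool;"                        # placeholder SQL pool name
    "UID=sqladminuser;PWD=<password>"
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")   # simple sanity-check query
for row in cursor.fetchall():
    print(row.name)
conn.close()

Because the pool speaks T-SQL over standard drivers, any existing tooling you already use against Azure SQL Data Warehouse should connect the same way.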
https://docs.microsoft.com/en-gb/learn/modules/design-azure-sql-data-warehouse/2-azure-synapse-analytics
2020-08-03T15:58:32
CC-MAIN-2020-34
1596439735812.88
[array(['../../data-ai-cert/design-azure-sql-data-warehouse/media/3-sql-dw-architecture.png', 'Azure SQL Data Warehouse architecture'], dtype=object) ]
docs.microsoft.com
Tip o' the Week #44 – Making Outlook show only email from external senders

- Save this SCL.CFG file to your PC – it needs to be dropped into a particular folder where a load of other .CFG and .ICO files already exist: it's the definition for a custom Outlook form that we'll use to define what the SCL value is. Save it to your desktop or somewhere else you can find it easily, for now.
- Now, open up the correct destination for the CFG file – the default locations are (open using Windows Explorer):
  - C:\Program Files\microsoft office\Office14\FORMS\1033\
  - or C:\Program Files (x86)\microsoft office\Office14\FORMS\1033\
  (it would be the latter if you are running 64-bit Windows – if you try to open/navigate to the first one but the FORMS folder doesn't exist, try the 2nd location… and if you know your programs are installed on a drive other than C:, then substitute appropriately)
- Move the CFG file from your desktop into the appropriate folder you've opened up by docking the newly opened window to the side (press WindowsKey + Right) and drag/drop it – you'll need to confirm that you want to provide administrative privileges for this.
- Back in Outlook 2010, go to File | Options | Advanced | Custom Forms (button, about 2/3 of the way down the page) | Manage Forms | Install (phew).
- Navigate within the dialog to the CFG file you saved in step 1 above, and Open it.
- Press OK on the form properties dialog – you should now see the SCL Extension Form listed on the right hand side – now hit Close | OK | OK to return to the main Outlook view. OK, you could now add SCL to your default view if you really want… otherwise skip to step 2…
- In the View tab on the Ribbon, select View Settings then click on the Columns button. In the "Select available columns from…" drop-down box, look right at the bottom, select Forms… then point to Personal Forms in the drop-down list, and you should be able to select SCL Extension Form and add it to the right.
- Now, SCL will be available as a column if you select "SCL Extension Form" again from the "Select available columns…" drop-down; add SCL to the right. If you now return to the standard Outlook view and hover over an external message, you should see the SCL value displayed.
- In the Search Folder dialog that appears, scroll to the very bottom and select "Create a custom Search Folder", then click the Choose button to select the criteria.
- Give it a meaningful name (like External Mail since yesterday), then hop to the Advanced tab to set the criteria…
- Now you can add multiple sets of criteria if you like, but the main one is to select the SCL Extension Form in the Field drop down, then choose the SCL value and set the condition to be at least 0: this would show all external mail.
- You might want to add another one, like setting the Received field to be on or after "8am yesterday" (if you set that, literally, as the condition, Outlook will figure it out). I've also excluded some folders by name in this example – any folder that has Junk or Deleted in its name won't show in the list. You'll find "Received" and "In Folder" fields in the "All Mail Fields" group.
- DONE!

Now you should see the new search folder. It can be added to your Favourites collection if you like (right click on it, choose Show in Favorites), and if you want to go back in to tweak it further, simply right-click on it and Customize.
https://docs.microsoft.com/en-us/archive/blogs/ewan/tip-o-the-week-44-making-outlook-show-only-email-from-external-senders
2020-08-03T15:18:11
CC-MAIN-2020-34
1596439735812.88
[array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/48/08/metablogapi/8233.clip_image002_thumb_4251F7ED.jpg', 'clip_image002 clip_image002'], dtype=object) ]
docs.microsoft.com