content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Become a Partner!
While our Community partner level is FREE to developers and provides access to the Blackboard Learn AMI for developers to build REST and LTI applications, API limits apply and Behind the Blackboard support access is not included. Consider investing in one of our other partnership levels to receive added benefits that help partners deepen integrations, promote solutions and connect with Blackboard clients.
Blackboard Developers Network (BbDN) is Blackboard’s basic partnership and is available for $3,000 annually. BbDN Partnerships include the following benefits:
Member partners receive:
- One (1) Blackboard Learn Developer License (for a self-hosted instance)
- Access to Shared SaaS sites
- Access to Behind the Blackboard Support portal
- Custom listing in our Partner Catalog
- Listing in our App Catalog
- Use of Blackboard Licensed Marks in marketing materials (subject to approval)
- Eligible to sponsor exhibit at BbWorld® and other Blackboard events.
BbDN Partners also have the option to add-on of a “Blackboard Learn SaaS Starter site” for $2,000 annually.
- Limited use for testing/demo purposes
- Licensed for 20 active users and 20 GB Storage
- Updated automatically against the Blackboard Learn SaaS Test Release schedule
- For more info, see our help details About the Blackboard Learn SaaS Deployment
Sign up for the BbDN agreement today on our registration page, or visit our Partnerships Program page to learn about additional opportunities to work together. | https://docs.blackboard.com/partners/become-a-partner | 2021-06-12T17:31:59 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.blackboard.com |
End-of-Life (EoL)
Virtual Routers
The firewall uses virtual routers to obtain routes to other subnets by you manually defining static routes or through participation in one or more Layer 3 routing protocols (dynamic routes). The routes that the firewall obtains through these methods populate the firewall’s IP routing information base (RIB). When a packet is destined for a different subnet than the one it arrived on, the virtual router obtains the best route from the RIB, places it in the forwarding information base (FIB), and forwards the packet to the next hop router defined in the FIB. The firewall uses Ethernet switching to reach other devices on the same IP subnet. (An exception to one best route going in the FIB occurs if you are using ECMP, in which case all equal-cost routes go in the FIB.)
The Ethernet, VLAN, and tunnel interfaces defined on the firewall receive and forward Layer 3 packets. The destination zone is derived from the outgoing interface based on the forwarding criteria, and the firewall consults policy rules to identify the security policies that it applies to each packet. In addition to routing to other network devices, virtual routers can route to other virtual routers within the same firewall if a next hop is specified to point to another virtual router.
You can configure Layer 3 interfaces on a virtual router to participate with dynamic routing protocols (BGP, OSPF, OSPFv3, or RIP) as well as add static routes. You can also create multiple virtual routers, each maintaining a separate set of routes that aren’t shared between virtual routers, enabling you to configure different routing behaviors for different interfaces.
Each Layer 3 Ethernet, loopback, VLAN, and tunnel interface defined on the firewall must be associated with a virtual router. While each interface can belong to only one virtual router, you can configure multiple routing protocols and static routes for a virtual router. Regardless of the static routes and dynamic routing protocols you configure for a virtual router, one general configuration is required:
- Gather the required information from your network administrator.
- Interfaces on the firewall that you want to perform routing.
- Administrative distances for static, OSPF internal, OSPF external, IBGP, EBGP and RIP.
- Create a virtual router and apply interfaces to it.The firewall comes with a virtual router nameddefault. You can edit thedefaultvirtual router or add a new virtual router.
- Select.NetworkVirtual Routers
- Select a virtual router (the one nameddefaultor a different virtual router) orAddtheNameof a new virtual router.
- Select.Router SettingsGeneral
- ClickAddin theInterfacesbox and select an already defined interface from the drop-down.Repeat this step for all interfaces you want to add to the virtual router.
- ClickOK.
- Set Administrative Distances for static and dynamic routing.Set Administrative Distances for types of routes as required for your network. When the virtual router has two or more different routes to the same destination, it uses administrative distance to choose the best path from different routing protocols and static routes, by preferring a lower distance.
- Static—Range is 10-240; default is 10.
- OSPF Internal—Range is 10-240; default is 30.
- OSPF External—Range is 10-240; default is 110.
- IBGP—Range is 10-240; default is 200.
- EBGP—Range is 10-240; default is 20.
- RIP—Range is 10-240; default is 120.
- Commit virtual router general settings.ClickOKandCommit.
- Configure Ethernet, VLAN, loopback, and tunnel interfaces as needed.
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/8-0/pan-os-admin/networking/virtual-routers.html | 2021-06-12T17:58:39 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.paloaltonetworks.com |
Controller replacement¶
The OSP Director allows to perofrm controller replacement procedure. More details can be found here:
The
cloud-config plugin automates that procedure. Suppose you already have a deployment with more than one controller.
First step is to extend existing deployment with a new controller node. For virtaul deployment the
virsh plugin can be used:
infrared virsh --topology-nodes controller:1 \ --topology-extend True \ --host-address my.hypervisor.address \ --host-key ~/.ssh/id_rsa
Next step is to perform controller replacement procedure using
cloud-config plugin:
infrared cloud-config --tasks=replace_controller \ --controller-to-remove=controller-0 \ --controller-to-add=controller-3 \
This will repalce controller-0 with the newly added controller-3 node. Nodes index start from 0.
Currently controller replacement is supported only for OSP13 and above.
Advanced parameters¶
In case the controller to be replaced cannot be connected by ssh, the
rc_controller_is_reachable should be set to
no.
This will skip some tasks that should be performed on the controller to be removed:
infrared cloud-config --tasks=replace_controller \ --controller-to-remove=controller-0 \ --controller-to-add=controller-3 \ -e rc_controller_is_reachable=no | https://infrared.readthedocs.io/en/stable/controller_replacement.html | 2021-06-12T18:23:45 | CC-MAIN-2021-25 | 1623487586239.2 | [] | infrared.readthedocs.io |
Creation and registration
To use the GraphLinq Engine test-net or main-net you need to have a wallet for submitting graphs execution over the network. Basically, a web3 compatible wallet is needed for linking with our engine.> What is Web3?
On metamask, you can review your list of tokens and receive payments in your own ETH address. You can use any of the compatible wallet over the list above:
You need to have a chromium compatible browser to install plugins to create an Ethereum wallet, MetaMask is the most mainstream and used extension.> MetaMask Extension Link
Once you have your wallet and created a new Ethereum address, you can use it to make transactions over the Ethereum network, to add the GraphLinq Token to your token list browse to "Add a token" and "Custom Tokens": copy-paste the GLQ token address available at the home page of this documentation.
Now, you need to sign up into the Dashboard interface with your wallet to get registered into GraphLinq, select a wallet from the list on app.graphlinq.io then follow the steps:
You will need to sign a transaction that notifies us about the ownership of your wallet (it's signed with your private key) which allows the Engine to read and check that the signed transaction is from your wallet. A session will start with the engine on your browser linked to the Engine network, you can now deploy and manage your graphs securely. | https://docs.graphlinq.io/wallet/1-index/ | 2021-06-12T16:48:22 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['https://graphlinq.io/docs-images/wallet.png', None], dtype=object)
array(['https://graphlinq.io/docs-images/sign.png', None], dtype=object)] | docs.graphlinq.io |
Download Rundeck 3.0.19 now
A copy of the release notes can be found below:
Release 3.0.19
Date: 2019-03-27
Name: “jalapeño popper peachpuff phone”
Notes
Bug fixes
Contributors
- David Hewitt (davidmhewitt)
- Jaime Tobar (jtobard)
- Greg Zapp (ProTip)
- Stephen Joyner (sjrd218)
- carlos (carlosrfranco)
Bug Reporters
- Daryes
- ProTip
- carlosrfranco
- cwaltherf
- danifr
- davidmhewitt
- gschueler
- jplassnibatt
- sjrd218
Issues
- Switch edit regex field visibility to knockout handler
- Fix file upload plugin on job creation page.
- Making GUI feature as default on production envirionment to show step…
- Stop holding DB connections during job execution
- Fixes configurability of file upload plugin #4369
- Display label instead of name for remote options
- Update README for spring boot launch
- Addresses #1111 and #3939.
- Rundeck holds connections on job referenced jobs
- step labels from child jobs not listed in GUI Report view
- RD3 : NullPointerException on session timeout with LDAP
- option regex field doesn’t appear after selecting File type
- Null Pointer Exception while logging out using an AD user | https://docs.rundeck.com/news/2019/03/27/rundeck-3.0.19.html | 2021-06-12T17:43:40 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.rundeck.com |
Subscribe Pro PHP SDK Release Notes
Our PHP SDK can be used to perform API Requests to the Subscribe Pro system to modify and enhance your existing subscription functionality. Our PHP SDK is used in our official Magento 1, Magento 2 and Zoey Commerce integrations.
You can download the SDK, follow development of the SDK and participate by submitting pull requests or issues via the Subscribe Pro PHP SDK Project on GitHub.
Release Notes
Full releases notes for our PHP SDK are available on GitHub: | https://docs.subscribepro.com/technical/release-notes/subscribe-pro-php-sdk/ | 2021-06-12T18:01:17 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.subscribepro.com |
Contents:
Contents:
- input Datetime value is assumed to be in UTC time zone.:
convertfromutc(myUTCtimestamp,'US/Eastern')
Output: Returns the values of the
myUTCtimestamp converted to US Eastern time zone.
Syntax and Arguments
convertfromutc(date, 'enum-timezone').
Values are assumed to be in UTC time zone format. Coordinated Universal Time is the primary standard time by which clocks are coordinated around the world.
- UTC is also known as Greenwich Mean Time.
- UTC does not change for daylight savings time.
- For more information, see.
String literal value for the time zone to which to convert., Trifacta Wrangler. | https://docs.trifacta.com/display/SS/CONVERTFROMUTC+Function | 2021-06-12T17:46:35 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.trifacta.com |
Remediating compliance results
After running a Compliance Job based on one of the Compliance Content component templates, you can access job results and manually remediate the configuration of components that failed the Compliance Job. The remediation process runs a Deploy Job and deploys one of the BLPackages provided in the Compliance Content libraries, as specified in the remediation options of a specific compliance rule.
After performing remediation, you can still change your mind and undo the remediation.
Before you begin
Remediation for the CIS, DISA, HIPAA, PCIv2, and PCIv3 templates for Windows is provided for both Member Servers and Domain Controller servers. For Domain Controller servers, remediation is provided on Default Domain Controller Security Policy and/or Default Domain Security Policy, as per the settings you have specified for the REMEDIATE_SETTING_FOR_GPO template property.
Before performing the remediation operation, you must ensure that you have set appropriate values for the following properties:
In addition, ensure that the following properties in the Server built-in property class are set with appropriate values:
- IS_SSLF
PCI Properties / CIS Properties / DISA Properties – pointing to the correct instance of the custom property class
- Remediation for any policy on Windows or Linux computers fails if any built-in users or groups that are referred to in rules in the component template are renamed or deleted. You must modify or delete the offending user names or group names within the rules and remediation packages in the component template before you can successfully perform remediation.
- Remediation and undo of audit rules for the CIS - RedHat Linux 5 and PCIv2 - RedHat Linux 5 templates will not take effect if the /etc/audit/audit.rules file contains the -e 2 entry. You must manually remove the entry and restart the target server.
- In the component templates for any policy on a Windows operating system, rules for security settings are designed to check both the local settings and the effective settings. However, on a Member Server only the local settings are modified during remediation, because effective settings are pushed only from the domain controller. As a result, rules for user rights and security settings on a Member Server will show as non-compliant even after running a remediation job if effective settings, which reflect the Group Policy Objects (GPOs), are not in line with the compliance policy design. In such a case, consult your local system administrator to bring the Group Policy in line with the BMC Server Automation Compliance Policy.
Note
Although on a Member Server the User Rights Assignment and Security Options group of rules are designed to remediate only the local settings, the BMC Server Automation Console may display remediated values for both local and effective settings. Similarly, if you push a value from the domain controller, the BMC Server Automation Console may display that value for both local and effective settings. Consult your local system administrator to bring the Group Policy in line with the BMC Server Automation Compliance Policy.
To begin the remediation process
- Navigate to the relevant Compliance Job, right-click it, and select Show Results.
- In the content editor, expand a particular run of the Compliance Job.
- Under the Rules View node, navigate to the relevant component template, rule group, or single compliance rule, right click it, and select Remediate.
For full instructions, see Manually remediating compliance results. | https://docs.bmc.com/docs/ServerAutomation/87/using/analyzing-system-compliance/compliance-content-analysis-and-remediation/remediating-compliance-results | 2021-06-12T18:23:38 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.bmc.com |
Generic ADC.
Add to your firmware section: This example contains all possible configuration options, not all of them are mandatory!
ADC_CHANNEL_GENERIC1value: ADCX
ADC_CHANNEL_GENERIC2value: ADCX
ADC_GENERIC_PERIODIC_SENDvalue: TRUE|FALSE
These initialization functions are called once on startup.
These functions are called periodically at the specified frequency from the module periodic loop.
The following headers are automatically included in modules.h | https://docs.paparazziuav.org/v5.18/module__adc_generic.html | 2021-06-12T16:45:34 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.paparazziuav.org |
Base stabilization code for rotorcraft
Also provide the direct mode 'stabilization_none' As a durty hack, it also provides navigation and guidance: this should be done in the airframe at some point
Add to your firmware section:
STABILIZATION_FILTER_CMDprefix:
STABILIZATION_FILTER_CMD_
ROLL_PITCHvalue: 0
YAWvalue: 0
ROLL_CUTOFFvalue: 20.0
PITCH_CUTOFFvalue: 20.0
YAW_CUTOFFvalue: 20.0
These initialization functions are called once on startup.
The following headers are automatically included in modules.h | https://docs.paparazziuav.org/v5.18/module__stabilization_rotorcraft.html | 2021-06-12T18:09:54 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.paparazziuav.org |
clippy¶
clippy is the tool for Rust static analysis.
Run Locally¶
The mozlint integration of clippy can be run using mach:
$ mach lint --linter clippy <file paths>
Note
clippy expects a path or a .rs file. It doesn’t accept Cargo.toml as it would break the mozlint workflow.
Configuration¶
To enable clippy on new directory, add the path to the include section in the clippy.yml file.
Autofix¶
This linter provides a
--fix option. It requires using nightly
which can be installed with:
$ rustup component add clippy --toolchain nightly-x86_64-unknown-linux-gnu | https://firefox-source-docs.mozilla.org/code-quality/lint/linters/clippy.html | 2021-06-12T17:42:02 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
Fuzzing Interface¶
The fuzzing interface is glue code living in mozilla-central in order to make it easier for developers and security researchers to test C/C++ code with either libFuzzer or afl-fuzz.
These fuzzing tools, are based on compile-time instrumentation to measure things like branch coverage and more advanced heuristics per fuzzing test. Doing so allows these tools to progress through code with little to no custom logic/knowledge implemented in the fuzzer itself. Usually, the only thing these tools need is a code “shim” that provides the entry point for the fuzzer to the code to be tested. We call this additional code a fuzzing target and the rest of this manual describes how to implement and work with these targets.
As for the tools used with these targets, we currently recommend the use of libFuzzer over afl-fuzz, as the latter is no longer maintained while libFuzzer is being actively developed. Furthermore, libFuzzer has some advanced instrumentation features (e.g. value profiling to deal with complicated comparisons in code), making it overall more effective.
What can be tested?¶
The interface can be used to test all C/C++ code that either ends up in
libxul (more precisely, the gtest version of
libxul) or is
part of the JS engine.
Note that this is not the right testing approach for testing the full browser as a whole. It is rather meant for component-based testing (especially as some components cannot be easily separated out of the full build).
Note
Note: If you are working on the JS engine (trying to reproduce a bug or seeking to develop a new fuzzing target), then please also read the JS Engine Specifics Section at the end of this documentation, as the JS engine offers additional options for implementing and running fuzzing targets.
Reproducing bugs for existing fuzzing targets¶
If you are working on a bug that involves an existing fuzzing interface target, you have two options for reproducing the issue:
Using existing builds¶
We have several fuzzing builds in CI that you can simply download. We recommend
using
fuzzfetch for this purpose, as it makes downloading and unpacking
these builds much easier.
You can install
fuzzfetch from
Github or
via pip.
Afterwards, you can run
$ python -m fuzzfetch -a --fuzzing --gtest -n firefox-fuzzing
to fetch the latest optimized build. Alternatively, we offer non-ASan debug builds which you can download using
$ python -m fuzzfetch -d --fuzzing --gtest -n firefox-fuzzing
In both commands,
firefox-fuzzing indicates the name of the directory that
will be created for the download.
Afterwards, you can reproduce the bug using
$ FUZZER=TargetName firefox-fuzzing/firefox test.bin
assuming that
TargetName is the name of the fuzzing target specified in the
bug you are working on and
test.bin is the attached testcase.
Note
Note: You should not export the
FUZZER variable permanently
in your shell, especially if you plan to do local builds. If the
FUZZER
variable is exported, it will affect the build process.
If the CI builds don’t meet your requirements and you need a local build instead, you can follow the steps below to create one:
Local build requirements and flags¶
You will need a Linux environment with a recent Clang. Using the Clang downloaded
by
./mach bootstrap or a newer version is recommended.
The only build flag required to enable the fuzzing targets is
--enable-fuzzing,
so adding
ac_add_options --enable-fuzzing
to your
.mozconfig is already sufficient for producing a fuzzing build.
However, for improved crash handling capabilities and to detect additional errors,
it is strongly recommended to combine libFuzzer with AddressSanitizer
by adding
ac_add_options --enable-address-sanitizer
at least for optimized builds and bugs requiring ASan to reproduce at all (e.g. you are working on a bug where ASan reports a memory safety violation of some sort).
Once your build is complete, you must additionally run
$ ./mach gtest dontruntests
to force the gtest libxul to be built.
Note
Note: If you modify any code, please ensure that you run both build
commands to ensure that the gtest libxul is also rebuilt. It is a common mistake
to only run
./mach build and miss the second command.
Once these steps are complete, you can reproduce the bug locally using the same steps as described above for the downloaded builds.
Developing new fuzzing targets¶
Developing a new fuzzing target using the fuzzing interface only requires a few steps.
Determine if the fuzzing interface is the right tool¶
The fuzzing interface is not suitable for every kind of testing. In particular if your testing requires the full browser to be running, then you might want to look into other testing methods.
The interface uses the
ScopedXPCOM implementation to provide an environment
in which XPCOM is available and initialized. You can initialize further subsystems
that you might require, but you are responsible yourself for any kind of
initialization steps.
There is (in theory) no limit as to how far you can take browser initialization. However, the more subsystems are involved, the more problems might occur due to non-determinism and loss of performance.
If you are unsure if the fuzzing interface is the right approach for you or you require help in evaluating what could be done for your particular task, please don’t hestitate to contact us.
Develop the fuzzing code¶
Where to put your fuzzing code¶
The code using the fuzzing interface usually lives in a separate directory
called
fuzztest that is on the same level as gtests. If your component
has no gtests, then a subdirectory either in tests or in your main directory
will work. If such a directory does not exist yet in your component, then you
need to create one with a suitable
moz.build. See the transport target
for an example
In order to include the new subdirectory into the build process, you will
also have to modify the toplevel
moz.build file accordingly. For this
purpose, you should add your directory to
TEST_DIRS only if
FUZZING_INTERFACES
is set. See again the transport target for an example.
How your code should look like¶
In order to define your fuzzing target
MyTarget, you only need to implement 2 functions:
A one-time initialization function.
At startup, the fuzzing interface calls this function once, so this can be used to perform one-time operations like initializing subsystems or parsing extra fuzzing options.
This function is the equivalent of the LLVMFuzzerInitialize function and has the same signature. However, with our fuzzing interface, it won’t be resolved by its name, so it can be defined
staticand called whatever you prefer. Note that the function should always
return 0and can (except for the return), remain empty.
For the sake of this documentation, we assume that you have
static int FuzzingInitMyTarget(int* argc, char*** argv);
The fuzzing iteration function.
This is where the actual fuzzing happens, and this function is the equivalent of LLVMFuzzerTestOneInput. Again, the difference to the fuzzing interface is that the function won’t be resolved by its name. In addition, we offer two different possible signatures for this function, either
static int FuzzingRunMyTarget(const uint8_t* data, size_t size);
or
static int FuzzingRunMyTarget(nsCOMPtr<nsIInputStream> inputStream);
The latter is just a wrapper around the first one for implementations that usually work with streams. No matter which of the two signatures you choose to work with, the only thing you need to implement inside the function is the use of the provided data with your target implementation. This can mean to simply feed the data to your target, using the data to drive operations on the target API, or a mix of both.
While doing so, you should avoid altering global state in a permanent way, using additional sources of data/randomness or having code run beyond the lifetime of the iteration function (e.g. on another thread), for one simple reason: Coverage-guided fuzzing tools depend on the deterministic nature of the iteration function. If the same input to this function does not lead to the same execution when run twice (e.g. because the resulting state depends on multiple successive calls or because of additional external influences), then the tool will not be able to reproduce its fuzzing progress and perform badly. Dealing with this restriction can be challenging e.g. when dealing with asynchronous targets that run multi-threaded, but can usually be managed by synchronizing execution on all threads at the end of the iteration function. For implementations accumulating global state, it might be necessary to (re)initialize this global state in each iteration, rather than doing it once in the initialization function, even if this costs additional performance.
Note that unlike the vanilla libFuzzer approach, you are allowed to
return 1in this function to indicate that an input is “bad”. Doing so will cause libFuzzer to discard the input, no matter if it generated new coverage or not. This is particularly useful if you have means to internally detect and catch bad testcase behavior such as timeouts/excessive resource usage etc. to avoid these tests to end up in your corpus.
Once you have implemented the two functions, the only thing remaining is to register them with the fuzzing interface. For this purpose, we offer two macros, depending on which iteration function signature you used. If you sticked to the classic signature using buffer and size, you can simply use
#include "FuzzingInterface.h" // Your includes and code MOZ_FUZZING_INTERFACE_RAW(FuzzingInitMyTarget, FuzzingRunMyTarget, MyTarget);
where
MyTarget is the name of the target and will be used later to decide
at runtime which target should be used.
If instead you went for the streaming interface, you need a different include, but the macro invocation is quite similar:
#include "FuzzingInterfaceStream.h" // Your includes and code MOZ_FUZZING_INTERFACE_STREAM(FuzzingInitMyTarget, FuzzingRunMyTarget, MyTarget);
For a live example, see also the implementation of the STUN fuzzing target.
Add instrumentation to the code being tested¶
libFuzzer requires that the code you are trying to test is instrumented
with special compiler flags. Fortunately, adding these on a per-directory basis
can be done just by including the following directive in each
moz.build
file that builds code under test:
# Add libFuzzer configuration directives include('/tools/fuzzing/libfuzzer-config.mozbuild')
The include already does the appropriate configuration checks to be only active in fuzzing builds, so you don’t have to guard this in any way.
Note
Note: This include modifies CFLAGS and CXXFLAGS accordingly
but this only works for source files defined in this particular
directory. The flags are not propagated to subdirectories automatically
and you have to ensure that each directory that builds source files
for your target has the include added to its
moz.build file.
By keeping the instrumentation limited to the parts that are actually being tested using this tool, you not only increase the performance but also potentially reduce the amount of noise that libFuzzer sees.
Build your code¶
See the Build instructions above for instructions
how to modify your
.mozconfig to create the appropriate build.
Running your code and building a corpus¶
You need to set the following environment variable to enable running the fuzzing code inside Firefox instead of the regular browser.
FUZZER=name
Where
name is the name of your fuzzing module that you specified
when calling the
MOZ_FUZZING_INTERFACE_RAW macro. For the example
above, this would be
MyTarget or
StunParser for the live example.
Now when you invoke the firefox binary in your build directory with the
-help=1 parameter, you should see the regular libFuzzer help. On
Linux for example:
$ FUZZER=StunParser obj-asan/dist/bin/firefox -help=1
You should see an output similar to this:
Running Fuzzer tests... Usage: To run fuzzing pass 0 or more directories. obj-asan/dist/bin/firefox [-flag1=val1 [-flag2=val2 ...] ] [dir1 [dir2 ...] ] To run individual tests without fuzzing pass 1 or more files: obj-asan/dist/bin/firefox [-flag1=val1 [-flag2=val2 ...] ] file1 [file2 ...] Flags: (strictly in form -flag=value) verbosity 1 Verbosity level. seed 0 Random seed. If 0, seed is generated. runs -1 Number of individual test runs (-1 for infinite runs). max_len 0 Maximum length of the test input. If 0, libFuzzer tries to guess a good value based on the corpus and reports it. ...
Reproducing a Crash¶
In order to reproduce a crash from a given test file, simply put the file as the only argument on the command line, e.g.
$ FUZZER=StunParser obj-asan/dist/bin/firefox test.bin
This should reproduce the given problem.
FuzzManager and libFuzzer¶
Our FuzzManager project comes with a harness for running libFuzzer with an optional connection to a FuzzManager server instance. Note that this connection is not mandatory, even without a server you can make use of the local harness.
You can find the harness here.
An example invocation for the harness to use with StunParser could look like this:
FUZZER=StunParser python /path/to/afl-libfuzzer-daemon.py --fuzzmanager \ --stats libfuzzer-stunparser.stats --libfuzzer-auto-reduce-min 500 --libfuzzer-auto-reduce 30 \ --tool libfuzzer-stunparser --libfuzzer --libfuzzer-instances 6 obj-asan/dist/bin/firefox \ -max_len=256 -use_value_profile=1 -rss_limit_mb=3000 corpus-stunparser
What this does is
run libFuzzer on the
StunParsertarget with 6 parallel instances using the corpus in the
corpus-stunparserdirectory (with the specified libFuzzer options such as
-max_lenand
-use_value_profile)
automatically reduce the corpus and restart if it grew by 30% (and has at least 500 files)
use FuzzManager (need a local
.fuzzmanagerconfand a
firefox.fuzzmanagerconfbinary configuration as described in the FuzzManager manual) and submit crashes as
libfuzzer-stunparsertool
write statistics to the
libfuzzer-stunparser.statsfile
JS Engine Specifics¶
The fuzzing interface can also be used for testing the JS engine, in fact there are two separate options to implement and run fuzzing targets:
Implementing in C++¶
Similar to the fuzzing interface in Firefox, you can implement your target in entirely C++ with very similar interfaces compared to what was described before.
There are a few minor differences though:
All of the fuzzing targets live in js/src/fuzz-tests.
All of the code is linked into a separate binary called fuzz-tests, similar to how all JSAPI tests end up in jsapi-tests. In order for this binary to be built, you must build a JS shell with
--enable-fuzzingand
--enable-tests. Again, this can and should be combined with AddressSanitizer for maximum effectiveness. This also means that there is no need to (re)build gtests when dealing with a JS fuzzing target and using a shell as part of a full browser build.
The harness around the JS implementation already provides you with an initialized
JSContextand global object. You can access these in your target by declaring
extern JS::PersistentRootedObject gGlobal;
and
extern JSContext* gCx;
but there is no obligation for you to use these.
For a live example, see also the implementation of the StructuredCloneReader target.
Implementing in JS¶
In addition to the C++ targets, you can also implement targets in JavaScript
using the JavaScript Runtime (JSRT) fuzzing approach. Using this approach is
not only much simpler (since you don’t need to know anything about the
JSAPI or engine internals), but it also gives you full access to everything
defined in the JS shell, including handy functions such as
timeout().
Of course, this approach also comes with disadvantages: Calling into JS and performing the fuzzing operations there costs performance. Also, there is more chance for causing global side-effects or non-determinism compared to a fairly isolated C++ target.
As a rule of thumb, you should implement the target in JS if
you don’t know C++ and/or how to use the JSAPI (after all, a JS fuzzing target is better than none),
your target is expected to have lots of hangs/timeouts (you can catch these internally),
or your target is not isolated enough for a C++ target and/or you need specific JS shell functions.
There is an example target in-tree that shows roughly how to implement such a fuzzing target.
To run such a target, you must run the
js (shell) binary instead of the
fuzz-tests binary and point the
FUZZER variable to the file containing
your fuzzing target, e.g.
$ FUZZER=/path/to/jsrtfuzzing-example.js obj-asan/dist/bin/js --fuzzing-safe --no-threads -- <libFuzzer options here>
More elaborate targets can be found in js/src/fuzz-tests/.
Troubleshooting¶
Fuzzing Interface: Error: No testing callback found¶
This error means that the fuzzing callback with the name you specified
using the
FUZZER environment variable could not be found. Reasons
for are typically either a misspelled name or that your code wasn’t
built (check your
moz.build file and build log). | https://firefox-source-docs.mozilla.org/tools/fuzzing/fuzzing_interface.html | 2021-06-12T18:34:49 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
The OpenStack control plane upgrade stage includes upgrading of the
OpenStack services APIs. To minimize the API downtime, we recommend that
you select the quickest upgrade depth that does not include running
OS_UPGRADE or
OS_DIST_UPGRADE. You can perform both
OS_UPGRADE
and
OS_DIST_UPGRADE during the post-upgrade stage if required.
To upgrade the OpenStack VCP:
Log in to the Jenkins web UI.
If Ironic will be upgraded, perform the following steps to upgrade Ironic
Conductor before the Ironic API. The
nova-compute service running on
the Ironic Conductor nodes must be upgraded only after the Nova Controller
has been upgraded.
Caution
The upgrade of Ironic is available starting from the MCP 2019.2.6 update. See MCP Release Notes: Maintenance updates for details.
Verify that the following variables are set on each Ironic Conductor
node in Reclass in the
classes/nodes/_generated/<node_name>.yml
files:
parameters: _param: nova: upgrade: enabled: false
Refresh pillars:
salt '*' saltutil.refresh_pillar
Run the Deploy - upgrade control VMs pipeline job on the
Ironic Conductor nodes in the interactive mode setting the
TARGET_SERVERS parameter to
bmt*.
Once the pipeline job execution is finished, verify that the following
variables are set for each Ironic Conductor node in Reclass in the
classes/nodes/_generated/<node_name>.yml files:
parameters: _param: nova: upgrade: enabled: true ironic: upgrade: enabled: false
Refresh pillars:
salt '*' saltutil.refresh_pillar
Run the Deploy - upgrade control VMs pipeline job on the OpenStack controller nodes in the interactive mode setting the parameters as follows:
TARGET_SERVERS=ctl*
After you upgrade the
ctl nodes, define the following values
one by one to upgrade additional OpenStack components from Pike to Queens
as required:
TARGET_SERVERS=share* to upgrade the Manila control plane
TARGET_SERVERS=mdb* to upgrade the Tenant Telemetry including Ceilometer, Gnocchi, Aodh, and Panko
TARGET_SERVERS=kmn* to upgrade Barbican
TARGET_SERVERS=bmt* to upgrade Ironic
Note
During the second execution of the pipeline job on the Ironic
Conductor nodes, only
nova-compute will be upgraded since Ironic
has already been upgraded in step 2.
MODE=INTERACTIVE mode to get the detailed description of the pipeline job flow through the stages
Verify that the control plane is up and the OpenStack services from the data plane are reconnected and working correctly with the newly upgraded control plane.
Run the Deploy - upgrade control VMs pipeline job on the proxy nodes with TARGET_SERVERS=’prx*’ set.
Verify that the public API is accessible and Horizon is working.
Perform the upgrade of other control plane nodes where required depending on your deployment.
Verify that the control plane is upgraded to the intended OpenStack release, APIs work correctly and are available, and the services enable the users to manage their resources.
Note
The new features of the intended OpenStack release are not available till the data plane nodes are upgraded.
Proceed to Upgrade the OpenStack data plane. | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/update-upgrade/major-upgrade/upgrade-openstack/os-pike-queens-upgrade-detailed/upgrade-vcp-p-q.html | 2021-06-12T17:29:07 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.mirantis.com |
Event Grid SDKs for management and publishing
Event Grid provides SDKs that enable you to programmatically manage your resources and post events.
Management SDKs
The management SDKs enable you to create, update, and delete event grid topics and subscriptions. Currently, the following SDKs are available:
Data plane SDKs
The data plane SDKs enable you to post events to topics by taking care of authenticating, forming the event, and asynchronously posting to the specified endpoint. They also enable you to consume first party events. Currently, the following SDKs are available:
Next steps
- For example applications, see Event Grid code samples.
- For an introduction to Event Grid, see What is Event Grid?
- For Event Grid commands in Azure CLI, see Azure CLI.
- For Event Grid commands in PowerShell, see PowerShell. | https://docs.azure.cn/en-us/event-grid/sdk-overview | 2022-08-08T00:21:56 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.azure.cn |
- 15 Nov 2021
- 6 Minutes to read
-
- DarkLight
v10.0
- Updated on 15 Nov 2021
- 6 Minutes to read
-
- DarkLight
Release Date: 9th June 2021
Highlights
In the last 15 months, the product team has worked hard on this release. The main focus of BizTalk360 version 10 is improving the user experience and bring a fresh look to the entire application. Besides that, also a set of new features and improvements have been brought to the product.
Important updates
Browser support
BizTalk360 v10 supports the modern chromium browsers as recommended by Angular
Messaging patterns
From this version on, messaging patterns will be determined using the BizTalk Management database(BizTalkMgmtDb). In previous versions, the messaging patterns were determined from either the Tracking database or the Management database. However, because determining the messaging patterns using the Tracking database the performance can be impacted. Hence, we moved to determine the message patterns via the Management database.
Manage dependencies
Analytics Reporting: To support the new client framework(Angular), the dependencies file is upgraded to the latest version of Select PDF. To know more about upgrading the dependencies file read this article.
Upgrade support
Similar to the previous versions of the BizTalk360, the upgrade process is the same.
Two features are impacted in this version to support the Angular framework.
Feature compatibility
Deprecated / Not Available features
- HPOM Integration - Deprecated
- Live Feed - Deprecated
- Integration Account - Planned for the future release
- Integrated Troubleshooter - Planned for the future release
New features
UI/UX Refresh
- Query builder: The Query builder to access saved queries has been enriched with an interactive user interface (UI).
- Dashboard & widgets: Widgets configuration and graphical representation of the widgets in Operations and Analytics dashboard's user experience has been improved.
- Monitoring dashboard: The monitoring dashboard's full & collapsed graph view is consolidated into a single view. Users can set their preferred options to visualize the monitoring dashboard with a dark/light theme, fit to read/view, and enhanced filtering capabilities
- New filtering capabilities: Enhanced filtering component is helpful to view the drill-down data in manage alarms, monitoring dashboard, data monitoring dashboard, and governance audit
- Card view layout: The card view layout can represent the data in a structured way. Users can switch between Card and Grid view based on their preference in manage alarms, license features
- Message pattern: Message pattern layout is enriched with a modern look & feel with sharp & initiative UI to represent the flow of messages. Editing the name of the pattern has been made easier. To know more...
- BizTalk group topology:The user experience is improved in the BizTalk group topology diagram to visualize the processing, tracking communication between the BizTalk Server and SQL Server nodes.
SQL Server availability monitoring
Server availability monitoring provides the ability to monitor failover clusters and standalone SQL Servers using the protocols Ping or Telnet. BizTalk Server highly depends on its SQL Server databases for storing messages, picking them up for processing, maintaining the state and configuration of all kinds of artifacts, etc. It is crucial that BizTalk Server can access its databases via SQL Server, hence we brought this feature to monitor SQL Server availability. To know more...
Enriched Knowledge Base
We enriched the Knowledge Base feature by allowing users to create articles within the features like Message Box queries(suspended service instances), the Advanced Event Viewer, and the ESB portal. The Knowledge Base article creation flow and editing of the article are enhanced to manage the content in a better way. To know more...
Renewed data monitoring dashboard
The data monitoring dashboard is completely redesigned to grid view from calendar view. This will help the users get the consolidated view to visualize the scheduled results. Besides, enriched filtering capabilities and export options have made the user can generate the drill-down report. To know more...
Business Rules composer
Business Rules are composed using XSLT from this version. User Experience of managing the rules and policies has been aligned as like BizTalk Rule Composer with context menus. To know more...
Landing page
A collaborative landing page is to present the quick status of BizTalk environments in a card layout. In the addition to that users can able to view the statistical information of the BizTalk Group, view the BizTalk Group Topology, and manage the license activation. To know more...
Enhancements
Pin the results to the dashboard
Pin the results of suspended service instances, secure SQL queries as a table view into the operations/analytics preferred dashboard.
BizTalk Health Monitor
Enhanced BHM Schedule configurations are
- Option to remove the schedule configuration is implemented.
- In each BizTalk environment, users can be able to configure different BHM schedules.
- In BizTalk360 high availability configuration has shown the server name in which BHM schedules are executed.
ESB exceptions
In ESB Exception data the functional improvements are
- A new filter to get already submitted fault exceptions is implemented.
- Resubmitted fault exceptions are indicated with an icon in the grid.
Manage SQL Server configuration
From this version, the managing the SQL Server in a centralized place for operations, monitoring, and analytics. Removed the SQL server configuration from performance data collection.
Azure subscriptions configuration
Azure subscriptions configuration is no longer requires the publish settings file.
Schedule maintenance configuration
Schedule maintenance configuration's UI/UX is improved in the Business Holiday calendar list and configuration can be managed within schedule maintenance
Export to Excel & PDF
Users can able to download the data in excel and PDF formats. With the enhanced export all options users can able to download in a single file.
Secure SQL queries
Grant access to all secure SQL queries permission has been implemented, by which normal users/NT users can able to access all the existing or queries that are created in the future.
Bug fixes
Operations
- Multiple actions(Resume/Terminate) on the same suspended service instances are allowed in the MessageBox queries. It has been fixed by allowing one action at a time
- Fixed the issue of NT users are not able to create secure SQL queries
Monitoring
- File location monitoring: Support of different file mask patterns(PPI*, Order*.xml) in File and FTP locations is implemented in this version
- Web endpoints don’t appear on the BizTalk Group dashboard. This issue has been resolved in this version
- Data monitoring schedules get stuck if there is a data monitoring alarm with an end date. This issue has been fixed
- Addressed an issue in MessageBox data monitoring, with an orchestration service name as a filter, action is not taken on suspended service instances
General
- Knowledge base articles are allowed to be created without error code. It helps to associate KB articles with dehydrated service instances
Hotfixes
The key focus of these hot fix releases are to address the enhancement and issue fixes that have been identified as part of the v10 release; the details of those are as follows.
v10.0.3049.0207
Enhancements / Bug fixes
- Column resize option for pinned SQL queries widgets in the dashboard
- In the API documentation, routing to the WCF service failed. This issue has been addressed
- BAM views using alias and order by implementation has been resolved
- Values for dates/times in filters are fixed to reflect as per the user profile timezone
- Enabled the option to send ESB fault details in the ESB Data monitoring alerts
- Analytics widget date/time issue has been fixed to reflect today’s data
- Audit history in the schedule maintenance is getting removed. This issue has been resolved in this version
v10.0.3117.2607
Enhancements / Bug fixes
- SQL server disk usage monitoring section shows wrong 100% usage
- Date time filter in BAM had an issue in filtering values which has been addressed
- Saved queries section in tracking queries had an issue in persisting filter values has been resolved
- Export to excel download issue is resolved in Message box queries to download the files without error | https://docs.biztalk360.com/docs/v10030030906 | 2022-08-08T01:01:59 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['https://cdn.document360.io/253f6006-3994-42d0-98cd-fdc637f51791/Images/Documentation/knowledge-base.gif',
None], dtype=object)
array(['https://cdn.document360.io/253f6006-3994-42d0-98cd-fdc637f51791/Images/Documentation/Data%20Monitoring%20Dashboard.png',
None], dtype=object) ] | docs.biztalk360.com |
You use these settings to prevent communication of Active Directory domain names to unauthenticated users using the various Horizon clients. These settings govern whether the information about the Active Directory domains that are registered with your Horizon Cloud environment is sent to the Horizon end-user clients and, if sent, how it is displayed in end-user clients' login screens.
Configuring your environment includes registering your environment with your Active Directory domains. When your end users use a Horizon client to access their entitled desktops and remote applications, those domains are associated with their entitled access. Prior to the March 2019 quarterly service release, the system and clients had default behavior with no options to adjust that default behavior. Starting in March 2019, the defaults are changed, and you can optionally use the new Domain Security Settings controls to change from the defaults.
This topic has the following sections.
- Domain Security Settings
- This Release's Default Behavior Compared with Past Releases
- Relationship to Your Pods' Manifest Levels
- Single Active Directory Domain Scenarios and User Login Requirements
- Multiple Active Directory Domain Scenarios and User Login Requirements
- About Pods in Microsoft Azure with Unified Access Gateway Instances Configured with Two-Factor Authentication
Domain Security Settings
Combinations of these settings determine whether domain information is sent to the client and whether a domain selection menu is available to the end user in the client.
This Release's Default Behavior Compared with Past Releases
The following table details the previous default behavior, the new default behavior, and the settings you can use to adjust the behavior to meet your organization's needs.
Relationship to Your Pods' Manifest Levels
When you are an existing customer with pods created in an earlier service release, until all of your pods in Microsoft Azure are updated to the manifest level for this Horizon Cloud release, your environment is configured by default to provide the same behavior as it had in the previous Horizon Cloud release. That legacy behavior is:
- The system sends the Active Directory domain names to the client (Show Default Domain Only is set to No).
- The clients have a drop-down menu that displays the list of domain names to the end user prior to logging in (Hide Domain Field is set to No).
Also, until all of your pods are at this service release level, the General Settings page does not display the Domain Security Settings controls. If you have a mixed environment with existing non-updated pods and newly deployed pods at this release level, the new controls are not available. As a result, you cannot change from the legacy behavior until all of your pods are at this service release level.
When all of your environment's pods are updated, the settings are available in the Horizon Cloud administrative console. The post-update defaults are set to the pre-update behavior (Show Default Domain Only is No and Hide Domain Field is No). The post-update default settings are different than the new-customer defaults. These settings are applied so that the pre-update legacy behavior continues for your end users after the update, until you choose to change the settings to meet your organization's security needs.
Single Active Directory Domain Scenarios and User Login Requirements
The following table describes the behavior for various setting combinations when your environment has a single Active Directory domain, without two-factor authentication, and your end users use the Horizon Clients 5.0 and later versions.
This table describes the behavior when your environment has a single Active Directory domain and your end users use previous versions of the Horizon clients (pre-5.0).
*DefaultDomain*for the command's domain option or update the client to the 5.0 version. However, when you have more than one Active Directory domain, passing
*DefaultDomain*does not work.
Multiple Active Directory Domain Scenarios and User Login Requirements
This table describes the behavior for various setting combinations when your environment has multiple Active Directory domains, without two-factor authentication, and your end users use the Horizon Clients 5.0 and later versions.
Basically, the end user has to include the domain name when they type in their user name, like
domain\username, except for the legacy combination where the domain names are sent and are visible in the client.
This table describes the behavior when your environment has multiple Active Directory domains and your end users use previous versions of the Horizon clients (pre-5.0).
- Setting Hide Domain Field to Yes allows end users to enter their domain in the User name text box in these pre-5.0 Horizon clients. When you have multiple domains and you want to support use of pre-5.0 Horizon clients by your end users, you must set Hide Domain Field to Yes so that your end users can include the domain name when they type in their user name.
- Using the command-line client launch of older (pre-5.0) clients and specifying the domain in the command fails for all of the combinations below. The only work around when you have multiple Active Directory domains and want to use command-line client launch is to update the client to the 5.0 version.
About Pods in Microsoft Azure with Unified Access Gateway Instances Configured with Two-Factor Authentication
As described in Specify Two-Factor Authentication Capability for the Pod, when you deploy a pod into Microsoft Azure, you have the option of deploying it with two-factor authentication configured on its Unified Access Gateway instances.
When a pod in Microsoft Azure has its Unified Access Gateway configured with two-factor authentication, end users attempting to authenticate with their Horizon clients first see a screen asking for their two-factor authentication credentials, followed by a login screen asking for their Active Directory domain credentials. In this case, the system sends the domain list to the clients only after the end user's credentials successfully pass that initial authentication screen.
Generally speaking, if all of your pods have two-factor authentication configured on their Unified Access Gateway instances, you might consider having the system send the domain list to the clients and have the clients display the domain drop-down menu. That configuration provides the same legacy end-user experience for all of your end users, regardless of which Horizon client version they are using or how many Active Directory domains you have. After the end user successfully completes the two-factor authentication passcode step, they can then select their domain from the drop-down menu in the second login screen. They can avoid having to include their domain name when they enter their credentials into the initial authentication screen.
However, because the Domain Security Settings are applied at the Horizon Cloud customer account (tenant) level, if some of your pods do not have two-factor authentication configured, you might want to avoid sending the domain list, because those pods will send the domain names to the clients connecting to them prior to the end users logging in.
The end-user login requirements by Horizon client follow the same patterns that are described in Single Active Directory Domain Scenarios and User Login Requirements and Multiple Active Directory Domain Scenarios and User Login Requirements. When connecting to a pod that has two-factor authentication configured and you have multiple Active Directory domains, the end user must provide their domain name as
domain\username if Hide Domain Field is set to Yes. | https://docs.vmware.com/en/VMware-Horizon-Cloud-Service/services/hzncloudmsazure.admin15/GUID-FE87529D-CF7F-464D-B74F-A21F19A22F54.html | 2022-08-08T01:16:30 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.vmware.com |
Follow this guide to get a brief guided tour of Virtual Garage Manager’s TyreShop functionality.
Introduction
TyreShop is a subset of functionality located within 2 of our products;
- Virtual Garage Manager allows you to manage local tyre stock, consolidate stock, check-in stock (manually or with a barcode scanner) and if you’re a GroupTyre member, you can also search for and order tyres directly.
- Motasite allows your customers to order tyres for their vehicle through your website. You can configure your own markup criteria so potential customers are offered a fully fitted price. You customers can then place a booking to get these tyres fitted and the tyres are automatically ordered from GroupTyre.
TyreShop – Virtual Garage Manager
In the Summer of 2016, Virtual Garage Manager’s parts interfaces were overhauled due to the introduction of a live tyre feed from GroupTyre. We’ve streamlined the parts data entry process and have made it even easier to go from purchase order, to purchase invoice as well as keeping track of your stock with as little data entry as possible.
Live tyre feed from GroupTyre
Working alongside GroupTyre, we have added the ability to order tyres from your local wholesaler. You’ll be able to search through all of the available stock and will be able to place an order which will be delivered directly to your depot.
How to see a live tyre feed if you’re not a GroupTyre member.
If you’re not yet a member of GroupTyre the best way to proceed is to give us a call and we can make the necessary introductions. There’s no exclusivity required, so you can still use other Tyre wholesalers, but in order to make tyre orders within VGM you’ll need a GroupTyre account.
How to see a live tyre feed if you are a GroupTyre member.
If you’re already a member of GroupTyre, simply get in touch with us and we’ll get access to your GroupTyre price file and will get everything set up for you.
Once we’ve got your GroupTyre data feed hooked up, there are several configuration options that you’ll have access to.
Setting up your tyre markup (local pricing).
Setting up your tyre markup (web pricing).
Local tyre stock
The local tyre stock window will allow you to quick search through your stock with a selection of filters. It’s from here that you can also add tyres to purchase orders.
Searching through your local tyre stock.
Create new tyres manually.
Editing existing tyres manually.
Deleting a tyre.
Printing a tyre label.
Favouriting a tyre.
Purchase orders
You can create purchase orders in a number of ways within VGM, depending on whether or not you’re ordering tyres online from GroupTyre, or raising purchases orders to be sent manually to other suppliers.
Ordering stock – Creating a purchase order and sending an order to GroupTyre.
Ordering stock – Creating a purchase order manually.
Deleting purchase orders.
Finding purchase orders.
Editing purchase orders.
Viewing relationships between purchase orders & purchase invoices.
Purchase invoices
Once you’ve received your parts / tyres, you’ll want to create a purchase invoice and to consolidate that stock. VGM will help you do this automatically so that everything can be synced nicely together.
Creating a new purchase invoice.
Receiving stock – Creating a new purchase invoice from an existing purchase order.
Searching existing purchase invoices.
Deleting purchase invoices.
Posting purchase invoices.
Adding items to a purchase invoice.
Purchase credits
Occasionally, you’ll have to send parts and tyres back to the supplier and you’ll need to raise purchase credits in order to balance the books and to keep your stock up to date.
Creating a purchase credit.
Searching through purchase credits.
Adding items to a purchase credit.
Tyre history
The history tab allows you to see the full history for a selected part or tyre.
Viewing the history of a tyre or part.
Tyre reporting
There are several reports that you can run in order to keep on top of your stock and accounts. The following reports are currently available.
Running a stock report on tyres.
TyreShop – Motasite
If you’re a VGM customer with a GroupTyre account, then we can build a special website for you to sell tyres online (alongside MOTs, Servicing etc…). To save the sales pitch, if this interests you then visit our motasite page. If you have VGM, Motasite and GroupTyre account, then the following articles will be helpful so that you can configure your online pricing.
Setting up your tyre markup (web pricing). | https://docs.motasoft.co.uk/tyreshop-quick-start-guide/ | 2022-08-08T01:47:11 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['http://knowledgebase.motasoft.co.uk/wp-content/uploads/2016/08/image80.png',
'image80'], dtype=object) ] | docs.motasoft.co.uk |
Port configuration requirements
The Splunk Data Stream Processor uses the following ports.
Installer Ports
These ports are required during initial installation and can be closed after installation is complete.
Cluster Ports
These ports are used for Cluster operation and should be open between cluster nodes.
External Ports
The following are ports used by end users outside of the cluster. Not all cluster nodes need to be exposed, but the node externally accessible needs to have the following ports open.
You must open the
30000,
31000, and the
30002 ports to use DSP. The
30001 port is only required if you are planning to send data from a Splunk forwarder to DSP.
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0, 1.2.1-patch02, 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5, 1.3.0, 1.3.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/DSP/1.3.0/Admin/GravityPorts | 2022-08-08T00:37:59 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Print a document and add a barcode
In this case, a barcode will be added to document that is being printed.
The barcode can contain dynamic information from the document (e.g. a customer number, zipcode, ID,...).
Print&Share Configuration
This article assumes you know how to use and add recognition parameters to insert dynamic values in the barcode.
Please follow the steps to build this case:
- Print your document to Print&Share, this makes the configuration of the profile easier.
- Create a new profile containing a channel with print functionality.
- Go to the Get More Editor and click the Insert barcode button.
- Select the symbology you want to us, in our case we select the option
QRCode.
- Click Add next to the Value-label and select Recognition parameters, because we will add a dynamic value based on the content of the document.
- Configure the recognition. (for example Label-recognition for a label called
DocNumtur)
- Click OK to close the Barcode editor dialog.
- Select and move the barcode to the position on the page that you like.
- Optionally you can modify the Barcode-properties. By changing the Range-property you can specify on which page the barcode or QR-code should appear. | https://docs.winking.be/Tn/Article/103 | 2022-08-08T01:03:39 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.winking.be |
Getting Help
Before asking for help, it’s very important for you to try to solve your own problems.
The most important thing you can do is read your log files, and look for things which are out of the ordinary compared to rest of the pattern. A good place to start is at the end of the file, however due to use of parallel processing, this is not always the where you’ll find the problem.
Words to look for to isolate a problem:
segmentation fault
killed
error
fail
warning– sometimes
Be sure to check case.
You can use
less to browse a log file and
/ inside
less to search. Use the keyboard to move around.
When asking for help, it’s important to fully explain the problem you’re having, this means explaining
what you’re doing, including anything out of the ordinary
where you’re doing it (what computer, and how you’re accessing it, i.e. locally/remotely, SSH, X2Go, wifi, ethernet, inside/outside Douglas network)
what the error/problem/unexpected result you’re getting
the complete paths/access to the inputs, outputs, and logs
what you tried to do to solve your problem (in detail) and the results
For more background on getting help, these documents are excellent in understanding how to communicate about this: | https://docs.douglasneuroinformatics.ca/en/latest/getting_help/index.html | 2022-08-08T01:39:18 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.douglasneuroinformatics.ca |
EmR2011 Rule Text
Please see the EmR2011 info page for dates, other information, and more related documents.
Department of Workforce Development (DWD)
Administrative Code Chapter Group Affected:
Chs. DWD 100-150; Unemployment Insurance
Administrative Code Chapter Affected:
Ch. DWD 113 (Revised)
Related to: Waiving interest in limited circumstances for employers subject to reimbursement financing when reimbursements are delinquent due to COVID-19
Comment on this emergency rule
Related documents:
EmR2011 Fiscal Estimate | https://docs.legis.wisconsin.gov/code/register/2020/774A1/register/emr/emr2011_rule_text/emr2011_rule_text | 2022-08-08T02:09:26 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.legis.wisconsin.gov |
Error AADSTS50020 - User account from identity provider does not exist in tenant
This article helps you troubleshoot error code
AADSTS50020 that's returned if a guest user from an identity provider (IdP) can't sign in to a resource tenant in Azure Active Directory (Azure AD).
Symptoms
When a guest user tries to access an application or resource in the resource tenant, the sign-in fails, and the following error message is displayed:
AADSTS50020: User account '[email protected]' from identity provider {IdentityProviderURL} does not exist in tenant {ResourceTenantName}.
When an administrator reviews the sign-in logs on the home tenant, a "90072" error code entry indicates a sign-in failure. The error message states:
User account {email} from identity provider {idp} does not exist in tenant {tenant} and cannot access the application {appId}({appName}) in that tenant. The account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account.
Cause 1: Used unsupported account type (multitenant and personal accounts)
If your app registration is set to a single-tenant account type, users from other directories or identity providers can't sign in to that application.
Solution: Change the sign-in audience setting in the app registration manifest
To make sure that your app registration isn't a single-tenant account type, perform the following steps:
In the Azure portal, search for and select App registrations.
Select the name of your app registration.
In the sidebar, select Manifest.
In the JSON code, find the signInAudience setting.
Check whether the setting contains one of the following values:
- AzureADandPersonalMicrosoftAccount
- AzureADMultipleOrgs
- PersonalMicrosoftAccount
If the signInAudience setting doesn't contain one of these values, re-create the app registration by having the correct account type selected. You currently can't change signInAudience in the manifest.
For more information about how to register applications, see Quickstart: Register an application with the Microsoft identity platform.
Cause 2: Used the wrong endpoint (personal and organization accounts)
Your authentication call must target a URL that matches your selection if your app registration's supported account type was set to one of the following values:
Accounts in any organizational directory (Any Azure AD directory - Multitenant)
Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
Personal Microsoft accounts only
If you use<YourTenantNameOrID>, users from other organizations can't access the application. You have to add these users as guests in the tenant that's specified in the request. In that case, the authentication is expected to be run on your tenant only. This scenario causes the sign-in error if you expect users to sign in by using federation with another tenant or identity provider.
Solution: Use the correct sign-in URL
Use the corresponding sign-in URL for the specific application type, as listed in the following table:
In your application code, apply this URL value in the
Authority setting. For more information about
Authority, see Microsoft identity platform application configuration options.
Cause 3: Signed in to the wrong tenant
When users try to access your application, either they're sent a direct link to the application, or they try to gain access through. In either situation, users are redirected to sign in to the application. In some cases, the user might already have an active session that uses a different personal account than the one that's intended to be used. Or they have a session that uses their organization account although they intended to use a personal guest account (or vice versa).
To make sure that this scenario is the issue, look for the
User account and
Identity provider values in the error message. Do those values match the expected combination or not? For example, did a user sign in by using their organization account to your tenant instead of their home tenant? Or did a user sign in to the
live.com identity provider by using a different personal account than the one that was already invited?
Solution: Sign out, then sign in again from a different browser or a private browser session
Instruct the user to open a new in-private browser session or have the user try to access from a different browser. In this case, users must sign out from their active session, and then try to sign in again.
Cause 4: Guest user wasn't invited
The guest user who tried to sign in was not invited to the tenant.
Solution: Invite the guest user
Make sure that you follow the steps in Quickstart: Add guest users to your directory in the Azure portal to invite the guest user.
Cause 5: App requires user assignment
If your application is an enterprise application that requires user assignment, error
AADSTS50020 occurs if the user isn't on the list of allowed users who are assigned access to the application. To check whether your enterprise application requires user assignment:
In the Azure portal, search for and select Enterprise applications.
Select your enterprise application.
In the sidebar, select Properties.
Check whether the Assignment required option is set to Yes.
Solution: Assign access to users individually or as part of a group
Use one of the following options to assign access to users:
To individually assign the user access to the application, see Assign a user account to an enterprise application.
To assign users if they're a member of an assigned group or a dynamic group, see Manage access to an application.
Cause 6: Tried to use a resource owner password credentials flow for personal accounts
If a user tries to use the resource owner password credentials (ROPC) flow for personal accounts, error
AADSTS50020 occurs. The Microsoft identity platform supports ROPC only within Azure AD tenants, not personal accounts.
Solution: Use an endpoint that's specific to the tenant or organization
Use a tenant-specific endpoint (<TenantIDOrName>) or the organization's endpoint. Personal accounts that are invited to an Azure AD tenant can't use ROPC. For more information, see Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials.
Cause 7: A previously deleted user name was re-created by the home tenant administrator
Error
AADSTS50020 might occur if the name of a guest user who was deleted in a resource tenant is re-created by the administrator of the home tenant. To verify that the guest user account in the resource tenant isn't associated with a user account in the home tenant, use one of the following options:
Verification option 1: Check whether the resource tenant's guest user is older than the home tenant's user account
The first verification option involves comparing the age of the resource tenant's guest user against the home tenant's user account. You can make this verification by using Microsoft Graph or MSOnline PowerShell.
Microsoft Graph
Issue a request to the MS Graph API to review the user creation date, as follows:
GET{id | userPrincipalName}/createdDateTime
Then, check the creation date of the guest user in the resource tenant against the creation date of the user account in the home tenant. The scenario is confirmed if the guest user was created before the home tenant's user account was created.
MSOnline PowerShell
Note
The MSOnline PowerShell module is set to be deprecated. Because it's also incompatible with PowerShell Core, make sure that you're using a compatible PowerShell version so that you can run the following commands.
Run the Get-MsolUser PowerShell cmdlet to review the user creation date, as follows:
Get-MsolUser -SearchString [email protected] | Format-List whenCreated
Then, check the creation date of the guest user in the resource tenant against the creation date of the user account in the home tenant. The scenario is confirmed if the guest user was created before the home tenant's user account was created.
Verification option 2: Check whether the resource tenant's guest alternative security ID differs from the home tenant's user net ID
Note
The MSOnline PowerShell module is set to be deprecated. Because it's also incompatible with PowerShell Core, make sure that you're using a compatible PowerShell version so that you can run the following commands.
When a guest user accepts an invitation, the user's
LiveID attribute (the unique sign-in ID of the user) is stored within
AlternativeSecurityIds in the
key attribute. Because the user account was deleted and created in the home tenant, the
NetID value for the account will have changed for the user in the home tenant. Compare the
NetID value of the user account in the home tenant against the key value that's stored within
AlternativeSecurityIds of the guest account in the resource tenant, as follows:
In the home tenant, retrieve the value of the
LiveIDattribute using the
Get-MsolUserPowerShell cmdlet:
Get-MsolUser -SearchString tuser1 | Select-Object -ExpandProperty LiveID
In the resource tenant, convert the value of the
keyattribute within
AlternativeSecurityIdsto a base64-encoded string:
[convert]::ToBase64String((Get-MsolUser -ObjectId 01234567-89ab-cdef-0123-456789abcdef ).AlternativeSecurityIds.key)
Convert the base64-encoded string to a hexadecimal value by using an online converter (such as base64.guru).
Compare the values from step 1 and step 3 to verify that they're different. The
NetIDof the user account in the home tenant changed when the account was deleted and re-created.
Solution: Reset the redemption status of the guest user account
Reset the redemption status of the guest user account in the resource tenant. Then, you can keep the guest user object without having to delete and then re-create the guest account. You can reset the redemption status by using the Azure portal, Azure PowerShell, or the Microsoft Graph API. For instructions, see Reset redemption status for a guest user.
If you have questions or need help, create a support request, or ask Azure community support.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/troubleshoot/azure/active-directory/error-code-aadsts50020-user-account-identity-provider-does-not-exist | 2022-08-08T01:37:41 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.microsoft.com |
Zoom#
Zoom is a communications technology company that provides videotelephony and online chat services through a cloud-based peer-to-peer software platform and is used for teleconferencing, telecommuting, distance education, and social relations.
Basic Operations#
- Meeting
- Create a meeting
- Delete a meeting
- Retrieve a meeting
- Retrieve all meetings
- Update a meeting
Example Usage#
This workflow allows you to create a meeting in Zoom. You can also find the workflow on the website. This example usage workflow would use the following two nodes. - Start - Zoom
The final workflow should look like the following image.
1. Start node#
The start node exists by default when you create a new workflow. | https://docs.n8n.io/integrations/nodes/n8n-nodes-base.zoom/ | 2022-08-08T02:26:14 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['/_images/integrations/nodes/zoom/workflow.png',
'A workflow with the Zoom node'], dtype=object) ] | docs.n8n.io |
UntagResource
Removes the specified tags from the specified Amazon Chime SDK media capture pipeline. 1011.
Pattern:
^arn[\/\:\-\_\.a-zA-Z0-9]+$
Required: Yes
- TagKeys
The tag keys.
Type: Array of strings
Array Members: Minimum number of 1 item. Maximum number of 50 items.
Length Constraints: Minimum length of 1. Maximum length of 128.
Required: Yes: | https://docs.aws.amazon.com/chime-sdk/latest/APIReference/API_media-pipelines-chime_UntagResource.html | 2022-08-08T00:26:24 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.aws.amazon.com |
Get support from telematics experts
Before we start, we want to point out the many ways how you can reach out to us if you need additional support. We are always ready to assist, and we're looking forward to hearing from you!
Join our developer community
- Visit our forums
ask your question
- Join our workspace on Slack
Request an access
- Github community
- Reach us on StackOverflow
Use our resources
Updated 7 months ago
Did this page help you? | https://docs.damoov.com/docs/get-support | 2022-08-08T02:07:35 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.damoov.com |
When building and deploying artifacts on our CI infrastructure, we build the “master” branch, but also feature branches like “FRESH-123”.
The master builds are what we potentially roll out for our users, so they need to be unimpaired by ongoing work.
Feature builds are important to validate contributions before they are integrated into the main line of development. We might, for example, want to deploy them to a dedicated server for integration testing.
Finally, when developing locally, we need our local changes to take precedence, but also want to fall back to the “standard” version of those artifacts we need, but don’t really care for.
Some side notes
versionvalue of our
pom.xmlfiles, before we build and deploy the respective projects.
metasfresh-dependency.versionbuild property
The property
metasfresh-dependency.version plays an important role when resolving metasfresh dependencies.
It is specified in the pom.xml of our metasfresh-parent, and there it is set to
[1-master-SNAPSHOT],[${project.version}],
which is our default value, but can be overidden from outside.
${project.version} in turn resolves to
3-development-SNAPSHOT.
Note 1: as we further elaborate on this elsewhere,
1-master-SNAPSHOT is a “master” build version, created and deployed by our CI infrastructure. So, for all artifacts that you did not build locally by yourself, maven can fall back to the “1-master-SNAPSHOT” version.
Note 2: in case you wonder why it’s actually
[1-master-SNAPSHOT],[3-development-SNAPSHOT] and where the
2-... went, see the section about “feature” builds.
So, when building locally, maven tries to get the
3-development-SNAPSHOT version for each metasfresh dependency, but is ready to fall back to
1-master-SNAPSHOT.
However, when projects are built on our CI infrastructure, this property will be set to different values.
On the CI side, we distinguish between “master” builds and “feature” builds.
Note that there are dedicated “master” and “feature” build jobs, to make it easier to e.g. provide a dedicated build agent for each kind of job, or to be able to prefer master builds to feature builds if needed.
When doing a “master” build, all metasfresh dependencies are built from their respective git repositories’ master branches.
Also, the build artifacts’ versions are each set to
1-master-SNAPSHOT, as well as the value of the
metasfresh-dependency.version property, and then the actual build starts.
Therefore, in a “master” build, only artifacts from the master branch are considered as dependencies.
This scenario is comparatively boring and not the reason why we need the metasfresh-dependency.version property and all build jobs & documentation.
When doing a “feature” build, it means that the respective build itself or at least one of its metasfresh dependencies are built from their respective repositories’ “not-master” branches. Typically, this is a branch like “FRESH-123”, but it might also be some other branch. So, calling it a “feature” build is usually correct and seems to be relatively clear to me, but calling it “not-master” build would actually be more correct.
In this section, we describe a concrete example, from a high-level view.
Note that in order to support feature builds in the described way, we need different types of jobs which end with
_feature_webhook and
_feature_downstream. The difference is described in another section. I think that in order to follow the example, it’s not required to userstand the difference.
Three repositories are playing a role in this example:
To follow this example, it is not required to understand what those repositories are actually about functionally.
As of now,
metasfresh-procurement-webui depends on both
metasfresh-commons-cxf and
metasfresh.
Further,
metasfresh and
metasfresh-commons-cxf don’t depend on each other. On a sidenote, this is about to change, but for the sake of this documentation, it’s pretty convenient this way.
Assume that
metasfresh-commons-cxf has a feature branch
FRESH-276 while
metasfresh and
metasfresh-procurement-webui both do not have that branch.
Now we push a change on
metasfresh-commons-cxf, branch
FRESH-276.
The push causes the git repository to notify our CI server (which is Jenkins), which in turn starts the build job
metasfresh-commons-cxf_feature_webhook.
This build job now does a number of things:
origin/FRESH-276from the git repository
2-FRESH-276-SNAPSHOT
metasfresh-dependency.version=[1-master-SNAPSHOT],[2-FRESH-276-SNAPSHOT]and deploy the artifacts
metasfresh-procurement-webui_feature_downstream. When invoking the downstream jobs, it also passes the maven version (i.e.
2-FRESH-276-SNAPSHOT) on to them.
So, now the build job
metasfresh-procurement-webui_feature_downstream is invoked with a version parameter
2-FRESH-276-SNAPSHOT. It does the following:
origin/FRESH-276, but as there is no such branch, it falls back to the
origin/masterbranch
2-FRESH-276-SNAPSHOT. So, note that when dedicing on the maven version to go with, the parameter we got from the upstream build job takes precedence over the actual branch which the job is building!
metasfresh-dependency.version=[1-master-SNAPSHOT],[2-FRESH-276-SNAPSHOT]. Note that as maven versioning goes, everything starting with “2” is greater than everything starting with “1”, so when resolving artifacts, maven will prefer
2-FRESH-276-SNAPSHOT. This means that maven will resolve
metasfreshto the latest
1-master-SNAPSHOTversion and
metasfresh-commons-cxfto the latest
2-FRESH-276-SNAPSHOTversion.
So, to summarize: in this example
metasfresh-procurement-webuiis build from the
masterbranch, but the build is triggered from the
FRESH-276branch of
metasfresh-commons-cxf
FRESH-276, this from-master-branch-build’s result is versioned as
2-FRESH-276-SNAPSHOT. This is important to avoid mixing it up with a “master” build of
metasfresh-procurement-webui.
metasfreshdependencies the “feature” build of
metasfresh-procurement-webuiprefers
2-FRESH-276-SNAPSHOT, but willingly falls back to
1-master-SNAPSHOTin the case of metasfresh.
_feature_webhookvs
_feature_downstreambuild jobs
In the previous section, we described how a
_feature_webhook build job was notified by github and later, how a
_feature_downstream job was invoked to check for the
feature-branch and fall back to the
master-branch if there wasn’t a feature branch.
Why do we need two different build jobs?
Unfortunately, we need two differnt jobs for the two different scenarios, because yours truly is not capable of making one job do both things. Instead we have:
_feature_webhookjob
_feature_downstreamjob
The two kinds of job differ in their git plugin configuration.
For the
_feature_webhook we pretty much leave the jenkins git plugin alone, and let it do it’s job of figuring out which recents changes were not yet build.
But for the fallback ability of the
_feature_downstream scenario, we need to use the Git Chooser Alternative Plugin. And it turned out, that with this plugin being active for a build job, the git plugin is not able any more to idendfiy the changes is has to build.
Thus the two different jobs. As usual, be would be grateful for help and improvements. | https://docs.metasfresh.org/pages/infrastructure/ci_en | 2022-08-08T01:33:53 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.metasfresh.org |
Test-Cs
Topology
Verifies service activation and group permissions for your installation of Skype for Business Server. This cmdlet was introduced in Lync Server 2010.
Syntax
Test-Cs
Topology [-GlobalCatalog <Fqdn>] [-GlobalSettingsDomainController <Fqdn>] [-Service <String>] [-Report <String>] [-Verbose] [<CommonParameters>]
Description
The
Test-CsTopology cmdlet provides a way for you to verify that Skype for Business Server is functioning correctly at a global level.
By default, the cmdlet checks your entire Skype for Business Server infrastructure, verifying that the required services are running and that the appropriate permissions have been set for these services and for the universal security groups created when you install Skype for Business Server.
In addition to verifying the validity of Skype for Business Server as a whole, the
Test-CsTopology cmdlet also lets you check the validity of a specific service.
For example, this command checks the state of the A/V Conferencing Server on the pool atl-cs-001.litwareinc.com:
Test-CsTopology -Service "ConferencingServer:atl-cs-001.litwareinc.com"
Examples
-------------------------- Example 1 --------------------------
Test-CsTopology
Example 1 validates the entire Skype for Business Server topology.
-------------------------- Example 2 --------------------------
Test-CsTopology -Report "C:\Logs\Topology.xml"
The command shown in Example 2 is a variation of the command shown in Example 1. In this case, however, the Report parameter is included to specify the location (C:\Logs\Topology.xml) where the output file should be written.
-------------------------- Example 3 --------------------------
Test-CsTopology -Service "Registrar:atl-cs-001.litwareinc.com"
In Example 3, the
Test-CsTopology cmdlet is used to validate a single service: Registrar:atl-cs-001.litwareinc.com.
Parameters
Fully qualified domain name (FQDN) of a global catalog server in your domain.
This parameter is not required if you are running the
Test\Topology.html"
When present, the
Test-CsTopology cmdlet limits its validation checks to the specified service.
(Note that you can only specify one service at a time when using the Service parameter.) Services should be specified using the appropriate service ID; for example, this syntax refers to the Registrar service on the atl-cs-001.litwareinc.com pool:
-Service "Registrar:atl-cs-001.litwareinc.com"
If this parameter is not included then the entire topology will be validated.
Reports detailed activity to the screen as the cmdlet runs.
Inputs
None.
The
Test-CsTopology cmdlet does not accept pipelined input.
Outputs
The
Test-CsTopology cmdlet returns an instance of the Microsoft.Rtc.SyntheticTransactions.TaskOutput object.
Related Links
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/powershell/module/skype/test-cstopology?view=skype-ps | 2022-08-08T01:31:17 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.microsoft.com |
Prepare your first contract
As you learned in Blockchain basics decentralized applications are most often written as smart contracts. Although Substrate is primarily a framework and toolkit for building custom blockchains, it can also provide a platform for smart contracts. This tutorial demonstrates how to build a basic smart contract to run on a Substrate-based chain. In this tutorial, you'll explore using ink! as a programming language for writing Rust-based smart contracts..
Tutorial objectives
By completing this tutorial, you will accomplish the following objectives:
- Learn how to create a smart contract project.
- Build and test a smart contract using the ink! smart contract language.
- Deploy a smart contract on a local Substrate node.
- Interact with a smart contract through a browser.
Update your Rust environment
For this tutorial, you need to add some Rust source code to your Substrate development environment.
To update your development environment:
- Open a terminal shell on your computer.
- Change to the root directory where you compiled the Substrate node template.
Update your Rust environment by running the following command:
rustup component add rust-src --toolchain nightly
Verify that you have the WebAssembly target installed by running the following command:
rustup target add wasm32-unknown-unknown --toolchain nightly
If the target is installed and up-to-date, the command displays output similar to the following:
info: component 'rust-std' for target 'wasm32-unknown-unknown' is up to date
Install the Substrate contracts node
To simplify this tutorial, you can download a precompiled Substrate node for Linux or macOS.
The precompiled binary includes the FRAME pallet for smart contracts by default.
Alternatively, you can build the preconfigured
contracts-node manually by running
cargo install contracts-node on your local computer.
To install the contracts node on macOS or Linux:
- Open the Releases page.
- Download the appropriate compressed archive for your local computer.
- Open the downloaded file and extract the contents to a working directory.
If you can't download the precompiled node, you can compile it locally with a command similar to the following:
cargo install contracts-node --git --tag <latest-tag> --force --locked
You can find the latest tag (
polkadot-v0.9.26 to match the rest of the docs example code versions) to use on the Tags page.
Install additional packages
After compiling the
contracts-node package, you need to install two additional packages:
- The WebAssembly binaryen package for your operating system to optimize the WebAssembly bytecode for the contract.
- The
cargo-contractcommand line interface you'll use to set up smart contract projects.
Install the WebAssembly optimizer
To install the binaryen package:
- Open a terminal shell on your computer.
Use the appropriate package manager for your operating system to install the package.
For example, on Ubuntu or Debian, run the following command:
sudo apt install binaryen
On macOS, run the following command:
brew install binaryen
For other operating systems, you can download the
binaryenrelease directly from WebAssebly releases.
Install the cargo-contract package
After you've installed the WebAssembly
binaryen package, you can install the
cargo-contract package.
The
cargo-contract package provides a command-line interface for working with smart contracts using the ink! language.
- Open a terminal shell on your computer.
Install
dylint-link, required to lint ink! contracts, warning you about things like using API's in a way that could lead to security issues.
cargo install dylint-link
Install
cargo-contractby running the following command:
cargo install cargo-contract --force
Verify the installation and explore the commands available by running the following command:
cargo contract --help
Create a new smart contract project
You are now ready to start developing a new smart contract project.
To generate the files for a smart contract project:
- Open a terminal shell on your computer.
Create a new project folder named
flipperby running the following command:
cargo contract new flipper
Change to the new project folder by running the following command:
cd flipper/
List all of the contents of the directory by running the following command:
ls -al
You should see that the directory contains the following files:
-rwxr-xr-x 1 dev-doc staff 285 Mar 4 14:49 .gitignore -rwxr-xr-x 1 dev-doc staff 1023 Mar 4 14:49 Cargo.toml -rwxr-xr-x 1 dev-doc staff 2262 Mar 4 14:49 lib.rs
Like other Rust projects, the
Cargo.tomlfile is used to provides package dependencies and configuration information. The
lib.rsfile is used for the smart contract business logic.
Explore the default project files
By default, creating a new smart contract project generates some template source code for a very simple contract that has one function—
flip()—that changes a Boolean variable from true to false and a second function—
get—that gets the current value of the Boolean.
The
lib.rs file also contains two functions for testing that the contract works as expected.
As you progress through the tutorial, you'll modify different parts of the starter code. By the end of the tutorial, you'll have a more advanced smart contract that looks like the Flipper example.
To explore the default project files:
- Open a terminal shell on your computer, if needed.
- Change to project folder for the
flippersmart contract, if needed:
- any changes to the
Cargo.tomlfile, then close the file.
- Open the
lib.rsfile in a text editor and review the functions defined for the contract.
Test the default contract
At the bottom of the
lib.rs source code file, there are simple test cases to verify the functionality of the contract.
You can test whether this code is functioning as expected using the offchain test environment.
To test the contract:
- Open a terminal shell on your computer, if needed.
- Verify that you are in the
flipperproject folder, if needed.
Use the
testsubcommand and
nightlytoolchain to execute the default tests for the
flippercontract by running the following command:
cargo +nightly test
The command should display output similar to the following to indicate successful test completion:
running 2 tests test flipper::tests::it_works ... ok test flipper::tests::default_works ... ok test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Build the contract
After testing the default contract, you are ready to compile this project to WebAssembly.
To build the WebAssembly for this smart contract:
- Open a terminal shell on your computer, if needed.
- Verify that you are in the
flipperproject folder.
Compile the
flippersmart contract by running the following command:
cargo +nightly contract build
This command builds a WebAssembly binary for the
flipperproject, a metadata file that contains the contract Application Binary Interface (ABI), and a
.contractfile that you use to deploy the contract. For example, you should see output similar to the following:
Original wasm size: 47.9K, Optimized: 22.8K The contract was built in DEBUG mode. Your contract artifacts are ready. You can find them in: /Users/dev-doc/flipper/target/ink - flipper.contract (code + metadata) - flipper.wasm (the contract's code) - metadata.json (the contract's metadata) The `.contract` file can be used for deploying your contract to your chain.
The
metadata.jsonfile in the
target/inkdirectory describes all the interfaces that you can use to interact with this contract. This file contains several important sections:
- The
specsection includes information about the functions—like constructors and messages—that can be called, the events that are emitted, and any documentation that can be displayed. This section also includes a
selectorfield that contains a 4-byte hash of the function name and is used to route contract calls to the correct functions.
- The
storagesection defines all the storage items managed by the contract and how to access them.
- The
typessection provides the custom data types used throughout the rest of the JSON.
Start the Substrate smart contracts node
If you have successfully installed
substrate-contracts-node, you can start a local blockchain node for your smart contract.
To start the preconfigured
contracts-node:
- Open a terminal shell on your computer, if needed.
Start the contracts node in local development mode by running the following command:
substrate-contracts-node --dev
You should see output in the terminal similar to the following:
2022-03-07 14:46:25 Substrate Contracts Node 2022-03-07 14:46:25 ✌️ version 0.8.0-382b446-x86_64-macos 2022-03-07 14:46:25 ❤️ by Parity Technologies <[email protected]>, 2021-2022 2022-03-07 14:46:25 📋 Chain specification: Development 2022-03-07 14:46:25 🏷 Node name: possible-plants-8517 2022-03-07 14:46:25 👤 Role: AUTHORITY 2022-03-07 14:46:25 💾 Database: RocksDb at /var/folders/2_/g86ns85j5l7fdnl621ptzn500000gn/T/substrateEdrJW9/chains/dev/db/full 2022-03-07 14:46:25 ⛓ Native runtime: substrate-contracts-node-100 (substrate-contracts-node-1.tx1.au1) 2022-03-07 14:46:25 🔨 Initializing Genesis block/state (state: 0xe9f1…4b89, header-hash: 0xa1b6…0194) 2022-03-07 14:46:25 👴 Loading GRANDPA authority set from genesis on what appears to be first startup. 2022-03-07 14:46:26 🏷 Local node identity is: 12D3KooWQ3P8BH7Z1C1ZoNSXhdGPCiPR7irRSeQCQMFg5k3W9uVd 2022-03-07 14:46:26 📦 Highest known block at #0
After a few seconds, you should see blocks being finalized.
To interact with the blockchain, you need to connect to this node. You can connect to the node through a browser by opening the Contracts UI.
- Navigate to the Contracts UI in a web browser, then click Yes allow this application access.
Select Local Node.<Image is missing>
Deploy the contract
At this point, you have completed the following steps:
- Installed the packages for local development.
- Generated the WebAssembly binary for the
flippersmart contract.
- Started the local node in development mode.
- Connected to a local node through the Contracts UI front-end.
The next step is to deploy the
flipper contract on your Substrate chain.
However, deploying a smart contract on Substrate is a little different than deploying on traditional smart contract platforms. For most smart contract platforms, you must deploy a completely new blob of the smart contract source code each time you make a change. For example, the standard ERC20 token has been deployed to Ethereum thousands of times. Even if a change is minimal or only affects some initial configuration setting, each change requires a full redeployment of the code. Each smart contract instance consume blockchain resources equivalent to the full contract source code, even if no code was actually changed.
In Substrate, the contract deployment process is split into two steps:
- Upload the contract code to the blockchain.
- Create an instance of the contract.
With this pattern, you can store the code for a smart contract like the ERC20 standard on the blockchain once, then instantiate it any number of times. You don't need to reload the same source code repeatedly, so your smart contract doesn't consume unnecessary resources on the blockchain.
Upload the contract code
For this tutorial, you use the Contracts UI front-end to deploy the
flipper contract on the Substrate chain.
To upload the smart contract source code:
- Verify that you are connected to the Local Node.
- Click Add New Contract.
- Click Upload New Contract Code.
Select an Account to use to create a contract instance.
You can select any existing account, including a predefined account such as
alice.
- Type a descriptive Name for the smart contract, for example, Flipper Contract.
Browse and select or drag and drop the
flipper.contractfile that contains the bundled Wasm blob and metadata into the upload section.
- Click Next to continue.
Create an instance on the blockchain
Smart contracts exist as an extension of the account system on the Substrate blockchain.
When you create an instance of this smart contract, Substrate creates a new
AccountId to store any balance managed by the smart contract and to allow you to interact with the contract.
After you upload the smart contract and click Next, the Contracts UI displays information about the content of the smart contract.
To create the instance:
- Review and accept the default Deployment Constructor options for the initial version of the smart contract.
Review and accept the default Max Gas Allowed of
200000.
Click Next.
The transaction is now queued. If you needed to make changes, you could click Go Back to modify the input.
Click Upload and Instantiate.
Depending on the account you used, you might be prompted for the account password. If you used a predefined account, you won't need to provide a password.
Call the smart contract
Now that your contract has been deployed on the blockchain, you can interact with it.
The default flipper smart contract has two functions—
flip() and
get()—and you can use the Contracts UI to try them out.
get() function
You set the initial value of the
flipper contract
value to
false when you instantiated the contract.
You can use the
get() function to verify the current value is
false.
To test the
get() function:
Select any account from the Account list.
This contract doesn't place restrictions on who is allowed to send the
get()request.
- Select get(): bool from the Message to Send list.
- Click Read.
- Verify that the value
falseis returned in the Call Results.
flip() function
The
flip() function changes the value from
false to
true.
To test the
flip() function:
Select any predefined account from the Account list.
The
flip()function is a transaction that changes the chain state and requires an account with funds to be used to execute the call. Therefore, you should select an account that has a predefined account balance, such as the
aliceaccount.
- Select flip() from the Message to Send list.
- Click Call.
Verify that the transaction is successful in the Call Results.<Image is missing>
- Select get(): bool from the Message to Send list.
- Click Read.
Verify the new value is
truein the Call Results.
Next steps
Congratulations!
In this tutorial, you learned:
- How to create a new smart contract project using the ink! smart contract language.
- How to test and build a WebAssembly binary for a simple default smart contract.
- How to start a working Substrate-based blockchain node using the contracts node.
- How to deploy a smart contract by connecting to a local node and uploading and instantiating the contract.
- How to interact with a smart contract using the Contracts UI browser client.
Additional smart contract tutorials build on what you learned in this tutorial and lead you deeper into different stages of contract development.
You can find an example of the final code for this tutorial in the assets for the ink-workshop. You can learn more about smart contract development in the following topics:
If you experienced any issues with this tutorial, submit an issue, ask questions or provide feedback. | https://docs.substrate.io/tutorials/smart-contracts/first-smart-contract/ | 2022-08-08T01:22:26 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['https://d33wubrfki0l68.cloudfront.net/34e8a05dd19ef75a7135a6e307e9072607f49a54/afa78/static/7a310d227296dd9b866ce7c4ee220e7e/252b3/call-results-get.png',
'Calling the get() function returns false'], dtype=object) ] | docs.substrate.io |
Citrix Application Delivery Audit Items
Each check in an audit area defined using a couple of foundational audit items: custom_item and report.
A custom_item is the base of all functional checks inside an audit. It is the wrapper that manages the definition of each audit item.
A report is a method in the audit file to report a static result that does not change regardless of how a target is configured. It is commonly used in reporting of conditional checks, reporting audit items that are not technically possible to retrieve data, or high level information on the audit that is being evaluated.
Usage
<custom_item>
type: [TYPE_OF_CHECK]
description: ["description"]
(optional) info: ["information regarding the audit item"]
(optional) solution: ["information on how to remediate the audit item"]
(optional) see_also: ["url reference for audit"]
(optional) reference: ["standard|control,standard|control,..."]
(optional) severity: [HIGH|MEDIUM|LOW]
</custom_item>
<report type:"[PASSED|WARNING|FAILED]">
description: ["description"]
(optional) info: ["information regarding the audit item"]
(optional) solution: ["information on how to remediate the audit item"]
(optional) see_also: ["url reference for audit"]
(optional) reference: ["standard|control,standard|control,..."]
(optional) output : ["custom output other than report type"]
type
The type field in a custom_item is used to identify what other fields are required and how to gather, transform, and evaluate data from the target.
The type attribute in a report is used to provide the result for the audit item.
description
A description is required as it is the most common identifier of the audit items.
info
The info is general information about the audit item. It is commonly used to communicate what is being evaluated and why it is important.
solution
The solution is text that relays how an audit item can be remediated if it has FAILED.
see_also
The see_also is a URL that is used as a reference to the configuration guide or benchmark that is being audited. It is commonly used as a method to report on audit items that refer to the same benchmark.
severity (custom_item only)
The severity is a method to soften a FAILED result posted by an audit item. For example, a result of FAILED would be reported as a WARNING when a severity of MEDIUM is used.
The following severities are defined:
HIGH — Does not change the FAILED result.
MEDIUM — Updates FAILED to WARNING.
LOW — Updates FAILED to PASSED.
Tip: If there is a scenario in an audit file that a result should be moved from PASSED to a lower result, adjust the evaluation of the audit item to always fail, then apply the desired severity.
output (report only)
The output field is a method to provide static content in the output of the result and attempts to keep all other informational fields the same between different reports for the same control. The best example of this is the use of a report in a then or else should maintain the same informational fields, but may need a differentiator for why the result changes.
Examples
<report type:"WARNING">
description : "Audit file for Unix"
output : "NOTE: This audit file does not support the OS that is identified on your target."
</custom_item> | https://docs.tenable.com/nessus/compliancechecksreference/Content/CitrixApplicationDeliveryAuditItems.htm | 2022-08-08T02:31:30 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.tenable.com |
Onboard cloud accounts for Compass
Learn how to onboard cloud accounts for CoreStack Compass.
Private preview notice:
This article describes features that are currently only available in private preview.
Please contact [email protected] to learn how you can get access.
In order to utilize the assessment feature available in the Compass product offering, you must onboard your cloud accounts in CoreStack in a particular way. It's very similar to the standard way of onboarding cloud accounts, but there are a few additional steps that must be done in order to enable CoreStack to properly run an assessment on your cloud resources.
These unique steps are explained in detail below, but can be summarized as:
- Select the Assessment + Governance access type during the onboarding process and deploy the right template.
- Configure the cloud account post-onboarding to enable access permissions.
To learn the general steps for onboarding cloud accounts in CoreStack, as well as how to create a new cloud account. please see Onboarding Overview.
Part 1: choosing the correct access type
Follow the steps below to perform the first step for properly onboarding your cloud account for CoreStack Compass assessments.
First, navigate to Account Governance in the left-hand sidebar. Here, you should see any other cloud accounts already onboarded.
To add a new cloud account, click the Add New button in the top right corner of the dashboard view. This will open a drop-down menu.
Select Single Account from the drop-down options, then Start Now to proceed. This should open a new pop-up box view with a series of field on the left, and some helpful information on the right. Complete the fields on the left to onboard your cloud account.
First, under Public Cloud, select which platform your cloud account exists in (AWS, Azure, GCP).
- Click Get Started at the bottom to proceed to the next step.
After selecting your platform and clicking Get Started, you'll see a screen titled Choose account & access type.
Under Access Type, select Assessment + Governance. You must select this option in order to enable the full Compass assessment experience in CoreStack.
- The Assessment + Governance option allows CoreStack to use the read and write permissions necessary to properly scan and validate your cloud workloads.
Full onboarding walk-throughs:
To read the rest of the full steps for onboarding cloud accounts based on their platform, please refer to the relevant links below as needed:
Part 2: Configuring your cloud account post-onboarding
Follow the steps below to perform the second step for properly onboarding your cloud account for Compass assessments.
Once you've completed the initial steps to add a new cloud account to CoreStack, navigate to Account Governance in the left-hand sidebar.
For the cloud account you'd like to enable assessments for, click on View under the Actions column to open the drop-down menu, then select View Settings.
This will take you to the Cloud Account Details page, where you can view different information about the cloud account. Select Governance Configuration from the left-hand menu to see a row of governance categories appear along the top right side of the screen.
Select Compliance. This will show you a list of cloud frameworks and standards. Scroll down and select ____ Well-Architected Framework.
- The blank part will be whichever cloud platform you selected previously (AWS, Azure, etc).
You may see a red text message that reads: "It seems AWS Well-Architected Framework configuration is not done yet." Click on Configure beside it to proceed.
This will bring up a dialog box along the bottom edge of the UI that contains two steps: Verify Access, and Configure.
- The Verify Access sections will show you a list of mandatory and optional permissions that are either enabled or disabled. As noted previously, and in the UI, certain permissions (read and write) must be enabled in order for CoreStack to properly run scans and validate resources as part of the Compass assessment.
Review the permissions listed here and take note of any mandatory permissions with a red 'x' next to them -- these are not enabled and must be in order for the assessment to run properly. You need to take the necessary steps to enable these permissions in the respective cloud platform before running an assessment.
For additional guidance on how to do this, please refer to the steps in our other user guides below:
Updated 2 months ago | https://docs.corestack.io/docs/onboard-cloud-accounts-for-compass | 2022-08-08T01:31:45 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.corestack.io |
Terrascan Scan History
The Terrascan user interface allows you to manage the Terrascan scan history in a few ways. You can use a scan's scan history page to launch the scan, edit the scan configuration, download scan results, download the command output of a scan, view the configuration used for a completed scan, and delete the scan's history and results.
To navigate to the Terrascan scan history page:
Under Resources in the left-side navigation pane, click Terrascan.
The Terrascan > About page appears.
Below Terrascan, click the Scans tab.
The Terrascan > Scans page opens.
In the scan table, double-click the scan configuration whose history you want to work on.
The configuration's scan history page appears.
Do one of the following:
-
Edit the scan configuration.
Download the scan results of a completed scan.
Download a scan's command output.
Roll over the scan whose command output you want to download.
In the scan row, click the
button.
The command output downloads as a .txt file.
View the configuration used for a completed scan.
Roll over the scan whose command output you want to download.
In the scan row, click the
button.
The Config Details window appears and shows the scan's configuration.
Delete a scan's history and results.
Roll over the scan whose history and results you want to delete.
In the scan row, click the
button.
The Delete Result window appears.
Click Delete.
Nessus removes the scan history and related results from the scan history page. | https://docs.tenable.com/nessus/10_3/Content/TerrascanManageScanHistory.htm | 2022-08-08T00:52:58 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.tenable.com |
Featured resources
Jenkins Plugin management
By monitoring plugin versions and comparing the configuration of your instance to plugins identified by CloudBees as verified, trusted, or unverified, Beekeeper provides "envelope enforcement" to provide stability and security to CloudBees CI, CloudBees Jenkins Distribution, and CloudBees Jenkins Platform.
Jenkins Pipelines
Jenkins Pipelines is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Understand how to define and administer pipelines for CloudBees CI, CloudBees Jenkins Distribution, and CloudBees Jenkins Platform.
CloudBees Build Acceleration quick start
Learn about installation prerequisites for CloudBees Build Acceleration, how to install, and how to run your first build.
Software Delivery Automation
CloudBees Software Delivery Automation
CloudBees Software Delivery Automation enables enterprises to optimize their software delivery process for increased innovation and security by connecting, automating, and orchestrating the tools and functions across development, operations, and shared service teams.
CloudBees CI
CloudBees CI is an end-to-end continuous software delivery system. It offers stable releases with monthly updates, as well as additional proprietary tools and enterprise features to enhance the manageability and security of Jenkins.
CloudBees CD/RO
CloudBees CD/RO is an Adaptive Release Orchestration platform that eliminates scripts and automates deployment pipelines while verifying performance, quality and security.
CloudBees Analytics
CloudBees Analytics is the new single source of truth to monitor and optimize the underlying CI infrastructure across your enterprise. With the new actionable insights, you can enhance your build performance, right-size your workloads over demand cycles, prevent unplanned downtimes, get a holistic view of your plugin usage across all your pipeline jobs, and lots more.
CloudBees Feature Management
With CloudBees Feature Management (CloudBees Rollout), accelerate your development speed and reduce risk by decoupling feature deployment from code releases. Control every change in your app in real-time while measuring the business and technical impact.
Other CloudBees Products
CloudBees Build Acceleration
CloudBees Build Acceleration (CloudBees Accelerator) reduces cycle time and iterates faster with fault-tolerant workload distribution. Smart load balancing ensures optimal use of cores across the cluster. Dynamic resource provisioning provides instantaneous scale up/down.
CloudBees Jenkins Platform
CloudBees Jenkins Platform is a continuous integration (CI) and continuous delivery (CD) solution that extends Jenkins. Developed for on-premise installations, CloudBees Jenkins Platform offers stable releases with monthly updates, as well as additional proprietary tools and enterprise features to enhance the manageability and security of Jenkins.
CloudBees | https://docs.cloudbees.com/ | 2022-08-08T00:41:50 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.cloudbees.com |
Enabling Iris Rev. 6 VIA RGB Controls
The stock firmware (
rev6) for the Iris Rev. 6 has VIA support, but it does not have support for the Lighting controls in VIA due to the way RGB is implemented for QMK and VIA. One way to control the RGB lighting is by using the RGB Lighting keycodes in QMK (
RGB_xxx).
Alternatively, if you'd like to use the Lighting controls in VIA Configurator, it can be enabled by upgrading the firmware from
rev6 to
rev6a, but there are a couple things to take note of.
- The
rev6and
rev6afirmwares are both compatible with any Iris Rev. 6.x PCB (i.e. Rev. 6 and Rev 6.1)
- Using the original
rev6VIA .hex file will have Lighting controls disabled
- Using the
rev6aVIA .hex file with enable the Lighting controls
- The stock firmware on all Rev. 6 and 6.1 PCBs is
rev6
- The pull request for VIA support of
rev6ais still pending, you will have to manually load in the
iris-rev6a.jsoneverytime you launch VIA, otherwise VIA will not detect a board upgraded to
rev6a
Upgrading to
rev6a firmware
- Download and flash this .hex file to each half: Iris Rev. 6a VIA Firmware
- Flash the
rev6aVIA .hex file individually to each half
Initial Setup
You will only need to go through the following steps once:
- Download this VIA .json file: Iris Rev. 6a VIA JSON.
- Open up VIA and go to Setting tab, enable
Show Design tab
Each time you launch VIA, you will need to load in the Iris Rev. 6a definition file downloaded earlier
- Go to Design tab and load in the iris-rev6a.json file (you will need to do this each time you launch VIA)
- RGB controls will then be enabled in the Configure tab
caution
Due to power constraints on the Iris Rev. 6, do not set the Underglow Brightness to more the 2/3rds of the maxiumum brightness. In the event that you do accidentally set the brightness too high and get disconnection issues, you will need to clear your EEPROM. | https://docs.keeb.io/iris-rev6-rgb-via | 2022-08-08T00:55:54 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['https://d33wubrfki0l68.cloudfront.net/dff8ebf170e5621c7940a2481a4490ae0db22f43/4536f/assets/images/enable-design-tab-16b11aaf28374c3c81e6f9ccada0330b.png',
None], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/1cafb1fb3eb35c29a13162ffc9093d32f0e85230/5372c/assets/images/load-draft-definition-00d59cdfcd8d31d250f2c91931481efb.png',
None], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/badb7ce984ef5cda39cee46df3219767d448665e/f49e8/assets/images/iris-rev6a-lighting-93eceb3b6c730ac91668fd9ad18e83af.png',
None], dtype=object) ] | docs.keeb.io |
...
Be sure to check out the following series of topics that help you create unique
...
Now that the plug-in monitor has been imported, you can browse to the Add Service Instance page in the Uptime Infrastructure Monitor user interface and see the plug-in monitor listed, as shown below:
Save
Save
Save | https://docs.uptimesoftware.com/pages/diffpagesbyversion.action?pageId=5013805&selectedPageVersions=9&selectedPageVersions=10 | 2022-08-08T02:02:17 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.uptimesoftware.com |
Publishing an image is a process by which internal VMs needed for instant cloning are created from a golden image and its snapshot. This process only happens once per image and may take some time.
Horizon performs the following steps to create a pool of instant clones:
- Horizon Console,.
- After the image is published, Horizon creates the instant clones.. This process is fast. During this process, the Current Image pane in Horizon Console shows the name and state of the image.
After the pool is created, you can change the image through the push-image operation. As with the creation of a pool, the new image is first published. Then the clones are recreated.
When an instant clone pool is created, Horizon spreads the pool across datastores automatically in a balanced way. If you edit a pool to add or remove datastores, rebalancing of the cloned desktops happens automatically when a new clone is created. | https://docs.vmware.com/en/VMware-Horizon/2106/virtual-desktops/GUID-5FE0EDD6-4128-4E3B-9CD2-A91F2E884D0B.html | 2022-08-08T00:44:56 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.vmware.com |
the dog's already having quite a bit of fun in the outdoors. funny that fewer and fewer ppl are outside--just means he can be off leash without problems.
thought i could get used to a winter without wintery weather, but when things are like they are today, i feel rejuvenated. nothing like a the pins and needles of sleet against your face to get you going | http://docs-wo-friends.livejournal.com/774.html | 2017-10-17T04:07:21 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs-wo-friends.livejournal.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Removes all the permissions from the specified resource.
For PCL this operation is only available in asynchronous form. Please refer to RemoveAllResourcePermissionsAsync.
Namespace: Amazon.WorkDocs
Assembly: AWSSDK.WorkDocs.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the RemoveAllResourcePermissions service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/WorkDocs/MWorkDocsWorkDocsRemoveAllResourcePermissionsRemoveAllResourcePermissionsRequest.html | 2017-10-17T04:07:15 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
This guide details how to collect metrics and events from a new data source by writing an Agent Check, a Python plugin to the StackState Agent. We’ll look at the
AgentCheck interface, and then write a simple Agent Check that collects timing metrics and status events from HTTP services.
Any custom checks will be included in the main check run loop, meaning they will run every check interval, which defaults to 15 seconds.
First off, ensure you’ve properly installed the Agent on your machine.
All custom checks inherit from the
AgentCheck class found in
checks/__init__.py and require a
check() method that takes one argument,
instance which is a
dict having the configuration of a particular instance. The
check method is run once per instance defined in the check configuration (discussed later).
Sending metrics in a check is easy. If you’re already familiar with the methods available in StSStatsD, then the transition will be very simple. If you’re not already familiar with that interface, you’ll find sending metrics is a breeze. self.raw( ... ) # Report a raw metric, no sampling is applied will be collected and flushed out with the other Agent metrics.
At any time during your check, you can will be collected and flushed with the rest of the Agent payload.
Your custom check can also report the status of a service by calling the
self.service_check(...) method.
The service_check method will accept occured.
hostname: (optional) The name of the host submitting the check. Defaults to the host_name of the agent.
check_run_id: (optional) An integer ID used for logging and tracing purposes. The ID does not need to be unique. If an ID is not provided, one will automatically be generated.
message: (optional) Additional information or a description of why this status occured.
At any time during your check, you can make a call to
self.component(...) with the following arguments:
instance_id: dictionary, containing information on the topology source
id: string, identifier of the component
type: string, a component type classification
data: (optional) arbitrary nesting of dictionaries and arrays with additional information
A second call
self.relation(...) serves to push relation information to StackState. This call takes the following arguments:
instance_id: dictionary, containing information on the topology source
source_id: string, identifier of the source component
target_id: string, identifier of the target component
type: string, a component type classification
data: (optional) arbitrary nesting of dictionaries and arrays with additional information
At the end of your check, all components and relations will be collected and flushed with the rest of the Agent payload.
Components and relations can be sent as part of a snapshot. A snapshot represents the total state of some external topology. By putting components and relations in a snapshot, StackState will persist all the topology elements present in the snapshot, and remove everything else for the topology instance. Creating snapshots is facilitated by two functions:
Starting a snapshot can be done with
self.start_snapshot(...), with the following argument:
instance_id: dictionary, containing information on the topology source
Stopping a snapshot can be done with
self.stop_snapshot(...), with the following argument:
instance_id: dictionary, containing information on the topology source
If a check cannot run because of improper configuration, programming error or because it could not collect any metrics, it should raise a meaningful exception. This exception will be logged, as well as be shown in the Agent info command for easy debugging. For example:
$ sudo /etc/init.d/stackstate-agent info Checks ====== my_custom_check --------------- - instance #0 [ERROR]: ConnectionError('Connection refused.',) - Collected 0 metrics & 0 events
As part of the parent class, you’re given a logger at
self.log, so you can do things like
self.log.info('hello'). The log handler will be
checks.{name} where
{name} is the name of your check (based on the filename of the check module).
Each check will have a configuration file that will be placed in the
conf.d directory. Configuration is written using YAML. The file name should match the name of the check module (e.g.:
haproxy.py and
haproxy.yaml).
The configuration file has the following structure:
init_config: min_collection_interval: 20 key1: val1 key2: val2 instances: - username: jon_smith password: 1234 - username: jane_smith password: 5678
min_collection_intervalcan be added to the init_config section to help define how often the check should be run. If it is greater than the interval time for the agent collector, a line will be added to the log stating that collection for this script was skipped. The default is
0which means it will be collected at the same interval as the rest of the integrations on that agent. If the value is set to 30, it does not mean that the metric will be collected every 30 seconds, but rather that it could be collected as often as every 30 seconds. The collector runs every 15-20 seconds depending on how many integrations are enabled. If the interval on this agent happens to be every 20 seconds, then the agent will collect and include the agent check. The next time it collects 20 seconds later, it will see that 20 < 30 and not collect the custom agent check. The next time it will see that the time since last run was 40 which is greater than 30 and therefore the agent check will be collected.
The init_config section allows you to have an arbitrary number of global configuration options that will be available on every run of the check in
self.init_config.
The instances section is a list of instances that this check will be run against. Your actual
check() method is run once per instance. This means that every check will support multiple instances out of the box.
Before starting your first check it is worth understanding the checks directory structure. There are two places that you will need to add files for your check. The first is the
checks.d folder, which lives in your Agent root.
For all Linux systems, this means you will find it at:
/etc/sts-agent/checks.d/
For Windows Server >= 2008 you’ll find it at:
C:\Program Files (x86)\StackState\Agent\checks.d\ OR C:\Program Files\StackState\Agent\checks.d\
The other folder that you need to care about is
conf.d which lives in the Agent configuration root.
For Linux, you’ll find it at:
/etc/sts-agent/conf.d/
For Windows, you’ll find it at:
C:\ProgramData\StackState\conf.d\ OR C:\Documents and Settings\All Users\Application Data\StackState\conf.d\
You can also add additional checks to a single directory, and point to it in
StackState.conf:
additional_checksd: /path/to/custom/checks.d/
mycheck.pyyour configuration file must be named
mycheck.yaml.
To start off simple, we’ll write a check that does nothing more than send a value of 1 for the metric
hello.world. The configuration file will be very simple, including no real information. This will go into
conf.d/hello.yaml:
init_config: instances: [{}]
The check itself will inherit from
AgentCheck and send a gauge of
1 for
hello.world on each call. This will go in
checks.d/hello.py:
from checks import AgentCheck class HelloCheck(AgentCheck): def check(self, instance): self.gauge('hello.world', 1)
As you can see, the check interface is really simple and easy to get started with. In the next section we’ll write a more useful check that will ping HTTP services and return interesting data.
Now we will guide you through the process of writing a basic check that will check the status of an HTTP endpoint. On each run of the check, a GET request will be made to the HTTP endpoint. Based on the response, one of the following will happen:
First we will want to define how our would look something like this:
init_config: default_timeout: 5 instances: - url: - url: timeout: 8 - url:
Now we can start defining our check method. The main part of the check will make a request to the URL and time the response time, handling error cases as it goes.
In this snippet, we start a timer, make the GET request using the requests library and handle and errors that might arise.
#) if r.status_code != 200: self.status_code_event(url, r, aggregation_key)
If the request passes, we want to submit the timing to StackState as a metric. Let’s call it
http.response_time and tag it with the URL.
timing = end_time - start_time self.gauge('http.response_time', timing, tags=['http_check'])
Finally, we’ll want to define what happens in the error cases. We have already seen that we call
self.timeout_event in the case of a URL timeout and we call
self.status_code_event in the case of a bad status code. Let’s define those methods now.
First, we’ll define
timeout_event. Note that we want to aggregate all of these events together based on the URL so we will define the aggregation_key as a hash of the URL.
def timeout_event(self, url, timeout, aggregation_key): self.event({ 'timestamp': int(time.time()), 'event_type': 'http_check', 'msg_title': 'URL timeout', 'msg_text': '%s timed out after %s seconds.' % (url, timeout), 'aggregation_key': aggregation_key })
Next, we’ll })
For the last part of this guide, we’ll show the entire check. This module would be placed into the
checks.d folder as
http.py. The corresponding configuration would be placed into the
conf.d folder as
http.yaml.
Once the check is in
checks.d, you can test it by running it as a python script. Restart the Agent for the changes to be enabled. Make sure to change the conf.d path in the test method. From your Agent root, run:
PYTHONPATH=. python checks.d/http.py
You’ll see #) return if r.status_code != 200: self.status_code_event(url, r, aggregation_key) timing = end_time - start_time self.gauge('http.reponse reponse code for %s' % url, 'msg_text': '%s returned a status of %s' % (url, r.status_code), 'aggregation_key': aggregation_key }) if __name__ == '__main__': check, instances = HTTPCheck.from_yaml('/path/to/conf.d/http.yaml') for instance in instances: print "\nRunning the check against url: %s" % (instance['url']) check.check(instance) if check.has_events(): print 'Events: %s' % (check.get_events()) print 'Metrics: %s' % (check.get_metrics())
Custom Agent checks can’t be directly called from python and instead need to be called by the agent. To test this, run:
sudo -u sts-agent sts-agent check <CHECK_NAME>
If your issue continues, please reach out to Support with the help page that lists the paths it installs.
Run the following script, with the proper
<CHECK_NAME>:
<INSTALL_DIR>/embedded/python.exe <INSTALL_DIR>agent/agent.py check <CHECK_NAME>
For example, to run the disk check:
C:\Program Files\StackState\StackState Agent\embedded\python.exe C:\Program Files\StackState\StackState Agent\agent\agent.py check disk | http://docs.stackstate.com/guides/agent_checks/ | 2017-10-17T03:50:27 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.stackstate.com |
How to Debug Reasoning
This tutorial summarizes the most common questions about reasoning in Stardog. Note that Stardog includes both logical and statistical reasoning capabilities. This tutorial focuses only on logical reasoning.
Page Contents
Background
Chances are pretty good you know what reasoning is, what it does, and why it’s important. But just in case, let’s take a very quick refresher on what reasoning is:
- Reasoning is a declarative way to either derive new nodes and edges in a graph or specify integrity constraints on a (possibly distributed) graph or to do both at the same time.
- Reasoning can replace arbitrary amounts of complex code and queries.
- Reasoning transforms a graph data structure into a knowledge graph.
- Reasoning in Stardog is fully integrated into SPARQL query evaluation.
A Motivating Example
Take, for example, the following trivial graph in the Turtle format:
:Square rdfs:subClassOf :Shape . # This says that All Squares are Shapes :MySquare a :Square .
Any plain graph database can store these 3 nodes (
:Square,
:Shape, and
:MySquare) and 2 edges (
rdfs:subClassOf and
a).
Reasoning is the software service that lets Stardog infer some new (implicit, i.e., unstated) information from the known (i.e., explicit) data. In this case, the inference is just one new edge between two existing nodes:
:MySquare a :Shape .
Stardog doesn’t store inferences by default. Rather Stardog infers them on the fly as needed when answering queries. That’s important because what if the different parts of this graph are distributed over enterprise data silos and need to stay there?
The Usual Suspects
You can find repeated types of questions in the Stardog Community forums where users aren’t seeing expected query results. These often come down to a reasoning setting or misunderstanding of how it works. Here are a few of the most common questions we have seen.
“I’m not seeing any results!”
The most simple problem to fix, and by extension the easiest thing to check when things aren’t working, is the case where reasoning isn’t enabled at all. “But wait a minute,” I hear you ask, “If reasoning is so important, why would it ever NOT be enabled?”
One answer is that it can be expensive. But the other answer is that you should use reasoning in a way that makes sense for your use case. Stardog does not materialize (i.e., explicitly calculate and store) inferences, instead finding them as needed at query time. Therefore if a query doesn’t need reasoning to get the required results, it makes no sense to make everyone else pay the cost of computing.
If your problem is that query results only contain information that is explicit, this could be the problem. The method of enabling reasoning for your queries depends on how they’re being run:
- CLI: Ensure that
stardog query executeis passed either the
-ror
--reasoningflag.
- Java: When creating a
Connectionobject via
ConnectionConfiguration, ensure that the
reasoning()method is called with a
truevalue
- HTTP: Ensure that the
reasoningquery parameter is present in your URL, or form body, with the value
true
- Stardog Studio: Ensure that the Reasoning toggle in the Workspace set to ON.
If this works, then congratulations! If not, read on.
I’m not seeing the right results!”
Okay, so reasoning is enabled, but what if you’re still not seeing the results that you know you should be seeing? It could be related to reasoning level or to the schema location.
Reasoning Level
You may not see expected results because the wrong reasoning level is being used. A profile or “reasoning level” is a bundle or family of data modeling features (called, for historical reasons, “axioms”) that are often used together. Some levels are more expressive (and thus more expensive) than others, so you want to choose the cheapest one that works. Stardog supports the following reasoning levels: RDFS, QL, RL, EL, DL, SL, and NONE. If you are missing results that you know should be there, check the
stardog.log file.
Often when we receive issues like this, the log file will contain lines that look like this:
Not a valid SL axiom: Range(...).
Typically this means that the reasoning level is set to SL (the default), but the user has included OWL DL axioms, which are not covered by SL. When
stardog.log shows lines like this, the implication is that the axiom(s) in question will be ignored completely, which is often the reason for the “missing” results, as they depended on the axiom.
By default Stardog uses the SL level because it’s the most expressive level that can be computed efficiently over large graphs. You can use the
reasoning schemaCLI command to see which axioms are included during reasoning.
The easiest solution may be to enable the database configuration option
reasoning.approximate which will, when possible, split troublesome axiom(s) into two and use the axiom that fits into SL level. You can also try using Stardog Rules. Then you can look at rule gotchas to see if there are any issues with how you’re using rules. If you have a very small amount of data, you may try using the DL reasoning level.
Schema Location
Another cause we’ve seen for not seeing the expected results is connected to where the Stardog schema is in the graph. The schema here is just the set of axioms you want to use in reasoning. But, as mentioned above, those can be distributed (physically) so Stardog will work hard to find them.
Practically this means that Stardog needs to know which named graph(s) contain the schema. So you may need to check the value of
reasoning.named.graphs property in
stardog.properties to the correct value.
Our documentation has a detailed discussion of other reasons you might not be seeing the results you want. It’s a good read.
“My Schema NEEDS axiom X and axiom Y!”
Maybe? But maybe not. Stardog Rules are very powerful and are only getting easier to write. Think of Stardog Rules as Datalog in the graph because Stardog Rules are (basically) Datalog in the graph. | https://docs.stardog.com/tutorials/how-to-debug-reasoning | 2022-09-25T09:14:04 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.stardog.com |
Building From Source¶
In addition to meeting the system requirements, there are two things you need to build RTRTR: a C toolchain and Rust. You can run RTRTR on any operating system and CPU architecture where you can fulfil these requirements.
Dependencies¶
C Toolchain¶
Some of the libraries RTRTR, RTRTR relies on a relatively new version of Rust, currently 1.52 or newer. We therefore suggest to use the canonical Rust installation via a tool called rustup.
Assuming you already have curl installed, you can install rustup and Rust by simply entering:
curl --proto '=https' --tlsv1.2 -sSf | sh
Alternatively, visit the Rust website for other installation methods.
Building and Updating¶
In Rust, a library or executable program such as RTRTR is called a crate. Crates are published on crates.io, the Rust package registry. Cargo is the Rust package manager. It is a tool that allows Rust packages to declare their various dependencies and ensure that you’ll always get a repeatable build.
Cargo fetches and builds RTRTR’s dependencies into an executable binary for your platform. By default you install from crates.io, but you can for example also install from a specific Git URL, as explained below.
Installing the latest RTRTR release from crates.io is as simple as running:
cargo install --locked rtrtr
The command will build RTRTR and install it in the same directory that
Cargo itself lives in, likely
$HOME/.cargo/bin. This means RTRTR
will be in your path, too.
Updating¶
If you want to update to the latest version of RTRTR, it’s recommended to update Rust itself as well, using:
rustup update
Use the
--force option to overwrite an existing version with the latest
RTRTR release:
cargo install --locked --force rtrtr
Once RTRTR is installed, you need to create a Configuration file that
suits your needs. The config file to use needs to be passed to RTRTR via the
-c option, i.e.:
rtrtr -c rtrtr.conf
Installing Specific Versions¶
If you want to install a specific version of
RTRTR using Cargo, explicitly use the
--version option. If needed,
use the
--force option to overwrite an existing version:
cargo install --locked --force rtrtr --version 0.2.0-rc2
All new features of RTRTR are built on a branch and merged via a pull
request, allowing you to
easily try them out using Cargo. If you want to try a specific branch from
the repository you can use the
--git and
--branch options:
cargo install --git --branch main
See also
For more installation options refer to the Cargo book.
Platform Specific Instructions¶ | https://rtrtr.docs.nlnetlabs.nl/en/latest/building.html | 2022-09-25T09:09:59 | CC-MAIN-2022-40 | 1664030334515.14 | [] | rtrtr.docs.nlnetlabs.nl |
Get some Signa
To create a profile, add some collections or upload NFTs you need some Signa. For example a profile update costs 0.05 Signa, same amount for collections, and minting an NFT costs 0.32 Signa.
- CEXs (Centralized Exchanges):
If you don´t have access to any CEX above, you can also donate to the SNA (Signum Network Association). If you add your Signum-Account to the donation via paypal or direct bank transfer, the SNA will send you some Signa to the given Signum account. The SNA will deduct around 25% from the donation for external and internal costs and the Signa amount will be calculated and flatted based on the current market price.
The donation is intended to help easing the entry into the Signum ecosystem. In addition, these donations help the SNA to fulfil its purpose. A donation of 10 Euro, CHF or USD is already more than enough to have your NFTs minted on the portal and you have contributed your part to support the Signum network
The Signum Network Association (SNA) promotes the development and awareness of the open-source blockchain Signum (signum.network), which is extremely energy-efficient due to a unique mining process and thus can become an important building block for a sustainable digital future. Read more about the SNA vision here .
You can make a bank transfer to the following accounts in EUR or CHF:
You can also send a donation via PayPal in EUR or USD
Scan the following QR-Codes on your mobile if you like to donate.
Please add your Signa account that you have created or imported into the XT Wallet to the donation transfer - otherwise SNA cannot send you Signa as a starter kit. | https://docs.signum.network/nftportal/get-some-signa | 2022-09-25T08:57:57 | CC-MAIN-2022-40 | 1664030334515.14 | [array(['https://archbee-image-uploads.s3.amazonaws.com/l2AIpSUU6srSQmG1-K2bb/UvwoDZsq1Qs6zWsjDx3XL_qrcodeusdblacklogo.png?auto=format&ixlib=react-9.1.1&h=3000&w=3000',
'USD Paypal'], dtype=object)
array(['https://archbee-image-uploads.s3.amazonaws.com/l2AIpSUU6srSQmG1-K2bb/ovxxUX5fY6X7amm3NzrpX_qrcodeeuroblacklogo.png?auto=format&ixlib=react-9.1.1&h=3000&w=3000',
'EUR Paypal'], dtype=object) ] | docs.signum.network |
This tool uses constantly evolving Price Action to predict the direction of Price. It is a highly predictive tool that provides insights into the potential price highs and lows.
The tool is ticked off as the default setting but can be turned on simply by checking the box next to the indicator name in the settings.
The Price Projection tool uses 3 main colors to show the direction of Price Action; Red for above Price Action, Green for below Price Action, and Yellow for centerline Price Action.
The Red Line is above the Price Action and when the price reaches this level it increases the probability of a Reversal to the downside.
The Green Line is below the Price Action and when the price reaches this level it increases the probability of a Reversal to the upside.
The Yellow Line is the Centerline of the Current Price Action. When the Price gains or loses this level it can indicate that the price will continue in that direction.
The best way to use the indicator is on the lower timeframes such as the 15-minute. Look for Confluence as price reverses off of the Green or Red levels or gains/loses the yellow centerline. It is important to use this indicator with the “Confluence” of other key tools such as S & R levels, Key Timeframes, etc.
Adjusts the type of lines used by the Price Projection Tool. The default line is solid but can be changed to dotted or dashed | https://docs.trendmaster.com/auto-charting-tools/settings/price-projection | 2022-09-25T07:06:32 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.trendmaster.com |
The Kinetic PreProcessor: KPP¶
An Environment for the
Simuation of Chemical Kinetic Systems
Adrian Sandu1, Rolf Sander2,
Michael S. Long3, Haipeng Lin4,
Robert M. Yantosca4, Lucas Estrada4, Lu Shen5, and Daniel J. Jacob4
1 Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
2 Max-Planck Institute for Chemistry, Mainz, Germany
3 Renaissance Fiber, LLC, North Carolina, USA
4 Harvard John A. Paulson School of Engineering and Applied Sciences, Cambridge, MA, USA
5 School of Physics, Peking University, Bejing, China
This site provides instructions for KPP, the Kinetic PreProcessor.
Contributions (e.g., suggestions, edits, revisions) would be greatly appreciated. See Editing this User Guide and our contributing guidelines. If you find something hard to understand—let us know!
Getting Started
- KPP revision history
- Installation
- Running KPP with an example stratospheric mechanism
Using KPP
Technical information
- Information for KPP developers
- Numerical methods
- BNF description of the KPP language
KPP Reference
Help and Support | https://kpp.readthedocs.io/en/latest/?badge=latest | 2022-09-25T09:12:19 | CC-MAIN-2022-40 | 1664030334515.14 | [] | kpp.readthedocs.io |
The following dialogs let you control and manipulate image structures, such as layers, channels, or paths.
The “Layers” dialog is the main interface to edit, modify and manage your layers.
The “Layers” dialog is a dockable dialog; see the section Afsn can find in the context menu that you get by through a right-click on the layer. Afsnit 2, “Layer Modes”.
By moving the slider you give more or less opacity to the layer. With a 0 opacity value, the layer is transparent and completely invisible. Don't confuse this with a Layer Mask, which sets the transparency pixel by pixel.
You have three possibilities:
Lock pixels: When the button is pressed down, you cannot use any brush-based tool (Paintbrush, Pencil, Eraser etc.), the Airbrush or the Ink tool on the currently selected layer. This may be necessary to protect them from unwanted changes.
Lock position and size: This toggle button enables and disables protection of layers from moving them around or transforming them. When the button is pressed down, you cannot use any transform tool (Rotate, Shear, Perspective and others) or move it..
Figur 15.
Under the layer list a set of buttons allows you to perform some basic operations on the layer list.
Here you can create a new layer. A dialog is opened, which is described in New Layer.
Press the Shift key to open a new layer with last used values.
Here you can create a new layer group. A new layer is created, where you can put layers down.
Layer groups are described in Layer groups..
Before GIMP-2.10.18, this button was permanently for anchoring.
Now, it becomes an anchor only when a floating selection is
created (it anchors the floating selection to the previous
active layer). Else, it is a
with several possibilities:
Merge this layer with the first visible layer below it.
Pressing Shift: merge the layer group's layers into one normal layer.
Pressing Ctrl: merge all visible layers into one normal layer.
Pressing Shift+Ctrl: merge all visible layers with last used values. number of pixels.
Figur 15.7. A layer with layer mask
This image has a background layer with a flower and another blue one, fully opaque. A white layer mask has been added to the blue layer. In the image window, the blue layer remains visible because a white mask makes layer pixels visible.
Figur 15.8. Painting the layer mask
The layer mask is active. You paint with black color, which makes the layer transparent: the underlying layer becomes visible. | https://docs.gimp.org/2.10/da/gimp-dialogs-structure.html | 2022-09-25T08:52:13 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.gimp.org |
2022-01-26 .nz registry replacement technical newsletter¶
Kia ora!
Welcome to the first .nz registry replacement technical newsletter in 2022.
You are receiving it because you have nominated yourself as the primary contact for the .nz registry replacement project.
In today’s newsletter:
IRS go-live date preferences
Key dates to note
Onboarding connection form
Technical changes documentation
Operational testing environment (OTE) updates
Billing changes from monthly to yearly
Registrars technical forum
IRS go-live date preferences
We are planning for the InternetNZ Registry System (IRS) migration date, and are keen to hear about your preferences of time and days of the week for it.
You received an email regarding the migration last week. If you wish to share your preference you will need to do so by the end of January.
Key dates to note
February 2022:
We’ll let you know the IRS migration date..
June 2022: official testing ends and production onboarding kicks off.
Onboarding connection form
You will get a follow-up phone call if you have missed the onboarding connection form deadline. We are happy to offer help with filling out the form if needed. It is essential to send your connection form back, so we can give you access to the OTE for testing and migration to the IRS later in the year.
Technical changes documentation
The first version of the technical changes documentation is available online. We are planning to have it updated with each IRS release.
Technical changes from Legacy SRS to IRS (InternetNZ Product Documentation website).
Any feedback/comments on the documentation will help us shape the upcoming versions. Please email your feedback/comments to [email protected].
Billing changes from monthly to yearly
The .nz domain registration minimum term is changing from one month to one year. This change will come into effect after the IRS goes live.
The change in billing will affect your customers if they pay for their domains monthly. We’ll be in touch with you and your accounts team directly to help transition your .nz registrants from monthly to yearly renewals over the next few months.
If you only offer annual registrations of .nz domains, the change of billing won’t affect your registrants.
Registrars technical forum
The next registrars technical forum is scheduled for mid-February. We will cover the billing changes from monthly to yearly. You will receive a calendar invitation shortly.
If you have any topics in mind that you would like us to cover in the upcoming forum, please email these to [email protected]
Join our Slack channel and ask questions
We encourage you to connect with us and with your fellow .nz registrars through the Slack channel.
You can join via this link: Registry replacement slack channel | https://docs.internetnz.nz/registry/comms/emails/2022-01-26_nz-registry-replacement-technical-newsletter/ | 2022-09-25T07:13:34 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.internetnz.nz |
An Act to create 610.60 of the statutes; Relating to: electronic delivery by property and casualty insurers of notices and documents.
Amendment Histories
2013 Wisconsin Act 73 (PDF: )
2013 Wisconsin Act 73: LC Act Memo
Bill Text (PDF: )
LC Amendment Memo
AB373 ROCP for Committee on Insurance On 10/31/2013 (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2013 Senate Bill 292 - Hold (Available for Scheduling) | https://docs.legis.wisconsin.gov/2013/proposals/ab373 | 2022-09-25T07:06:55 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.legis.wisconsin.gov |
$ curl -k -u <username>@<profile>:<password> \ (1) https://<engine-fqdn>/ovirt-engine/api (2)
In OpenShift Container Platform version 4.9, RHV.
You created a registry on your mirror host and obtained the
imageContentSources data for your version of OpenShift Container Platform.
You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes..
To install and run an OpenShift Container Platform version 4.9 cluster, the RHV environment must meet the following requirements.
Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation.
The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations.
By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources.
If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly.
The RHV version is 4.4.
The RHV environment has one data center whose state is Up.
The RHV data center contains an RHV cluster.
The RHV cluster has the following resources exclusively for the OpenShift Container Platform RHV storage domain must meet these etcd backend performance requirements.
In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster.
To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process.
The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP.
A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster:
DiskOperator
DiskCreator
UserTemplateBasedVm
TemplateOwner
TemplateCreator
ClusterAdmin on the target cluster
Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures.
Check that the RHV version supports installation of OpenShift Container Platform version 4.9.
In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About.
In the window that opens, make a note of the RHV Software Version.
Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV.
Inspect the data center, cluster, and storage.
In the RHV Administration Portal, click Compute → Data Centers.
Confirm that the data center where you plan to install OpenShift Container Platform is accessible.
Click the name of that data center.
In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on.
Inspect the RHV host resources.
In the RHV Administration Portal, click Compute > Clusters.
Click the cluster where you plan to install OpenShift Container Platform.
In the cluster details, click the Hosts tab.
Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform OpenShift Container Platform OpenShift Container Platform has access to the RHV Manager’s REST API. From a virtual machine on this network, use curl to reach the RHV Manager’s REST API:
$ curl -k -u <username>@<profile>:<password> \ (1) https://<engine-fqdn>/ovirt-engine/api (2)
For example:
$ curl -k -u ocpadmin@internal:pw123 \
All the Red Hat Enterprise Linux CoreOS (RH.
OpenShift Container Platform Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers.
To run the binary
openshift-install installation program and Ansible scripts, set up the RHV Manager or an Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager.
Update or install Python3 and Ansible. For example:
# dnf update python3 ansible
Install the
python3-ovirt-engine-sdk4 package to get the Python Software Development Kit.
Install the
ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the
ovirt-ansible-image-template package. For example, enter:
# dnf install ovirt-ansible-image-template
Install the
ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the
ovirt-ansible-vm-infra package.
# dnf install ovirt-ansible-vm-infra
Create an environment variable and assign an absolute or relative path to it. For example, enter:
$ export ASSETS_DIR=./wrk,
https://<engine-fqdn>/ovirt-engine/. Then, under Downloads, click the CA Certificate link.
Run the following command:
$ curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' ..
Download the Ansible playbooks for installing OpenShift Container Platform version 4.9 on RHV.
On your installation machine, run the following commands:
$ mkdir playbooks
$ cd playbooks
$ curl -s -L -X GET | grep 'download_url.*\.yml' | awk '{ print $2 }' | sed -r 's/("|",)//g' | xargs -n 1 curl -O OpenShift Container Platform cluster you are installing. This includes elements such as the Red Hat Enterprise Linux CoreOS (RH OpenShift Container Platform cluster in a RHV RHV cluster in which to install the OpenShift Container Platform Manager.
image_url: Enter the URL of the RHCOS image you specified for download.
local_cmp_image_path: The path of a local download directory for the compressed RHCOS image.
local_image_path: The path of a local directory for the extracted RH/RHV RHV./RHV OpenShift Container Platform: "" Manager:...' Red Hat Enterprise Linux CoreOS (RH_perf | https://docs.openshift.com/container-platform/4.9/installing/installing_rhv/installing-rhv-restricted-network.html | 2022-09-25T08:59:15 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.openshift.com |
sending a (multi) search request with one or more nested joins, the node receiving the request
will become the “Coordinator”. The coordinator node is in charge of controlling and executing a “Job” across the
available nodes in the cluster. A job represents the full workflow of the execution of a (multi) search request.
A job is composed of one or more “Tasks”. A task represent a single type of operations, such as a
Search/Project
or
Join, that is executed by a “Worker” on a node. A worker is a thread that will perform a task and report the
outcome of the task to the coordinator.
For example, the following search request joining the index
companies with
articles:
GET /_siren/companies/search { "query" : { "join" : { "type": "HASH_JOIN", "indices" : ["articles"], "on": ["id", "mentions"], "request" : { "query" : { "match_all": {} } } } } }
will produce the following workflow:
The coordinator will execute a
Search/Project task on every shard of the
companies and
articles
companies index. These documents ids will be transferred back to their respective shards and used to
filter the
companies. | https://docs.support.siren.io/siren-federate-user-guide/20/siren-federate/architecture.html | 2022-09-25T09:06:04 | CC-MAIN-2022-40 | 1664030334515.14 | [array(['_images/siren-federate-architecture-workflow.png',
'Query Workflow'], dtype=object) ] | docs.support.siren.io |
JUMP TOMaster DataMaster Data - AdvertiserGET AdvertisergetList AdvertisersgetCreate an AdvertiserpostUpdate an AdvertiserputMaster Data - AgencyGet an AgencygetList AgenciesgetMaster Data - ClientGET ClientgetList ClientsgetUpdate a ClientputCreate a ClientpostMaster Data - DivisionGet a DivisiongetList DivisionsgetMaster Data - MarketGet MarketgetList MarketsgetMaster Data - MediaTypeList MediaTypesgetMaster Data - OfficeGet an OfficegetList OfficesgetMaster Data - ProductGet ProductgetList ProductsgetUpdate ProductsputCreate a ProductpostMaster Data - VendorGet a VendorgetList VendorsgetUpdate a VendorputCreate a VendorpostFW Financial API OrdersFW Financial API OrdersList OrdersgetRetrieve an Order by IDgetCreate an Order ResponsepostCreate a Vendor Payment ResponsepostFW Financial API Vendor InvoicesFW Financial API - Vendor Invoices - APList Vendor InvoicesgetGet a Vendor Invoice by IDgetCreate a Vendor Invoice ResponsepostCreate a Vendor Invoice Payment ResponsepostClient Bill - ARFW Financial API - Client Bill - ARList Client BillsgetGet Client Bill by IDgetCreate a Client Bill ResponsepostCreate a Client Bill Payment ResponsegetInvoiceMatchFW Financial API InvoiceMatchCreate Vendor Invoice Match StatuspostResetDataFW Financial API Reset DataCreate Data ResetpostFWA Order Ingestion Financial APIFWA Order Ingestion Financial APIGet Vendor(s)getList Media TypesgetCreate or Update an OrderpostGet Order StatusgetFW Financial API - Vendor Invoices - APVendor Invoices are available in the API once they have been cleared in FreeWheel. Basic Attributes The following attributes are returned by the API calls in this section. Field NameDescriptionData Type (Length) Id Unique ID for every time an invoice is cleared, unlocked, re-cleared, etc.string billId Unique ID for every time an invoice is cleared, unlocked, re-cleared, etc.integer currencyCode Currency code (USD default)string invoiceGrossCost Invoice total gross dollarsnumber invoiceNetCost Invoice total net dollarsnumber invoiceGrossCash Invoice gross cash dollarsnumber invoiceGrossTrade Invoice gross trade valuenumber totalUnits Total units within invoiceinteger syscode Cable Vendor system code (syscode)string (5) agencyID Agency IDinteger agencyName Agency namestring (50) agencyCode Optional Agency billing codestring (50) officeID Office IDinteger officeName Office namestring (50) officeCode Office billing code (Optional)string (50) bottomLineInvoice True/False indicating that the invoice is bottom-line and will have no invoice detail information.boolean vendor a. vendorRefIdFW unique Vendor IDintegerb. vendorName**Spot**: Vendor Name**Network**: Vendor Name**Print/Outdoor**: Publication/Vendor Name**Digital**: Publisher/Fee Vendor Namestring (101)c. vendorCodeVendor billing codestring (50)d. alternateBillCodeAlternate Vendor billing codestring (30)e. alternateBillTrue/False If True, the alternate Vendor billing code is used for Vendor paymentbooleanf. isFeeTrue/FalseTrue: MB Fee % is not applied to the Vendor; False: MB Fee % is applied to the Vendorbooleang. mediaRefIdFW media type IDintegerh. mediaNameMedia name associated with vendor media typestring (50)i. mediaCodeMedia type code associated with vendor media typestring (50) clearedDate Date/Time Stamp when invoice was exported/cleared for paymentstring (23) unlockedBillId Bill ID of related invoice when a negative/ reverse entry is created for unlocked invoicesinteger matchOnNet Spot Only: Indicates whether the invoice was matched on net instead of grossboolean verificationNet True/False: Indicates whether the Invoice Total Net Cost represents the Verification Net (true) or the Net (false) on the invoice recordboolean invoiceNetTrade Total net trade cost on invoicenumber invoiceGrossInvestment Total gross cost on invoice identified as Investmentnumber invoiceNetInvestment Total net cost on invoice identified as Investmentnumber totalTradeUnits Number of units that are tradeinteger invoiceTax1 First tax amount on invoicenumber tax1Type Tax type for invoiceTax1Values are: statelocalGSTPSTHSTQSTstring invoiceTax2 Second tax amount on invoicenumber tax2Type Tax type for InvoiceTax2Values are: statelocalGSTPSTHSTQSTstring tax1GLAccount GL Account associated with Tax 1string (50) tax2GLAccount GL Account associated with Tax 2string (50) totalAmountDue Total amount due on invoicenumber cashInAdvance True/False: Indicates whether the invoice is a pre-pay (cash-in-advance) invoiceboolean repBill True/False: Indicates whether the Rep Bill Code should be used in place of the Vendor Code or Alternate Bill Codeboolean repBillCode Rep Billing Codestring (50) billingMonthType Indicate whether the station/package/pub/ publisher/fee Vendor is billed by Broadcast Month (B), Calendar Month (C) or Fiscal (F)string (1) InvoiceEstimates a. estimateIdSystem generated estimate numberintegerb. estimateCodeEstimate Billing code (Optional)string (50)c. invoiceGrossCostAMOUNT OF GROSS COST BEING PAID ON ESTIMATEintegerd. invoiceNetCostAMOUNT OF NET COST BEING PAID ON ESTIMATEintegere. invoiceGrossCashGROSS CASH AMOUNT BEING PAID ON ESTIMATEintegerf. invoiceGrossTradeGROSS TRADE AMOUNT BE PAID ON ESTIMATEintegerg. totalUnitsTOTAL INVOICE UNITS BEING PAID ON ESTIMATEintegerh. mediaOrderIdSystem generated ID associated with the Vendor on the estimate/campaigni. Integerj. stationCallLetters**Spot**: Vendor station call letters**Network**: Package Name**Print/Outdoor, Digital**: N/Astring (50)k. stationBandCode**Spot**: Vendor station band code (AM, FM, TV)**Network**: Media type**Print/Outdoor, Digital**: N/Astring (50)l. clientRefIdFreeWheel client IDintegerm. clientNameClient namestring (50)n. clientCodeClient billing code (Optional)string (50)o. divisionRefIdFreeWheel ID for the divisionintegerp. divisionNameDivision Name (Optional)string (50)q. divisionCodeDivision billing code (Optional)string (50)r. productRefIdFreeWheel ID for the productintegers. productNameProduct namestring (50)t. productCodeProduct billing code (Optional)string (50)u. networkPackageCodeNetwork Only: Number used to identify Network packagesintegerv. paidByClientVendor invoice paid by Clientbooleanw. invoiceDetailsInvoice Detail Line informationi. billDetailIdUnique ID for every time an invoice detail item is cleared, unlocked, re-cleared, etc.integerii. billIdUnique ID for every time an invoice is cleared, unlocked, re-cleared, etc.integeriii. idUnique identifier for the invoice detail itemintegeriv. invoiceIdUnique identifier for the invoiceintegerv. networkSpot only: Cable network associated with detail itemstring (50)vi. invoiceDetailDateInvoice Detail Date**Spot and Network**: Unit date**Print**: Insertion Date**Outdoor**: Start date**Digital**: Start datestring (23)vii. invoiceDetailTimeRun time of the Unit for Spot and Network String (23)string (23)viii. invoiceDetailEndDateInvoice Detail End Date**Spot and Network**: N/A**Print**: N/A**Outdoor**: End date**Digital**: End dateix. invoiceDetailGrossCostGross cost of the line detailnumberx. invoiceDetailNetCostNet cost of the line detailnumberxi. unitLengthUnit length for the line detail for Spot and Networkintegerxii. isciCodeSpot/Network: ISCI (commercial) ID for the adstring (50)xiii. cashOrTradeSpot and Network by unit: C = Cash, T = Tradestring (1)xiv. commentsInvoice line commentsstring (4000)xv. programNameProgram for line detail for Spot and Networkstring (200)xvi. numberOfUnitsNumber of invoice line unitsintegerxvii. InvoiceOrderLinesOrder line detail pertaining to invoice linea) marketNameMarket namestring (50)b) marketCodeMarket codestring (50)c) invoiceDetailGrossCostGross amount attributed to order lineintegerd) invoiceDetailNetCostNet amount attributed to order lineintegere) lineIdUnique line ID associated with the order Note: For Digital the line ID is unique by package or by placement lineintegerf) isPackageDigital Only: True/False used to identify if the line ID is associated with the package or placement linebooleang) orderLineNumberUnique line number by orderintegerh) commissionCommission/Agency discount %numberi) netChargeNon-commissionable charge or fee associated with Vendornumberj) daypartCodeDaypart code for the line detailstring (50)k) rateTypeDigital Rate Typestring (50)l) productName NetworkOnly: Product namestring (50)m) productRefIdFreeWheel product Id (Network Only)integern) productCodeNetwork Only: Product codestring (50)o) mediaChannelRefIdFreeWheel ID for the media channel / booking category codeintegerp) mediaChannelNameName of the media channel / booking categorystringq) mediachannelCodeExternal code for the media channel / booking categorystringr) orderMonthMonth of order corresponding to matched invoice linestringxviii. prepaidInvoiceDetaila) amountAmount on Invoicenumberb) invoiceNumberPrepaid Invoice Numberstringc) invoiceDateDate of InvoiceDatetime invoiceMonth invoice month on invoicestring bottomLineAdjustment Spot only: Adjustment amount for bottom-line invoicesnumber bottomLineAdjustmentIncluded Spot only: True/False – indicates if the BL adjustment amount should be included in the Vendor payment amountboolean invoiceStatus Invoice statusstring (1) invoiceNotes Vendor Invoice commentsstring (4000) invoiceId Unique identifier for the invoiceinteger invoiceNumber Vendor Invoice numberstring (255) invoiceDate Vendor Invoice datestring (23) invoiceDueDate Vendor Invoice due datestring (23) | https://api-docs.freewheel.tv/financial/reference/vendor-invoices | 2022-09-25T09:04:20 | CC-MAIN-2022-40 | 1664030334515.14 | [] | api-docs.freewheel.tv |
Throughput
Response. Min Throughput Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets minimum throughput in measurement of request units per second in the Azure Cosmos service.
public int? MinThroughput { get; }
member this.MinThroughput : Nullable<int>
Public ReadOnly Property MinThroughput As Nullable(Of Integer)
Property Value
- System.Nullable<System.Int32> | https://docs.azure.cn/en-us/dotnet/api/azure.cosmos.throughputresponse.minthroughput?view=azure-dotnet-preview | 2022-09-25T08:30:59 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.azure.cn |
You can now clone the existing integration app setting values and mappings from one store to another within the BigCommerce integration tile. This utility provides you the feasibility to set the base store and destination stores as per your business needs. You can overwrite the setting values and mappings from the base store to the selected stores within the integration tile. You can also choose to ignore the clone feature for any specific stores, settings, or mappings. You can run the cloned flows to any account. It is not mandatory that you can only run flow in the integration app tile in the same account where the cloning template is installed.
- As of now, the cloning utility can be installed from the marketplace. In the future, it is pipelined to be available via the integrator.io platform.
- You can clone your Integration App setting values and mapping from one base store to one or many stores within the same integration tile using the Clone BigCommerce - NetSuite Integration App template for the US and EU marketplaces.
The integration does not allow you to sync the following components:
- Saved searches
- Any customizations in your NetSuite account
- Sync data from one integration tile to another tile (including sandbox to production and vice versa).
- Any installed add-ons.
Install the template utility
- Be sure that the required stores (base store and destination store) are already installed in the existing integration app.
- BigCommerce.
- On the Clone BigCommerce - NetSuite template, click Install.
You can preview the components that are packaged as part of the selected integration.
- On the “Install template” window, click Install now.
- Read the Disclaimer, and accordingly click Proceed.
You will be navigated to configure your integrator.io connection.
Set up your connection
- After you installed the template, you will be navigated to the setup page. On the setup page, next to the integrator.io connection, click Click to Configure.
- You can choose to set up either a new connection or an existing connection.
- If you choose to Use Existing Connection, select the connection from the drop-down list.
- If you choose to Setup New Connection, proceed with the below next steps.
- Enter your connection name.
- Select the appropriate region as per your business from the Region drop-down.
Note: If an error is displayed while testing the connection, re-check your details, otherwise contact Celigo Support.
Understand the custom settings tab
You can find the custom settings in the “Settings” tab. You will find the following details:
- Integration ID: By default, the integration ID is mentioned here that needs to be replaced. Enter the integration app ID in which you want to perform the cloning.
- Base store name: Enter the base store name. You can change the base store name as per the available stores in your integration tile.
- Stores to be ignored: By default, the value of this setting is empty. If you wish to ignore the cloning for any of the stores, mention the store name in this setting. If you do not mention any store name in this setting, the settings and mappings are cloned from the source store to all the other stores.
Note: Enter the input details in an array format.
- Mappings not to be replaced: Below this, specify the mappings you don’t want to sync to the other integration stores. By default, few mandatory mappings are specified. All the mappings are segregated below their respective import. The mappings are further segregated as “body-level” and “line-level.” Each list type will have its own grouping. For non-list (root level), the mappings are mentioned as body-level. See the image below.
Note: In the “MappingsNotToBeReplaced” setting, stores. By default, all the settings related to saved searches, GST/VAT, and primary lookup criteria (to sync customers) are specified. As per your business needs, you can specify any other setting names that should not be cloned to the other store.
Example: If you don’t want to clone the tax code to the other stores, you can add that appropriate setting label name Default tax code when no match is found in NetSuite in this section.
Run the integration flows
Note: Be sure to first run the “Sync base store setting values to other stores” flow.
- Next to the respective flow, click on the toggle button, to enable the “Sync base store setting values to other stores” and “Sync base store mappings to other stores” flows.
- Click Run.
Note: The “Sync base store mappings to other stores” flow automatically runs after the “Sync base store setting values to other stores” flow, as it is a sequenced flow.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360059668332-Clone-mappings-and-setting-values-between-stores-within-the-BigCommerce-integration-tile- | 2022-09-25T07:31:42 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.celigo.com |
Compute@Edge log streaming: Google Cloud Pub/Sub
Last updated 2022-09-20
Fastly's Real-Time Log Streaming feature for Compute@Edge services can send log files to Cloud Pub/Sub, Google's global messaging and event data ingestion product.
Fastly does not provide direct support for third-party services. See Fastly's Terms of Service for more information.
Prerequisites
Before adding Cloud Pub/Sub as a logging endpoint for Fastly Compute@Edge.
If you elect to use Google service account impersonation in order to avoid storing keys with Fastly you may use this same service account for that purpose. Our guide to creating an Google IAM role provides further details.
Read more about Cloud Pub/Sub in Google’s documentation.
Adding Cloud Pub/Sub as a logging endpoint
Follow these instructions to add Cloud Pub/Sub as a logging endpoint:
Review the information in our Setting Up Remote Log Streaming guide.TIP
Our developer documentation provides more information about logging with Compute@Edge code written in Rust, AssemblyScript, and JavaScript.
- Click the Google Cloud Pub/Sub Create endpoint button. The Create a Google Cloud Pub/Sub endpoint page appears.
- Fill out the Create a Google Cloud Pub/Sub endpoint fields as follows:
- In the Name field, enter the name you specified in your Compute@Edge code. For example, in our Rust code example, the name is
my_endpoint_name.
- In the Project ID field, enter the ID of your Google Cloud Platform project.
- In the Topic field, enter the Pub/Sub topic to which logs should be sent.
- In the Access Method area, select how Fastly will access Google resources for purposes of log delivery. Valid values are User Credentials and IAM Role. Read our guide on creating a Google IAM role for more information.
- If you selected User Credentials, enter the following fields:
- In the Email field, enter the email address of the service account configured for your Pub/Sub topic.
- In the Secret Key field, enter the exact value of the private key associated with the service account configured for your Pub/Sub topic.
- If you selected IAM Role, enter the following field:
- In the Service Account Name field, enter the name of the service account email address you selected when configuring Google IAM service account impersonation.
- Click the Create button to create the new logging endpoint.
- Click the Activate button to deploy your configuration changes. | https://docs.fastly.com/en/guides/compute-log-streaming-google-cloud-pubsub | 2022-09-25T08:12:44 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.fastly.com |
Code Development Guidelines
This document describes the coding requirements and guidelines to be followed during the development of PlasmaPy and affiliated packages.
Code written for PlasmaPy must be compatible with Python 3.8 and later.
Coding Style
TL;DR: use pre-commit
PlasmaPy has a configuration for the pre-commit framework that takes care of style mostly automatically.
Install it with
pip install pre-commit, then use
pre-commit install within
the repository.
This will cause pre-commit to download the right versions of linters we use,
then run an automated style checking suite on every commit. Do note that this
works better with a
git add, then
git commit workflow than a
git commit
-a workflow — that way, you can check via
git diff what the automated
changes actually did.
Note that the “Style linters / pre-commit (pull_request)” part of our Continuous Integration system can and will (metaphorically) shout at you if it finds you didn’t apply the linters. Also note that the linters’ output may vary with version, so, rather than apply black and isort manually, let pre-commit do the version management for you instead!
Our pre-commit suite can be found in .pre-commit-config.yaml. It includes
PlasmaPy Code Style Guide, codified
PlasmaPy follows the PEP8 Style Guide for Python Code. This style choice helps ensure that the code will be consistent and readable.
Line lengths should be chosen to maximize the readability and elegance of the code. The maximum line length for Python code in PlasmaPy is 88 characters.
Docstrings and comments should generally be limited to about 72 characters.
During code development, use black to automatically format code and ensure a consistent code style throughout the package and isort to automatically sort imports.
Follow the existing coding style within a subpackage. This includes, for example, variable naming conventions.
Use standard abbreviations for imported packages when possible, such as
import numpy as np,
import matplotlib as mpl,
import matplotlib.pyplot as plt, and
import astropy.units as u.
__init__.pyfiles for modules should not contain any significant implementation code, but it can contain a docstring describing the module and code related to importing the module. Any substantial functionality should be put into a separate file.
Use absolute imports, such as
from plasmapy.particles import Particle, rather than relative imports such as
from ..particles import Particle.
Use
Optional[type]for type hinted keyword arguments with a default value of
None.
There should be at least one pun per 1284 lines of code.
Avoid using
lambdato define functions, as this notation may be unfamiliar to newcomers to Python.
Branches, commits, and pull requests
Before making any changes, it is prudent to update your local repository with the most recent changes from the development repository:
git fetch upstream
Changes to PlasmaPy should be made using branches. It is usually best to avoid making changes on your main branch so that it can be kept consistent with the upstream repository. Instead we can create a new branch for the specific feature that you would like to work on:
git branch *your-new-feature*
Descriptive branch names such as
grad-shafranov or
adding-eigenfunction-poetry are helpful, while vague names like
edits are considered harmful. After creating your branch locally,
let your fork of PlasmaPy know about it by running:
git push --set-upstream origin *your-new-feature*
It is also useful to configure git so that only the branch you are working on gets pushed to GitHub:
git config --global push.default simple
Once you have set up your fork and created a branch, you are ready to make edits to PlasmaPy. Switch to your new branch by running:
git checkout *your-new-feature*
Go ahead and modify files with your favorite text editor. Be sure to include tests and documentation with any new functionality. We recommend reading about best practices for scientific computing. PlasmaPy uses the PEP 8 style guide for Python code and the numpydoc format for docstrings to maintain consistency and readability. New contributors should not worry too much about precisely matching these styles when first submitting a pull request, GitHub Actions will check pull requests for PEP 8 compatibility, and further changes to the style can be suggested during code review.
You may periodically commit changes to your branch by running
git add filename.py git commit -m "*brief description of changes*"
Committed changes may be pushed to the corresponding branch on your GitHub fork of PlasmaPy using
git push origin *your-new-feature*
or, more simply,
git push
Once you have completed your changes and pushed them to the branch on GitHub, you are ready to make a pull request. Go to your fork of PlasmaPy in GitHub. Select “Compare and pull request”. Add a descriptive title and some details about your changes. Then select “Create pull request”. Other contributors will then have a chance to review the code and offer constructive suggestions. You can continue to edit the pull request by changing the corresponding branch on your PlasmaPy fork on GitHub. After a pull request is merged into the code, you may delete the branch you created for that pull request.
Commit Messages
Good commit messages communicate context and intention to other developers and to our future selves. They provide insight into why we chose a particular implementation, and help us avoid past mistakes.
Suggestions on how to write a
Documentation
All public classes, methods, and functions should have docstrings using the numpydoc format.
Docstrings may be checked locally using pydocstyle.
These docstrings should include usage examples.
Warnings and Exceptions
Debugging can be intensely frustrating when problems arise and the associated error messages do not provide useful information on the source of the problem. Warnings and error messages must be helpful enough for new users to quickly understand any problems that arise.
“Errors should never pass silently.” Users should be notified when problems arise by either issuing a warning or raising an exception.
The exceptions raised by a method should be described in the method’s docstring. Documenting exceptions makes it easier for future developers to plan exception handling.
Units
Code within PlasmaPy must use SI units to minimize the chance of ambiguity, and for consistency with the recognized international standard. Physical formulae and expressions should be in base SI units.
Functions should not accept floats when an Astropy Quantity is expected. In particular, functions should not accept floats and make the assumption that the value will be in SI units.
A common convention among plasma physicists is to use electron-volts (eV) as a unit of temperature. Strictly speaking, this unit corresponds not to temperature but is rather a measure of the thermal energy per particle. Code within PlasmaPy must use the kelvin (K) as the unit of temperature to avoid unnecessary ambiguity.
PlasmaPy uses the astropy.units package to give physical units to values.
All units packages available in Python presently have some limitations, including incompatibility with some NumPy and SciPy functions. These limitations are due to issues within NumPy itself. Many of these limitations are being resolved, but require upstream fixes.
Dimensionless units may be used when appropriate, such as for certain numerical simulations. The conventions and normalizations should be clearly described in docstrings.
Equations and Physical Formulae
If a quantity has several names, then the function name should be the one that provides the most physical insight into what the quantity represents. For example,
gyrofrequencyindicates gyration, whereas
Larmor_frequencyindicates that this frequency is somehow related to someone named Larmor. Similarly, using
omega_ceas a function name will make the code less readable to people who are unfamiliar with this particular notation.
Physical formulae should be inputted without first evaluating all of the physical constants. For example, the following line of code obscures information about the physics being represented:
>>> omega_ce = 1.76e7*(B/u.G)*u.rad/u.s
In contrast, the following line of code shows the exact formula which makes the code much more readable.
>>> omega_ce = (e * B) / (m_e * c)
The origins of numerical coefficients in formulae should be documented.
Docstrings should describe the physics associated with these quantities in ways that are understandable to students who are taking their first course in plasma physics while still being useful to experienced plasma physicists.
SI units that were named after a person should not be capitalized except at the beginning of a sentence.
Some plasma parameters depend on more than one quantity with the same units. In the following line, it is difficult to discern which is the electron temperature and which is the ion temperature.
>>> ion_sound_speed(1e6*u.K, 2e6*u.K)
Remembering that “explicit is better than implicit”, it is more readable and less prone to errors to write:
>>> ion_sound_speed(T_i=1e6*u.K, T_e=2e6*u.K)
SI units that were named after a person should be lower case except at the beginning of a sentence, even if their symbol is capitalized. For example, kelvin is a unit while Kelvin was a scientist.
Angular Frequencies
Unit conversions involving angles must be treated with care. Angles
are dimensionless but do have units. Angular velocity is often given
in units of radians per second, though dimensionally this is
equivalent to inverse seconds. Astropy will treat radians
dimensionlessly when using the
dimensionless_angles equivalency,
but
dimensionless_angles does not account for the multiplicative
factor of
2*pi that is used when converting between frequency (1 /
s) and angular frequency (rad / s). An explicit way to do this
conversion is to set up an equivalency between cycles/s and Hz:
>>> from astropy import units as u >>> f_ce = omega_ce.to(u.Hz, equivalencies=[(u.cy/u.s, u.Hz)])
However,
dimensionless_angles does work when dividing a velocity
by an angular frequency to get a length scale:
>>> d_i = (c/omega_pi).to(u.m, equivalencies=u.dimensionless_angles())
Examples
Examples in PlasmaPy are written as Jupyter notebooks, taking advantage
of their mature ecosystems. They are located in docs/notebooks.
nbsphinx
takes care of executing them at documentation build time and including them
in the documentation.
Please note that it is necessary to store notebooks with their outputs stripped (use the “Edit -> Clear all” option in JupyterLab and the “Cell -> All Output -> Clear” option in the “classic” Jupyter Notebook). This accomplishes two goals:
helps with versioning the notebooks, as binary image data is not stored in the notebook
signals
nbsphinxthat it should execute the notebook.
Note
In the future, verifying and running this step may be automated via a GitHub bot. Currently, reviewers should ensure that submitted notebooks have outputs stripped.
If you have an example notebook that includes packages unavailable in the
documentation building environment (e.g.,
bokeh) or runs some heavy
computation that should not be executed on every commit, keep the outputs in
the notebook but store it in the repository with a
preexecuted_ prefix, e.g.
preexecuted_full_3d_mhd_chaotic_turbulence_simulation.ipynb.
Benchmarks
PlasmaPy has a set of asv benchmarks that monitor performance of its functionalities. This is meant to protect the package from performance regressions. The benchmarks can be viewed at benchmarks. They’re generated from results located in benchmarks-repo. Detailed instructions on writing such benchmarks can be found at asv-docs. Up-to-date instructions on running the benchmark suite will be located in the README file of benchmarks-repo. | https://docs.plasmapy.org/en/stable/contributing/code_guide.html | 2022-09-25T07:52:00 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.plasmapy.org |
Rendering
SODA’s rendering API takes Annotation objects and represents them as SVG glyphs inside of Chart viewports in the DOM. This section explains the ins and out of that process.
Glyph rendering functions
To render glyphs, you simply make a call to a glyph rendering function with an appropriate GlyphConfig object. The properties on the config object describe which Annotation objects to use, where to render the glyphs, and how to style the glyphs. Each glyph rendering function has a corresponding config defined by an interface. For example, a simple call to the rectangle function with a minimally specified RectangleConfig may look like:
rectangle({ chart: chart, // "chart" is a Chart object annotations: annotations, // "annotations" is an array of Annotation objects });
Glyph selectors
The selector property in a GlyphConfig is a string used to help differentiate between distinct collections of glyphs that have been rendered using the same set of Annotation objects in the same Chart. SODA applies the selector as a CSS class on the DOM elements that make up the glyphs.
The selector property also allows a nuanced feature: if you use the same selector string in a subsequent call to a glyph rendering function, the glyphs from the prior call using that selector will be replaced.
Glyph properties
The properties in a GlyphConfig that control glyph styling are typed as GlyphProperty, which is simply an alias of the type union of a static value and a GlyphCallback. A GlyphCallback is another type alias for a simple callback function that takes an AnnotationDatum as the sole argument and returns a value.
For example, if we were to add static GlyphPropertie to a rectangle() call, it might look like:
rectangle({ chart: chart, annotations: annotations, fillColor: "red", fillOpacity: 0.5 });
To illustrate how to take full advantage of the flexibility of GlyphProperties, imagine we were using a custom Annotation data type:
interface CustomAnnotation implements Annotation { id: string; // <- the fields required by Annotation start: number; end: number; color: string; // <- our custom fields score: number; }
Then, we could use callback GlyphProperties like:
// explicit type parameters have been added here for clarity, but // the TypeScript compiler is usually smart enough to infer them rectangle<CustomAnnotation, Chart<RenderParams>>({ chart: chart, annotations: annotations, fillColor: (d: AnnotationDatum<CustomAnnotation, Chart<RenderParams>>) => d.a.color, fillOpacity: (d: AnnotationDatum<CustomAnnotation, Chart<RenderParams>>) => d.a.score });
The canonical rendering pattern
In SODA, the canonical rendering pattern is to define a rendering routine inside of a Chart object. The rendering routine described here is a pattern that we find straightforward, but it is by no means the only way to achieve a visualization. Once you know a bit about how SODA works, you should find it pretty easy to extend the Chart class and assume greater control over the fine details of rendering process.
Default rendering routine
The default rendering routine is broken up into several steps, which will be described in the following sections.
RenderParams
The RenderParams is an object that is passed as the sole argument into Chart.render(). The default RenderParams implementation looks like:
interface RenderParams { annotations?: Annotation[]; //<- the list of Annotation objects to render start?: number; //<- the start of the interval to be rendered end?: number; //<- the end of the interval to be rendered rowCount?: number; //<- fix the height of the chart to a number of rows }
You’ll notice that every property on the interface is optional. This means you can think of the default RenderParams implementation as something of a suggestion. However, the default rendering routine is set up to respond to the presence of each of the properties in this implementation. With that in mind, you may find some use in adapting or extending the default RenderParams.
Chart.render()
The render() method calls each of the configurable rendering callbacks in succession. Each of the callbacks receives the RenderParams object as an argument. The callbacks can be overwritten in the ChartConfig or reassigned at runtime.
public render(params: P): void { this.renderParams = params; this.updateLayout(params); this.updateRowCount(params); this.updateDimensions(params); this.updateDomain(params); this.draw(params); this.postRender(params); }
Chart.updateLayout()
The updateLayout() callback is responsible for producing a VerticalLayout for the Chart. By default, the rendering API uses the Chart’s layout object to vertically position glyphs into rows. By passing a list of Annotation objects into one of SODA’s layout functions, a VerticalLayout that guarantees no horizontal overlap will be produced.
The default updateLayout method looks like:
public defaultUpdateLayout(params: P): void { if (params.annotations != undefined) { this.layout = intervalGraphLayout(params.annotations); } }
Chart.updateRowCount()
The updateRowCount() callback is responsible for updating the Chart’s rowCount property. A handful of methods use the rowCount property to properly adjust the heights of the Chart’s DOM elements.
The default updateRowCount method looks like:
public defaultUpdateRowCount(params: P): void { this.rowCount = params.rowCount != undefined ? params.rowCount : this.layout.rowCount; }
Chart.updateDimensions()
The updateDimensions() callback is responsible for updating the Chart’s DOM element dimensions to accommodate the render. By default, only the Chart’s vertical dimensions are adjusting during a render call, and it is assumed that the rowCount is properly set before the method is called.
The default updateDimensions method looks like:
public defaultUpdateDimensions(params: P): void { this.updateDivHeight(); this.updatePadHeight(); this.updateViewportHeight(); }
Chart.updateDomain()
The updateDomain() callback is responsible for updating the Chart’s domain. This effectively controls the interval that is initially displayed after the render call finishes. Adjusting the domain can be thought of as applying zooming or panning on the Chart’s viewport.
The default updateDomain method looks like:
public defaultUpdateDomain(params: P): void { let domain = this.domain; if (params.start != undefined && params.end != undefined) { domain = [params.start, params.end]; } else if (params.annotations != undefined) { domain = Chart.getDomainFromAnnotations(params.annotations); } this.initialDomain = domain; this.domain = domain; }
Chart.draw()
The draw() callback is responsible for using the rendering API to place glyphs in the Chart. The default implementation calls Chart.addAxis() and renders the annotations as rectangle glyphs.
The default draw method looks like:
public defaultDraw(params: P): void { this.addAxis(); rectangle({ chart: this, annotations: params.annotations || [], selector: "soda-rect" }); }
Customizing the rendering routine
In building your own SODA visualization, most of the work is likely to be in customizing the draw() rendering callback. The default draw() produces a lackluster display of black rectangle glyphs. If you wanted to add some color, you could do something like this when you instantiate your Chart:
let chart = new Chart({ selector: "div#soda-chart", draw(this, params) { this.addAxis() rectangle({ chart: this, annotations: params.annotations || [], selector: "soda-rect", fillColor: "cyan" // <- this will make the rectangle glyphs cyan }) } });
Understanding the nuances of customizing the rendering routine is probably best learned by example, so check out the examples section to learn more.
Interactivity
SODA allows you to define callback functions that are called whenever a glyph is clicked or hovered. The callback functions are loosely typed by InteractionCallback. The InteractionCallback type serves as an indicator of the arguments SODA will pass to your callback function when it is executed:
type InteractionCallback<A extends Annotation, C extends Chart<any>> = { ( s: d3.Selection<any, AnnotationDatum<A, C>, any, any>, // <- a D3 selection to the glyph's DOM element d: AnnotationDatum<A, C> // <- a reference to the Annotation object and the Chart ): void; };
These arguments are passed in by default, and you are free to arbitrarily define the function body. If you already know a bit about D3 (or are willing to learn), you can use the Selection argument to modify the glyph in the DOM. With the AnnotationDatum argument, you gain access to the Annotation that the glyph was rendered with and the Chart that it is rendered in.
The interaction API is similar to the glyph rendering API: you simply make a call to an interaction function with an appropriate InteractionConfig object. For example, a simple call to the clickBehavior function with ClickConfig may look like:
clickBehavior({ annotations: annotations, // <- "annotations" is an array of Annotation objects click: (s, d) => { // <- "click" is applied alert(`${d.a.id} clicked`) } });
Glyph mapping
Internally, SODA maps Annotation objects to the glyphs that they have been used to render. Specifically, keys are built using the id property of the Annotation object, the selector used in the rendering call, and the id property of the target Chart. The mapping information can be accessed with the queryGlyphMap function, which returns D3 selections of the DOM elements that make up the glyphs. You can optionally specify any number of the components of the keys to build a query, effectively changing the granularity of the search.
Calls to the queryGlyphMap function may look like:
// this will return a single glyph let specificGlyph = queryGlyphMap({ id: "ann-1", chart: chart, selector: "gene-rectangles", }) // this will return all of the glyphs in "chart" // rendered with the selector: "gene-rectangles" let rectanglesInChart = queryGlyphMap({ chart: chart, selector: "gene-rectangles" }) // this will return all of the glyphs in every Chart // rendered with the selector: "gene-rectangles" let allRectangles = queryGlyphMap({ selector: "gene-rectangles" }) // this will return all of the glyphs in "chart" let allInChart = queryGlyphMap({ chart: chart, }) // this will return every glyph in every Chart let allGlyphs = queryGlyphMap({}) | https://sodaviz.readthedocs.io/en/latest/guide/rendering.html | 2022-09-25T08:25:53 | CC-MAIN-2022-40 | 1664030334515.14 | [] | sodaviz.readthedocs.io |
If you have questions, run into problems, or just want to tell us about your amazing new robot, please get in touch!
The best way to contact us is through Intercom, either in the Freedom App (click MENU → SHOW CHAT or type G, then C) or by clicking the blue bubble at the bottom of this page.
You can also email us at any time.
To help us help you, take a look at our best practices for:
Updated almost 3 years ago | https://docs.freedomrobotics.ai/docs/contact-us | 2022-09-25T08:32:20 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.freedomrobotics.ai |
ion_thermal_conductivity
- plasmapy.formulary.braginskii.ion ions.
The ion thermal conductivity (\(κ\)) of a plasma is defined by\[κ = \hat{κ} \frac{n_i k_B^2 T_i τ_i}{m_i}\]
where \(\hat{κ}\) is the non-dimensional ion thermal conductivity of the plasma, \(n_i\) is the ion number density of the plasma, \(k_B\) is the Boltzmann constant, \(T_i\) is the ion temperature of the plasma, \(τ_i\) is the fundamental ion collision period of the plasma, and \(m_i\) is the mass of an ion of the plasma. × cross-sectional area × temperature gradient. In laboratory plasmas, typically the energy is flowing out of your high-temperature plasma to something else, like the walls of your device, and you are sad about this.
- Return type
-
See also | https://docs.plasmapy.org/en/latest/api/plasmapy.formulary.braginskii.ion_thermal_conductivity.html | 2022-09-25T08:51:08 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.plasmapy.org |
How do I change the a sidebox title?
Where are the strings for sidebox titles?
Look in
includes/languages/YOURTEMPLATE/english.php.
Suppose you want the Categories sidebox to have a different title. Look for
define('BOX_HEADING_CATEGORIES', 'Categories');
Changes for other sideboxes will follow this pattern:
define('BOX_HEADING_XXXXXXXX', 'New Title');
Where
XXXXXXXX is the name of the sidebox.
Make the needed changes. Be sure that the single quote marks are not left. | https://docs.zen-cart.com/user/sideboxes/sidebox_title/ | 2022-09-25T09:09:20 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.zen-cart.com |
Contributor Guide
Ways to Contribute!
Every contribution helps to make cookiecutter-pytest-Plugin an even better template.
Contributions can be made in a number of ways:
- Write documentation for users or developers
- Submit an issue to propose a change or report a bug
- Code contributions via Pull Requests
- Review Pull Requests and provide constructive feedback
Any help is greatly appreciated and credit will always be given!
Documentation
If you feel like the plugin source code or the template itself needs more documentation, please submit a PR and make sure to write an according description.
One of the contributors will add a Documentation Label and review your changes!
The same applies to project documentation which is hosted at Read the Docs.
For additional information regarding docs, please see the Documentation Guide. | https://cookiecutter-pytest-plugin.readthedocs.io/en/latest/contributor-guide/quickstart/ | 2022-09-25T09:00:22 | CC-MAIN-2022-40 | 1664030334515.14 | [] | cookiecutter-pytest-plugin.readthedocs.io |
Working with shards
A shard (API/CLI: node group) is a collection of one to six Redis nodes. A Redis (cluster mode disabled) cluster will never have more than one shard. You can create a cluster with higher number of shards and lower number of replicas totaling up to 90 nodes per cluster. This cluster configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed. The cluster's data is partitioned across the cluster's shards. If there is more than one node in a shard, the shard implements replication with one node being the read/write primary node and the other nodes read-only replica nodes.
When you create a Redis (cluster mode enabled) cluster using the ElastiCache console, you specify the number of shards in the cluster and the number of nodes in the shards. For more information, see Creating a Redis (cluster mode enabled) cluster (Console). If you use the ElastiCache API or Amazon CLI to create a cluster (called replication group in the API/CLI), you can configure the number of nodes in a shard (API/CLI: node group) independently. For more information, see the following:
API: CreateReplicationGroup
CLI: create-replication-group
Each node in a shard has the same compute, storage and memory specifications. The ElastiCache API lets you control shard-wide attributes, such as the number of nodes, security settings, and system maintenance windows.
Redis shard configurations
For more information, see Offline resharding and shard rebalancing for Redis (cluster mode enabled) and Online resharding and shard rebalancing for Redis (cluster mode enabled).
Finding a shard's ID
You can find a shard's ID using the Amazon Web Services Management Console, the Amazon CLI or the ElastiCache API.
Using the Amazon Web Services Management Console
For Redis (Cluster Mode Disabled)
Redis (cluster mode disabled) replication group shard IDs are always
0001.
For Redis (Cluster Mode Enabled)
The following procedure uses the Amazon Web Services Management Console to find a Redis (cluster mode enabled)'s replication group's shard ID.
To find the shard ID in a Redis (cluster mode enabled) replication group
Sign in to the Amazon Web Services Management Console and open the ElastiCache console at
.
On the navigation pane, choose Redis, then choose the name of the Redis (cluster mode enabled) replication group you want to find the shard IDs for.
In the Shard Name column, the shard ID is the last four digits of the shard name.
Using the Amazon CLI
To find shard (node group) ids for either Redis (cluster mode disabled) or Redis (cluster mode enabled) replication
groups use the Amazon CLI operation
describe-replication-groups with the following
optional parameter.
--replication-group-id—An optional parameter which when used limits the output to the details of the specified replication group. If this parameter is omitted, the details of up to 100 replication groups is returned.
This command returns the details for
sample-repl-group.
For Linux, OS X, or Unix:
aws elasticache describe-replication-groups \ --replication-group-id
sample-repl-group
For Windows:
aws elasticache describe-replication-groups ^ --replication-group-id
sample-repl-group
Output from this command looks something like this. The shard (node group) ids are
highlighted here to make finding them easier.
{ "ReplicationGroups": [ { "Status": "available", "Description": "2 shards, 2 nodes (1 + 1 replica)", "NodeGroups": [ { "Status": "available", "Slots": "0-8191", "NodeGroupId": "
0001", "NodeGroupMembers": [ { "PreferredAvailabilityZone": "us-west-2c", "CacheNodeId": "0001", "CacheClusterId": "sample-repl-group-0001-001" }, { "PreferredAvailabilityZone": "us-west-2a", "CacheNodeId": "0001", "CacheClusterId": "sample-repl-group-0001-002" } ] }, { "Status": "available", "Slots": "8192-16383", "NodeGroupId": "
0002", "NodeGroupMembers": [ { "PreferredAvailabilityZone": "us-west-2b", "CacheNodeId": "0001", "CacheClusterId": "sample-repl-group-0002-001" }, { "PreferredAvailabilityZone": "us-west-2a", "CacheNodeId": "0001", "CacheClusterId": "sample-repl-group-0002-002" } ] } ], "ConfigurationEndpoint": { "Port": 6379, "Address": "sample-repl-group.9dcv5r.clustercfg.usw2.cache.amazonaws.com" }, "ClusterEnabled": true, "ReplicationGroupId": "sample-repl-group", "SnapshotRetentionLimit": 1, "AutomaticFailover": "enabled", "SnapshotWindow": "13:00-14:00", "MemberClusters": [ "sample-repl-group-0001-001", "sample-repl-group-0001-002", "sample-repl-group-0002-001", "sample-repl-group-0002-002" ], "CacheNodeType": "cache.m3.medium", "DataTiering": "disabled" "PendingModifiedValues": {} } ] }
To find shard (node group) ids for either Redis (cluster mode disabled) or Redis (cluster mode enabled) replication
groups use the Amazon CLI operation
describe-replication-groups with the following
optional parameter.
ReplicationGroupId—An optional parameter which when used limits the output to the details of the specified replication group. If this parameter is omitted, the details of up to
xxxreplication groups is returned.
This command returns the details for
sample-repl-group.
For Linux, OS X, or Unix: ?Action=DescribeReplicationGroup &ReplicationGroupId=sample-repl-group &Version=2015-02-02 &SignatureVersion=4 &SignatureMethod=HmacSHA256 &Timestamp=20150202T192317Z &X-Amz-Credential=<credential> | https://docs.amazonaws.cn/en_us/AmazonElastiCache/latest/red-ug/Shards.html | 2022-09-25T08:47:07 | CC-MAIN-2022-40 | 1664030334515.14 | [array(['images/ElastiCacheClusters-CSN-RedisShards.png',
'Image: Redis shard configurations.'], dtype=object)] | docs.amazonaws.cn |
Custom DevOps Setups for Developers#
If you are a Software Developer, there is a high probability, more than 90%, that you are not into DevOps practices and expertise. But, you still want to accomplish most of the benefits DevOps provides.
We created Torque by asking ourselves: How to give a self-service DevOps tool to a developer like you that
Does not lock you.
Is not a black box doing some auto-magical stuff.
It doesn’t require you to be a DevOps expert.
Gives you all needed control and flexibility.
Enables you to manage resources in a simple, intuitive way.
Helps you with everything you need help with (doesn’t tell you it is easy when it is not).
Grows with you as your needs grow.
I know, I know. This sounds impossible to achieve inside one single tool.
But we did it.
Torque works like this:
It gives you a Python code package that you install in your codebase.
This package contains all code necessary for automating your DevOps. No external dependencies.
All code has a form of perpetual, never-expiring license, so you are never separated from the code you depend on.
Of course, you can edit that code as much as you like, but …
If you are Torque’s customer, you have access to:
Professional advice for the system design and DevOps setup.
Torque development of new functionality or components per your requirements.
Onboarding support — we give you hands-on support for setting up all those cloud accounts, tokens, credentials, and permissions that are “easy” to set up :)
Ongoing support for bug fixes, security patches, and new releases.
Torque gives you a turn-key DevOps Setup, helps you set it up, and provides code upgrades with hands-on support.
Next, you can check the docs to get a level deeper insight into how the Torque Developer Experience (and it is impressive) looks. And you can check how Torque accomplishes that by bringing high-order programming language concepts (like injection, composition, and others) to describe and implement DevOps concepts makes a notable difference. Finally, we have an actual deployment infrastructure ABSTRACTION with a high level of code reusability.
Open-source Github repos#
⭐️ our GitHub repos, ask questions, post issues, or simply check what others are doing:
Torque CLI - torque-workspace
-
Stay in Touch#
Email us at [email protected] or visit our website to sign up.
Contents#
- Torque CLI Installation
- Simple Example
- Torque Explained | https://docs.torquetech.dev/ | 2022-09-25T07:41:30 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.torquetech.dev |
Troubleshoot Portworx on Kubernetes
Useful commands
List Portworx pods:
kubectl get pods -l name=portworx -n kube-system -o wide
Describe Portworx pods:
kubectl describe pods -l name=portworx -n kube-system
Get Portw -c Portworx logs
Please run the following commands on any one of the nodes running Portworx:
uname -a docker version kubectl version kubectl logs -n kube-system -l name=portworx -c
- Portworx container will fail to come up if it cannot reach etcd. For etcd installation instructions refer this doc.
- Portworx containers are in the host network.
Internal Kvdb
- In an event of a disaster where, internal kvdb is in an unrecoverable error state follow this doc to recover your Portworx cluster
The Portworx cluster
- Ports 9001 - 9022 must be open for internal network traffic between nodes running Portworx. Without this, Portw Portw -rand
yum install kernel-devel-`uname -r`
- If one of the Portw.
- Portworx containers. This prevents the create volume call to come to the Portworx API server.
- For Kubernetes versions 1.6.4 and before, Portworx may not running on the Kubernetes control plane node.
- For Kubernetes versions 1.6.5 and above, if you don’t have Portworx running on the control plane node, ensure that
- The
portworx-serviceKubernetes
Serviceis running in the
kube-systemnamespace.
- You don’t have any custom taints on the control plane node. Doing so will disallow kube-proxy from running on the control plane node.
PVC Controller
If you are running Portworx in AKS and run into port conflict in the PVC controller, you can overwrite the default PVC Controller ports using the
portworx.io/pvc-controller-port and
portworx.io/pvc-controller-secure-port annotations on the
StorageCluster object:
apiVersion: core.libopenstorage.org/v1 kind: StorageCluster metadata: name: portworx namespace: kube-system annotations: portworx.io/pvc-controller-port: "10261" portworx.io/pvc-controller-secure-port: "10262" ... that the the status of your Portworx cluster.
-. | https://docs.portworx.com/operations/operate-kubernetes/troubleshooting/troubleshoot-and-get-support/ | 2022-09-25T08:35:16 | CC-MAIN-2022-40 | 1664030334515.14 | [array(['/img/slack.png', 'Slack'], dtype=object)] | docs.portworx.com |
Third-Party Plugins
Custom Installations
Installation
FastComments is designed to be installed on any kind of page - static or dynamic, light themed or dark, public or internal pages. It should be easy to install and adapt to any kind of site or web based application.
Wordpress
You can find our WordPress plugin here.
This plugin supports live commenting, SSO, and no-code installation. Simply follow the installation guide in the admin page after installing the plugin. It will guide you through connecting your WordPress install to your account.
Any comments left with FastComments through our WordPress plugin will be automatically synced back to your WordPress install so that you retain control over your data.
VanillaJS
All versions of the FastComments widget are wrappers around the core VanillaJS library. This allows us to add features and fix issues in one place - and the changes automatically propagate to the other variants of the commenting widget.
The VanillaJS version of the widget is very easy to install, not requiring any build systems or server side code.
Simply add the following code snippet to any page:
You can find documentation on configuring it here.
Angular
You can find our Angular library on NPM here.
The FastComments Angular commenting widget supports all of the same features of the VanillaJS one - live commenting, sso, and so on.
You will need fastcomments-typescript, which is a peer dependency. Please ensure this is included in your TypeScript compilation. In the future, this peer dependency will be moved to @types/fastcomments which will simplify this installation.
The peer dependency should be added in your tsconfig.json file, for example:
Then, add the
FastCommentsModule to your application:
Usage
To get started, we pass a config object for the demo tenant:
Since the configuration can get quite complicated, we can pass in an object reference:
The widget uses change detection, so changing any properties of the configuration object will cause it to be reloaded.
You can find the configuration the Angular component supports here.
React
You can find our React library on NPM here.
The FastComments React React component supports here.
Vue
You can find our Vue library on NPM here.
Additionally, a vue-next library is on NPM here
The source code can be found on GitHub.
The FastComments Vue Vue component supports here.
TypeScript
Common Use Cases
Showing Live Comments Right Away
The comment widget is live by default, however live comments appear under a "Show N New Comments" button to prevent the page content from moving around.
In some cases, it's still desirable to show the new comments right away, without having to click a button.
In this case, you want to enable the
showLiveRightAway flag, which you can find documentation for here.
Allowing Anonymous Commenting (Not Requiring Email)
By default, FastComments requires the user to leave an email when they comment.
This can be disabled, instructions are here.
Custom Styling
Many of our customers apply their own styling to the comment widget. You can find the documentation here.
Showing The Same Comments on Multiple Domains
Showing the same comments on multiple sites is something FastComments supports out of the box. See our documentation on this subject.
Changing The Current Page
FastComments supports SPAs and complex applications. Changing the current page is easy, and covered here.
Debugging Common Issues
Here are some symptoms we see encountered frequently, and common solutions.
"This is a demo" Message
This is shown when you've copied the widget code from our home page, which uses our demo tenant. To use your tenant, copy the widget code from here.
"FastComments cannot load on this domain" Error
FastComments needs to know which domains are owned by you to authenticate requests associated with your account. Check out our documentation to see how to resolve this error (simply add the exact subdomain + domain to your account).
Migrated Comments Not Showing for Custom Installations
Usually this happens when the imported comments are tied to a
Page ID, and you are passing a URL
(or no value, in which case it defaults to the page URL).
You can debug this by exporting your comments and viewing the
URL ID column (currently Column
B).
Ensure the values you see in the
URL ID column are the same values you are passing to the widget
configuration as the
urlId parameter.
For further explanation, try reading our How Comments are Tied to Pages and Articles documentation.
If all else fails, reach out to us.
Comment Widget Not Showing
If the comment widget isn't showing, check the Chrome developer console for errors.
For most misconfiguration, the comment widget will at least show an error on the page if it is able to load. Seeing nothing is usually an indication of a scripting error.
Desired Configuration Not Working as Expected
Try our Chrome extension to see what configuration the comment widget is being passed. If all fails, take as screenshot of what the chrome extension says and reach out to us.
Not Receiving Emails
At FastComments, we put a lot of work into ensuring our delivery of emails is as reliable as possible. However, some email providers are notoriously difficult to deliver to reliably. Check your spam folder for messages from fastcomments.com.
If you reach out to us we can usually provide more insight into why you may not be seeing emails from us. | https://docs.fastcomments.com/guide-installation.html | 2021-07-23T21:44:16 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['/images/menu.png', 'Open Menu Menu Icon'], dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object) ] | docs.fastcomments.com |
Query Count Limits
Each controller or API endpoint is allowed to execute up to 100 SQL queries and in test environments we raise an error when this threshold is exceeded.
Solving Failing Tests
When a test fails because it executes more than 100 SQL queries there are two solutions to this problem:
- Reduce the number of SQL queries that are executed.
- Disable query limiting for the controller or API endpoint.
You should only resort to disabling query limits when an existing controller or endpoint is to blame as in this case reducing the number of SQL queries can take a lot of effort. Newly added controllers and endpoints are not allowed to execute more than 100 SQL queries and no exceptions are made for this rule. If a large number of SQL queries is necessary to perform certain work it’s best to have this work performed by Sidekiq instead of doing this directly in a web request.
Disable query limiting
In the event that you have to disable query limits for a controller, you must first
create an issue. This issue should (preferably in the title) mention the
controller or endpoint and include the appropriate labels (
database,
performance, and at least a team specific label such as
Discussion).
After the issue has been created, you can disable query limits on the code in question. For
Rails controllers it’s best to create a
before_action hook that runs as early
as possible. The called method in turn should call
Gitlab::QueryLimiting.disable!('issue URL here'). For example:
class MyController < ApplicationController before_action :disable_query_limiting, only: [:show] def index # ... end def show # ... end def disable_query_limiting Gitlab::QueryLimiting.disable!('...').disable!('...') # ... end | https://docs.gitlab.com/ee/development/query_count_limits.html | 2021-07-23T22:57:02 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.gitlab.com |
Generate Upload Token
First, navigate to the 'Automation Tasks' overview. (Admin settings panel > Services > Automation Tasks)
Select the 'data, Rest API endpoint' task and create a new 'Authentication token' for it.
Select the token and copy the ID displayed on the right.
Then scroll down to 'Application roles'. Expand it and check the 'Administrator' box.
If you want to change the token's lifespan (default is 3 months), you can do so in the collapsed 'Validity' field further down below the ID field.
Besides the token an URL is needed for the import. The URL is the base URL to your Symbio instance appended with '_api'. (For example "".)
You can also retrieve the URL from token's 'Information' section, just remove everything after '_api' from the URL.
The last thing we need is the GUID for the category or main process in Symbio we want the import to end up in.
Go to your Symbio installation and create a new category/main process or select an existing one and press
ctrl + alt + d.
Select the 'ContextKey' GUID from there and copy it.
The Token, the URL and GUID are added to the Tool using the Website.
Once all information is submitted, the page returns a new GUID. Copy it and distribute it to the users who want to import processes. (You can distribute one such import token to all users.) The user will be prompted to enter this token as final step in the import process.
| https://docs.symbioworld.com/admin/services/visio-importer/setup/ | 2021-07-23T21:04:43 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['../media/createAutomation.png', 'automation'], dtype=object)
array(['../media/categoryGUID.png', 'categoryGUID'], dtype=object)
array(['../media/guidGeneration.png', 'guidGeneration'], dtype=object)
array(['../media/guidCopy.png', 'guidCopy'], dtype=object)] | docs.symbioworld.com |
Shared guides
Guides can be shared by multiple forms. You define a shared guide like you define a guide for an individual form, except that you attach the guide to multiple forms. If you do not want the guide to be shared, select only one form. Also, share any active links in the shared guide that you want to execute on multiple forms. If a guide contains active links that do not belong to or are not shared with the current form, those active links are skipped when the guide is executed.
Note
Changes you make to shared active links affect all guides and forms that use them.
For more information about creating shared workflow, see Shared workflow.
The sequence of active links in the guide takes precedence over any execution condition previously defined for the active links. You can redirect the active links by using the Go to Guide Label action. If you are creating active links that are used only in a guide, do not include an Execute On condition in them. Both the condition and its execution order are ignored when the guide's active links are executed.
Guides use the following procedures when determining which form to run on:
- If the current form belongs to the guide, run the guide against that form.
- If the current form does not belong to the guide, open a new window with a reference form and run the guide.
Because a guide can be shared by one or more forms, it also can contain multiple active links from those forms. All active links associated with the selected forms appear in the list when you add an active link to the guide. (See Exit Guide action for more information.) As a result, you can access those active links only from the forms listed in the Form Name field. Active links that are not associated with a form that is associated with the guide do not execute.
If you select multiple forms, the guide is attached to all of them. The first form you select becomes the primary form. You can change the primary form by using the drop-down list.
Shared guide use cases
The following use cases illustrate shared guide behavior. Assume you have created guide X and shared it with forms A, B, and C.
Use case 1
- Create a Call Guide active link action for forms A, B, and C.
- Open form A and execute the Call Guide action.
Guide X is executed on form A.
Use case 2
- Create a Call Guide active link action for forms A, B, and C.
- Open form C and execute the Call Guide action.
Guide X is executed on form C.
Use case 3
- Designate Form B as the reference form.
- Create a Call Guide active link action for forms A, B, C, and D.
- Open form D and execute the Call Guide action.
A new window opens with the reference form B.
Guide X is executed on form B in a new window.
Use case 4
- Form A is designated as the reference form.
- Open guide X from the Open dialog box.
Guide X is executed on form A, the reference form. | https://docs.bmc.com/docs/ars81/shared-guides-225971253.html | 2021-07-23T23:15:14 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.bmc.com |
Sometimes you may wish to delete a product or one of its variations. Before deleting a product or variation, consider unpublishing the product or disabling the variation.
This will prevent the product variation from being displayed or purchased, but will leave it in the system should you want to re-enable it at a later point.
A product's variation is disabled while editing the product. Click on the variation's Edit button. Locate the Active checkbox on the Variation form and uncheck it. Click Update Variation to complete the process. The product variation will no longer be available to purchase on your site.
A product's variation is deleted while editing the product. Click on the variation's Remove button. A confirmation form will display. Click Remove once more to confirm.
A product can be deleted by editing it. At the bottom of the form there is a Delete link, which will display a confirmation form. Click Delete once more to confirm deletion. All variations will also be deleted.
Found errors? Think you can improve this documentation? edit this page | https://docs.drupalcommerce.org/commerce2/user-guide/products/delete-a-product | 2021-07-23T21:11:46 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.drupalcommerce.org |
The Dash is for selecting and viewing the customized graphical dashboards.
The dashboards overview is displayed when a container Node is selected in the Workspaces Tree. All dashboard Nodes within the Selected Node are shown. Click on a dashboard to select it for display. When viewing an individual Dashboard you can click the Overview button to get back to this view.
Dashboards contain user configurable tiles which can be linked to Nodes to display your important data in real-time with live updates. Tile layout is automatically adjusted on mobile devices.
Refer to Dashboard configuration for Creating and working with Dashboards. | https://docs.eagle.io/en/latest/topics/views/dash/index.html | 2021-07-23T21:50:47 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.eagle.io |
Silverlight 2, Beta 2 and the 2008 Olympics. | https://docs.microsoft.com/en-us/archive/blogs/jackg/silverlight-2-beta-2-and-the-2008-olympics | 2021-07-23T23:58:31 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.microsoft.com |
The Storage Status (SSTS) alarm is triggered if a Storage Node has insufficient free space remaining for object storage.
The SSTS (Storage Status) alarm is triggered at the Notice level when the amount of free space on every volume in a Storage Node falls below the value of the Storage Volume Soft Read Only Watermark ().
For example, suppose the Storage Volume Soft Read-Only Watermark is set to 10 GB, which is its default value. The SSTS alarm is triggered if less than 10 GB of usable space remains on each storage volume in the Storage Node. If any of the volumes has 10 GB or more of available space, the alarm is not triggered.
If an SSTS alarm has been triggered, you can follow these steps to better understand the issue.
In this example, only 19.6 GB of the 164 GB of space on this Storage Node remains available. Note that the total value is the sum of the Available values for the three object store volumes. The SSTS alarm was triggered because each of the three storage volumes had less than 10 GB of available space.
In this example, the total usable space dropped from 95% to just over 10% at approximately the same time.
For procedures on how to manage a full Storage Node, see the instructions for administering StorageGRID. | https://docs.netapp.com/sgws-115/topic/com.netapp.doc.sg-troubleshooting/GUID-3B66B7E9-E01E-4B28-8F6F-279DB8B2E628.html | 2021-07-23T22:07:50 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.netapp.com |
Sale Extra¶
The sale_extra module allows to add extra line on sale based on criteria.
The extra products are added when the sale goes into quotation but the added lines can be modified when going back to draft.
The criteria are defined by the Sale Extras of the Price List.
Sale Extra¶
Each matching Sale Extra is evaluated to add an extra line. The criteria are the fields:
- Price List
- Start/End Date
- Sale Amount: If untaxed sale amount is greater or equal (in the price list company currency).
Sale Extra Line¶
Once a Sale Extra is selected, its first line that match the line’s criteria is used to setup the extra line. The criteria are the fields:
- Sale Amount: If the untaxed sale amount is greater or equal (in the price list company currency).
The sale line is setup using the fields:
- Product: The product to use on the sale line.
- Quantity: The quantity to set on the sale line (in the Unit).
- Unit: The unit of measure to use.
- Free: Set unit price of the sale line to zero, otherwise to the sale price. | https://docs.tryton.org/projects/modules-sale-extra/en/5.0/ | 2021-07-23T22:03:13 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.tryton.org |
wrap-option
< css | properties
wrap-option
This article is Not Ready.
wrap-option (Obsolete)
For technical reasons, the title of this article is not the text used to call this API. Instead, use
wrap-option (Obsolete)
Summary
Obsolete and unsupported. Do not use.
This CSS property controls the text when it reaches the end of the block in which it is enclosed.
Overview table
Syntax
wrap-option: emergency
wrap-option: no-wrap
wrap-option: soft-wrap
wrap-option: wrap
Values
- wrap
- The text is wrapped at the best line-breaking opportunity (if required) within the available block inline-progression dimension
- no-wrap
- The text is only wrapped where explicitly specified by preserved line feed characters. In the case when lines are longer than the available block width, the overflow will be treated in accordance with the 'overflow' property specified in the element.
- soft-wrap
- The text is wrapped after the last character which can fit before the ending-edge of the line and where explicitly specified by preserved line feed characters. No line-breaking algorithm is invoked. The intended usage is the rendering of a character terminal emulation.
-.
Needs Examples: This section should include examples.
Notes
Could not find any documentation that this is still relevant, concur that it should be deleted.
Related specifications
See also
Other articles
See also: wrap-flow property | http://docs.webplatform.org/wiki/css/properties/wrap-option | 2016-09-24T22:51:44 | CC-MAIN-2016-40 | 1474738659512.19 | [] | docs.webplatform.org |
Document Type
Dissertation
Abstract
The city of New Orleans is in need of a symbol of stability and hope. The design proposal is for a New Orleans Water Treatment Facility that focuses on purification, reclamation and desalination processes. The facility will create a pure water supply that will be distributed into public circulation as well as be made readily available during natural disasters. A public works building such as this is important to the growth of the ever rebuilding city. In 2005, New Orleans was hit by Hurricane Katrina causing flooding and destruction to a great majority of the city. Once the hurricane subsided the excess flood waters, full of pollutants and sewage, were pumped into Lake Pontcharain, adjacent to the city, contaminating this water resource and killing much of its wildlife. The city needs a feeling of security for the future; its residents need a feeling of comfort while rebuilding.
Recommended Citation
Thornton, Nicholas, "Community Infrastructure: a transformation of the New Orleans industrial canal" (2010). Architecture Theses. Paper 55. | http://docs.rwu.edu/archthese/55/ | 2013-05-18T15:47:37 | CC-MAIN-2013-20 | 1368696382503 | [] | docs.rwu.edu |
Secure Shell (SSH) is a network protocol you can use to create a secure connection to a remote computer. Once you've made this connection, it's as if the terminal on your local computer is running on the remote computer. Commands you issue locally run on the remote computer and the output of those commands from the remote computer appears in your terminal window..
Using SSH to connect to the master node gives you the ability to monitor and interact with the cluster. You can issue Linux commands on the master node, run applications such as HBase, Hive, and Pig interactively, browse directories, read log files, and more.
To connect to the master node using SSH, you need the public DNS name of the master node..
To locate the public DNS name of the master node using the Amazon EMR console
In the Amazon EMR console, select the job from the list of running clusters in the
WAITING or
RUNNING state. Details about the cluster appear in the
lower pane.
The DNS name you used to connect to the instance is listed on the Description tab as Master public DNS name.
To locate the public DNS name of the master node using the CLI
If you have the Amazon EMR CLI installed, you can retrieve the public DNS name of the master by running the following command.
In the directory where you installed the Amazon EMR CLI, run the following from the command line. For more information, see the Command Line Interface Reference for Amazon EMR.
Linux, UNIX, and Mac OS X users:
./elastic-mapreduce --list
Windows users:
ruby elastic-mapreduce --list
This returns a list of all the currently active clusters in the following format. In the example below, ec2-204-236-242-218.compute-1.amazonaws.com, is the public DNS name of the master node for the cluster j-3L7WK3E07HO4H.
j-3L7WK3E07HO4H WAITING ec2-204-236-242-218.compute-1.amazonaws.com My Job Flow
OpenSSH is installed on most Linux, Unix, and Mac OS X operating systems. Windows users can use an application called PuTTY to connect to the master node. Following are platform-specific instructions for opening an SSH connection.
To configure the permissions of the keypair file using Linux/Unix/Mac OS X
Before you can use the keypair file to create an SSH connection, you must set do this, SSH returns an error saying that your private key file is unprotected and will reject the key. You only need to configure these permissions the first time you use the private key to connect.
To connect to the master node using Linux/Unix/Mac OS X
Open a terminal window. This is found at Applications/Utilities/Terminal on Mac OS X and at Applications/Accessories/Terminal on many Linux distributions.
Check that SSH is installed by running the following command. If SSH is installed, this command returns the SSH version number. If SSH is not installed, you'll need to install the OpenSSH package from a repository.
ssh -v
To establish the connection to the master node, enter the following
command line, which assumes the PEM file is in the
user's home directory. Replace
ec2-107-22-74-202.compute-1.amazonaws.com with the
Master public DNS name of your cluster and replace
~/mykeypair.pem with the location and filename of your
PEM file.
ssh hadoop@
ec2-107-22-74-202.compute-1.amazonaws.com-i
~/mykeypair.pem
A warning states that the authenticity of the host you are connecting to can't be verified.
Type
yes to continue.
Note
If you are asked to log in, enter
hadoop.
To connect to the master node using the CLI on Linux/Unix/Mac OS X
If you have the Amazon EMR CLI installed and have configured your credentials.json file so the "keypair" value is set to the name of the keypair you used to launch the cluster and "key-pair-file" value is set to the full path to your keypair .pem file, and the permissions on the .pem file are set to
og-rwx as shown in To configure the permissions of the keypair file using Linux/Unix/Mac OS X, and you have OpenSSH installed on your machine, you can open an SSH connection to the master node by issuing the following command. This is a handy shortcut for frequent CLI users. In the example below you would replace the red text with the cluster identifier of the cluster to connect to.
./elastic-mapreduce -j
j-3L7WK3E07HO4H--ssh
To close an SSH connection using Linux/Unix/Mac OS X
When you are done working on the master node, you can close the SSH connection using the
exit command.
exit. | http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-connect-master-node-ssh.html | 2013-05-18T15:37:52 | CC-MAIN-2013-20 | 1368696382503 | [] | docs.aws.amazon.com |
In this page and sub-pages, we try to put together some feature-by-feature comparison between Spring, Pico/Nano and Yan/Nuts.
This comparison is focused on "IOC container" functionalities. So toolkits such as Spring DAO, Spring MVC are not taken into consideration. A good IOC container should be able to plug&play these services any way.
Throughout the comparisons, I tried to be objective. Yet, due to my limited knowledge on the compared containers, it is certainly possible that some facts and comparisons are inaccurate or simply incorrect. I'll be grateful for any correction.
We'll start with a big fat comparison table:
Details:
- Annotation Support Compared
- AOP Compared
- Bean Maintainability Compared
- Bean Reusability Compared
- Expressiveness Compared
- JBoss Microcontainer Compared
- Lifecycle Compared
- Module System Compared
- Services Compared
- Sorcery Compared
- Syntax Compared
Created by benyu benyu
On Mon Nov 28 15:38:13 CST 2005
Using TimTam
3 CommentsHide/Show Comments
Dec 07, 2005
Paul-Michael Bauer
Spring supports both singleton and prototype beans.
Dec 07, 2005
benyu benyu
The following is copied from Spring Reference 3.4.1.2, which convinced me that "prototype" is not supported:".
Dec 07, 2005
benyu benyu
This is lifecycle-management related only, btw.
Spring does support prototype beans, but not the lifecycles of them. | http://docs.codehaus.org/display/YAN/Spring%2C+Pico+and+Nuts+Compared | 2013-05-18T15:15:40 | CC-MAIN-2013-20 | 1368696382503 | [] | docs.codehaus.org |
How do you assign a module to specific pages?
From Joomla! Documentation
Navigate. | http://docs.joomla.org/index.php?title=How_do_you_assign_a_module_to_specific_pages%3F&oldid=31166 | 2013-05-18T15:42:11 | CC-MAIN-2013-20 | 1368696382503 | [] | docs.joomla.org |
().().
See the documentation for domain_return_ok().
In addition to implementing the methods above, implementations of the CookiePolicy interface must also supply the following attributes, indicating which protocols should be used, and how. All of these attributes may be assigned to.
The most useful way to define a CookiePolicy class is by subclassing from DefaultCookiePolicy and overriding some or all of the methods above. CookiePolicy itself may be used as a 'null policy' to allow setting and receiving any and all cookies (this is unlikely to be useful).
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://docs.python.org/release/2.5/lib/cookie-policy-objects.html | 2013-05-18T16:09:50 | CC-MAIN-2013-20 | 1368696382503 | [] | docs.python.org |
2. Installation¶
CompoCoreBundle is just a bundle and as such, you can install it at any moment during a project’s lifecycle.
2.1. Download the Bundle¶
Open a command console, enter your project directory and execute the following command to download the latest stable version of this bundle:
$ composer require sonata-project/admin-bundle
This command requires you to have Composer installed globally, as explained in the installation chapter of the Composer documentation.
2.2. Enable the Bundle¶
Then, enable the bundle and the bundles it relies on by adding the following line in bundles.php file of your project:
<?php // config/bundles.php return [ //... Symfony\Bundle\SecurityBundle\SecurityBundle::class => ['all' => true], Sonata\CoreBundle\SonataCoreBundle::class => ['all' => true], Sonata\BlockBundle\SonataBlockBundle::class => ['all' => true], Knp\Bundle\MenuBundle\KnpMenuBundle::class => ['all' => true], Sonata\AdminBundle\SonataAdminBundle::class => ['all' => true], Compo\CoreBundle\CompoCoreBundle::class => ['all' => true], ];
Note
If you are not using Symfony Flex, you should enable bundles in your
AppKernel.php.
// app/AppKernel.php // ... class AppKernel extends Kernel { public function registerBundles() { $bundles = [ // ... // new Sonata\AdminBundle\SonataAdminBundle(), new Compo\CoreBundle\CompoCoreBundle(), ]; // ... } // ... }
Note
If a bundle is already registered, you should not register it again.
2.3. Preparing your Environment¶
As with all bundles you install, it’s a good practice to clear the cache and install the assets:
$ bin/console cache:clear $ bin/console assets:install
2.4. The Admin Interface¶
You’ve finished the installation process, congratulations. If you fire up the server, you can now visit the admin page on
Note
This tutorial assumes you are using the build-in server using the
bin/console server:start (or
server:run) command.
| https://compo-core.readthedocs.io/en/latest/getting_started/installation.html | 2020-03-28T14:39:35 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['../_images/getting_started_empty_dashboard.png',
'Sonata Dashboard :width: 700px'], dtype=object) ] | compo-core.readthedocs.io |
Description of the illustration message_validation.png
This is a runtime image of the CrcPizzaBot. This image shows a series of message. In the first, the bot outputs a message (What size do you want?) with a list of options (small, medium, large). The user’s response message, “groot” then prompts the bot to output the “size” message again, but this time, the message is: “Invalid size, please try again. What size do you want?” The next message from user is medium, a value from the list, so the bot moves to the next state using the message, “To which location do you want the pizza delivered.” | https://docs.cloud.oracle.com/en-us/iaas/digital-assistant/doc/img_text/message_validation.html | 2020-03-28T15:57:50 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.cloud.oracle.com |
getSituationProcesses
A Graze API GET request that returns a list of process names for a specified Situation.
Back to Graze API EndPoint Reference.
Request arguments
Endpoint
getSituationProcesses takes the following request arguments:
Response
Endpoint
getSituationProcesses returns the following response:
Successful requests return a JSON objectituationProcesses:
Request example
Example cURL request to return all the process names for Situation 473:
curl -G -u graze:graze -k -v "" --data-urlencode 'sitn_id=473'
Response example
Example response returning a list of all the Situation's process names:
{ "processes":[ "Knowledge Management", "Online Transaction Processing", "Web Content Management", "40GbE", "8-bit Unicode Transcoding Platform" ] } | https://docs.moogsoft.com/en/getsituationprocesses.html | 2020-03-28T15:09:38 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.moogsoft.com |
Resource Running Out 🔗
What this alert condition does 🔗
Alerts when a signal has a certain amount of time before it is exhausted (reaches a specified minimum value) or full (reaches a specified capacity).
When to use this alert condition 🔗
It is common to use a Static Threshold for many types of signals that trend to empty or capacity, such as alerting when memory utilization is 80% or when free disk space is below 10%. The Resource Running Out condition provides a more powerful way of receiving alerts for these types of signals, because it takes into account whether a signal is trending up or down (steadily rising or falling).
Example 🔗
You might have a detector that triggers an alert when memory utilization goes above 80%, so you have time to look into the issue before the your app is seriously affected. In fact, reaching 80% might only represent a problem if the value has been trending upward and is on a path to reaching 100%. A better way to monitor this signal would be to use the Resource Running Out alert condition, which alerts you when a signal is trending up or down.
In this case, you might say you want to be notified when the signal is expected to hit 80% in 15 minutes (trigger threshold of .25 hours) and has been in this state for 3 minutes (trigger duration of 3m). This will alert you in advance of the error condition, giving you more time to respond, but won’t send a false alert if the signal simply spikes to 80% and then quickly drops to a safer level.
Basic settings 🔗
* If you have specified a metric unit in the plot configuration panel for the signal, the value you enter for Capacity must match the unit you specified. For example, if you specified bytes, you would have to specify 100000000000 (a hundred billion) to specify 100 gigabytes.
Using the Duration option 🔗
The Trigger duration and Clear duration options are used to trigger or clear alerts based on how frequently the condition is satisfied during the specified time window. For this alert, the condition being evaluated concerns the forecasted number of hours left, and the forecast is extrapolated when data is missing. Therefore, in the alert when nearing empty case (for example), a short period of descent followed by a long period of missing data may result in an alert being triggered. | https://docs.signalfx.com/en/latest/detect-alert/alert-condition-reference/resource-running-out.html | 2020-03-28T15:34:38 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.signalfx.com |
Overview
RadCalendar provides the functionality to configure repeating appointments. The user has the ability to apply recurring scheduling patterns such as daily, weekly, monthly or set a range of recurrence from date to date. The flexible rule mechanism covers the most common recurrence scenarios. Furthermore, you also have the option to handle the exceptions from this rule.
The purpose of this overview is to give you a straight-forward way how to create and apply a recurrence pattern, rule and exception. If you want to dive deeper into the recurrence feature of the RadCalendar, check out the following topics:
RadCalendar includes support for recurring events on daily, weekly, monthly and yearly basis. Exceptions to the recurrence rules are also permitted. To support this recurrence behavior, the Telerik.XamarinForms.Input.Appointment class includes the RecurrenceRule property. When an appointment is promoted into a recurring event its RecurrenceRule is set with correct RecurrencePattern.
If the user modifies an individual appointment occurrence, an exception is created. This exception is added to the RecurrenceRule of the master appointment along with its specific date.
Consider the following example:
- Create a sample appointment that starts at 11:00 AM and lasts half an hour:
var date = DateTime.Today; var appointment = new Appointment() { Title = "Daily appointment", StartDate = date.AddHours(11), EndDate = date.AddHours(11).AddMinutes(30), Color = Color.Tomato };
- Create a daily recurrence pattern, that specifies a limit of 5 occurrences for the appointment:
var pattern = new RecurrencePattern() { Frequency = RecurrenceFrequency.Daily, DaysOfWeekMask = RecurrenceDays.EveryDay, MaxOccurrences = 5 };
- Set the recurrence rule to appointment:
appointment.RecurrenceRule = new RecurrenceRule(pattern);
- Add exception date to the recurrence rule:
appointment.RecurrenceRule.Exceptions.Add(new ExceptionOccurrence() { ExceptionDate = date.AddDays(1) });
- Create an exception appointment:
var exceptionAppointment = new Appointment() { Title = appointment.Title, StartDate = appointment.StartDate.AddDays(3).AddHours(1), EndDate = appointment.EndDate.AddDays(3).AddHours(1), Color = appointment.Color }; appointment.RecurrenceRule.Exceptions.Add(new ExceptionOccurrence() { ExceptionDate = date.AddDays(3), Appointment = exceptionAppointment });
Finally when you add the created appointment to the AppointmentsSource of RadCalendar, you'll get the following generated appointments:
| https://docs.telerik.com/devtools/xamarin/controls/calendar/recurrence/calendar-recurrence-overview | 2020-03-28T15:54:26 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['../images/calendar_recurrence_overview.png',
'Recurrent Appointments'], dtype=object) ] | docs.telerik.com |
Uploading¶
Files could be uploaded in three ways - as RAW BODY, as POST form field and as URL from existing resource in the internet.
From external resource by URL¶
POST /repository/image/add-by-url?_token=some-token-there { "fileUrl": "", "tags": [], "public": true }
In RAW BODY¶
POST /repository/file/upload?_token=some-token-here&fileName=heart.png < some file content there instead of this text >
Notes:
- Filename will have added automatically the content hash code to make the record associated with file content (eg. heart.png -> 5Dgds3dqheart.png)
- Filename is unique, same as file
- If file already exists under other name, then it’s name will be returned (deduplication mechanism)
In a POST form field¶
POST /repository/file/upload?_token=some-token-here&fileName=heart.png Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW ------WebKitFormBoundary7MA4YWxkTrZu0gW Content-Disposition: form-data; name="file"; filename="" Content-Type: image/png ------WebKitFormBoundary7MA4YWxkTrZu0gW-- ... file content some where ... | https://file-repository.docs.riotkit.org/en/v2.0.6/domain/storage/uploading.html | 2020-03-28T14:32:14 | CC-MAIN-2020-16 | 1585370491998.11 | [] | file-repository.docs.riotkit.org |
This_cluster_nodestable will be used (and appropriate repository implementation will be used); alternatively it’s possible to use
eventbussource;
db-url- a JDBC connection URI which should be used to query redirect information; if not configured
user-db-uriwill be used;
get-all-data-query- a SQL helper query which should return all redirection data from database;
get-all-query-timeout- allows to set timeout for executed queries;
fallback-redirection-host- if there is no redirection information present (i.e. secondary hostname is not configured for the particular node) redirection won’t be generated; with this it’s possible to configure fallback redirection address.
All options } } | http://docs.tigase.net.s3-website-us-west-2.amazonaws.com/tigase-server/stable-snapshot/Administration_Guide/webhelp/_seeotherhostdualip.html | 2020-03-28T15:27:30 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.tigase.net.s3-website-us-west-2.amazonaws.com |
TOPICS×
Generating personalized PDF documents
About variable PDF documents
Adobe:
To generate dynamic tables or include images via a URL, you need to follow a specific process.
Generating dynamic tables:
Inserting external images:
- Insert the call to the personalization block: <%@ include view="blockname" %> .
- Insert your content (personalized or not) into the body of the file.
:
- The Adobe Campaign code of the personalization fields for which the "open" and "closed" chevrons must be replaced with escape characters (respectively < and > ).
- The entire OpenOffice XML code will be copied into the OpenOffice document.:
| https://docs.adobe.com/content/help/en/campaign-classic/using/sending-messages/personalizing-deliveries/generating-personalized-pdf-documents.html | 2020-03-28T15:42:41 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['/content/dam/help/campaign-classic.en/help/delivery/using/assets/s_ncs_pdf_simple.png',
None], dtype=object)
array(['/content/dam/help/campaign-classic.en/help/delivery/using/assets/s_ncs_open_office_blocdeperso.png',
None], dtype=object)
array(['/content/dam/help/campaign-classic.en/help/delivery/using/assets/s_ncs_pdf_result.png',
None], dtype=object) ] | docs.adobe.com |
.
A mount point provides AWS Elemental Delta access to a remote server. The following are some examples for why Delta might need to access a remote server:
For storage: In input filters, you must specify a storage location for the content associated with the input filter. If this location is a remote folder, it must be mounted to the Delta node.
Both the leader and secondary node should mount the same remote storage folders to ensure that the storage stays the same regardless of which node is the leader.
For database backup: We strongly recommend that you back up the database to a remote server. This server must be mounted to the Delta node.
Each node in the cluster should mount a different database backup storage folder: the nodes should not be storing to the same remote server location.
Remote servers are always mounted to this location on the AWS Elemental Delta node:
/data/mnt/<folder>.
When you mount a remote folder to a local mount folder, all of the contents of the remote folder appear in this mount folder, as if the contents were actually in this folder on the local hardware unit. In this way, you can view the folder, and, for example, verify that backup files have been created.
To add mount points
On the web interface for the node, choose Settings > Mount Points.
Click Add Mount Point.
In the Add New Mount Point dialog, complete the dialog. The following table describes the settings.
Click Create and wait a few minutes. The newly mounted folder appears on the screen. | https://docs.aws.amazon.com/elemental-delta/latest/configguide/mount-points.html | 2020-03-28T15:53:29 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.aws.amazon.com |
This is a screen capture of the Load Batch dialog. This dialog has a close button (x) in the upper right of its title bar. In this screen capture, the dialog displays message indicating that a log file (a CSV file) has been uploaded and has been validated. The dialog includes an option called Maximum number of concurrent tests, which has a number selector. Below this and to the right, is the Test button. | https://docs.cloud.oracle.com/en-us/iaas/digital-assistant/doc/img_text/load_batch_dialog.html | 2020-03-28T14:33:52 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.cloud.oracle.com |
@Generated(value="OracleSDKGenerator", comments="API Version: 20190801") public final class Erratum extends Object
Details about the erratum.
Note: Objects should always be created or deserialized using the
Erratum.Builder. This model distinguishes fields that are
null because they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
Errat","synopsis","issued","description","updated","advisoryType","from","solution","references","affectedInstances","relatedCves","softwareSources","packages"}) @Deprecated public Erratum(String name, String id, String compartmentId, String synopsis, String issued, String description, String updated, UpdateTypes advisoryType, String from, String solution, String references, List<Id> affectedInstances, List<String> relatedCves, List<Id> softwareSources, List<SoftwarePackageSummary> packages)
public static Erratum.Builder builder()
Create a new builder.
public String getName()
Advisory name
public String getId()
OCID for the Erratum.
public String getCompartmentId()
OCID for the Compartment.
public String getSynopsis()
Summary description of the erratum.
public String getIssued()
date the erratum was issued
public String getDescription()
Details describing the erratum.
public String getUpdated()
most recent date the erratum was updated
public UpdateTypes getAdvisoryType()
Type of the erratum.
public String getFrom()
Information specifying from where the erratum was release.
public String getSolution()
Information describing how the erratum can be resolved.
public String getReferences()
Information describing how to find more information about the erratum.
public List<Id> getAffectedInstances()
list of managed instances to this erratum
public List<String> getRelatedCves()
list of CVEs applicable to this erratum
public List<Id> getSoftwareSources()
list of Software Sources
public List<SoftwarePackageSummary> getPackages()
list of Packages affected by this erratum
public Set<String> get__explicitlySet__()
public boolean equals(Object o)
equalsin class
Object
public int hashCode()
hashCodein class
Object
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.12.3/com/oracle/bmc/osmanagement/model/Erratum.html | 2020-03-28T15:55:24 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.cloud.oracle.com |
Running PhraseExpander in applications that require admin privileges
To detect the typed keystrokes in an application, PhraseExpander must have the same (or higher) privileges of that application you want to detect keystrokes in.
If you are typing inside an application that has Admin privileges (this can happen with remote or Citrix applications, sometimes), PhraseExpander will not be able to detect what you are typing there, and will show a little warning message.
How to launch PhraseExpander as admin
You can force PhraseExpander to run as admin, thus allowing it to work in the applications that have been run with admin privileges.
To launch PhraseExpander as admin
- 1
Make sure that PhraseExpander is not running. Close it by clicking on File → Exit.
- 2
Find the PhraseExpander icon (on the Desktop or in the Programs menu)
- 3
Right-click on it and click on Run as admin
How to execute PhraseExpander as admin at startup
You cannot set PhraseExpander to automatically run as Admin by default, but if you are tech savvy you can check this article that explains a workaround. | https://docs.phraseexpander.com/article/97-running-phraseexpander-as-admin | 2020-03-28T14:33:18 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5738e180c6979143dba7890f/images/5cff73572c7d3a493e7985fe/file-LRCgxKeoiu.png',
'PhraseExpander is not able to detect abbreviations in the application'],
dtype=object) ] | docs.phraseexpander.com |
Energy UK comment on the Scottish Government's draft Energy Strategy
Following the publication of the Scottish Government's draft Energy Strategy, Energy UK issued the following statement.
Lawrence Slade, chief executive of Energy UK, said:
“We welcome the release of the Scottish Government’s Energy Strategy which rightly identifies the urgent need for a plan to decarbonise our heat and transport sectors and the impact that this will have on future power demand.
“Scotland has been a world leader in reducing its emissions and continues to set the pace with an ambitious new Climate Change Bill expected later this year.
“I look forward to meeting with the Scottish Government this week to discuss the potential for the UK to become a hub for low carbon technologies which can offer growth in other sectors through a strong supply chain and the potential for international exports creating thousands of jobs up and down the country.” | https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/370-2017/6028-energy-uk-comment-on-the-scottish-government-s-draft-energy-strategy.html | 2020-03-28T14:04:02 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.energy-uk.org.uk |
Using the Azure Cloud Shell window
This document explains how to use the Cloud Shell window.
Swap between Bash and PowerShell environments
Use the environment selector in the Cloud Shell toolbar to swap between Bash and PowerShell environments.
Restart Cloud Shell
Click the restart icon in the Cloud Shell toolbar to reset machine state.
Warning
Restarting Cloud Shell will reset machine state and any files not persisted by your Azure file share will be lost.
Change the text size
Click the settings icon on the top left of the window, then hover over the "Text size" option and select your desired text size. Your selection will be persisted across sessions.
Change the font
Click the settings icon on the top left of the window, then hover over the "Font" option and select your desired font. Your selection will be persisted across sessions.
Upload and download files
Click the upload/download files icon on the top left of the window, then select upload or download.
- For uploading files, use the pop-up to browse to the file on your local computer, select the desired file, and click the "Open" button. The file will be uploaded into the
/home/userdirectory.
- For downloading file, enter the fully qualified file path into the pop-up window (i.e., basically a path under the
/home/userdirectory which shows up by default), and select the "Download" button.
Note
Files and file paths are case sensitive in Cloud Shell. Double check your casing in your file path.
Open another Cloud Shell window
Cloud Shell enables multiple concurrent sessions across browser tabs by allowing each session to exist as a separate process.
If exiting a session, be sure to exit from each session window as each process runs independently although they run on the same machine.
Click the open new session icon on the top left of the window. A new tab will open with another session connected to the existing container.
Cloud Shell editor
- Refer to the Using the Azure Cloud Shell editor page.
Web preview
Click the web preview icon on the top left of the window, select "Configure", specify the desired port to open. Select either "Open port" to only open the port, or "Open and browse" to open the port and preview the port in a new tab.
Click the web preview icon on the top left of the window, select "Preview port ..." to preview an open port in a new tab. Click the web preview icon on the top left of the window, select "Close port ..." to close the open port.
Minimize & maximize Cloud Shell window
Click the minimize icon on the top right of the window to hide it. Click the Cloud Shell icon again to unhide.
Click the maximize icon to set window to max height. To restore window to previous size, click restore.
Copy and paste
- Windows:
Ctrl-cto copy is supported but use
Shift-insertto paste.
- FireFox/IE may not support clipboard permissions properly.
- Mac OS:
Cmd-cto copy and
Cmd-vto paste.
Resize Cloud Shell window
Click and drag the top edge of the toolbar up or down to resize the Cloud Shell window.
Scrolling text display
Scroll with your mouse or touchpad to move terminal text.
Exit command
Running
exit terminates the active session. This behavior occurs by default after 20 minutes without interaction.
Next steps
Bash in Cloud Shell Quickstart
PowerShell in Cloud Shell Quickstart
Feedback | https://docs.microsoft.com/en-us/azure/cloud-shell/using-the-shell-window | 2020-03-28T16:13:05 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['media/using-the-shell-window/env-selector.png',
'Select environment'], dtype=object)
array(['media/using-the-shell-window/restart.png', 'Restart Cloud Shell'],
dtype=object)
array(['media/using-the-shell-window/text-size.png', 'Text size'],
dtype=object)
array(['media/using-the-shell-window/text-font.png', 'Font'], dtype=object)
array(['media/using-the-shell-window/uploaddownload.png',
'Upload/download files'], dtype=object)
array(['media/using-the-shell-window/newsession.png', 'Open new session'],
dtype=object)
array(['media/using-the-shell-window/preview.png', 'Web preview'],
dtype=object)
array(['media/using-the-shell-window/preview-configure.png',
'Configure port'], dtype=object)
array(['media/using-the-shell-window/preview-options.png',
'Preview/close port'], dtype=object)
array(['media/using-the-shell-window/minmax.png',
'Minimize or maximize the window'], dtype=object)] | docs.microsoft.com |
Save extra keystrokes with Smart spacing
SmartSpace automatically inserts a space character when required, after you have executed a template.
This is especially useful when using PhraseExpander to autocomplete terms or insert short blocks of text, or triggering many abbreviations in sequence (which is a common scenario for transcriptionists).
How Smart spacing works
- 1
Execute your template normally by typing the abbreviation followed by the SHIFT key (or your confirmation key). The template will expand normally.
- 2
After the template has been expanded, do not type the SPACE character but continue typing the beginning of the next word. PhraseExpander will automatically add the required space for you. So, one extra character saved! (in the example below, PhraseExpander automatically inserts the space character, as you start typing can).
To activate SmartSpace
- 1
Click on File → Options
- 2
Click the Execution and check Smart spacing and then OK to save. | https://docs.phraseexpander.com/article/68-save-extra-keystrokes-with-smartspace | 2020-03-28T15:30:52 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.phraseexpander.com |
Dragon cannot type directly inside the Input Form
Starting from PhraseExpander 5, if you are dictating with Dragon inside the Input form with Dragon, the dictated text doesn't appear directly inside the PhraseExpander Input Form but in the Dragon Dictation Box.
This happens with the applications that are not natively supported by Dragon. You can dictate (in a separate box) but you cannot edit the text, once it's inserted into PhraseExpander.
To enable Dragon Full-Text Control in PhraseExpander
You need to activate Dragon Full-Text control to dictate inside PhraseExpander and get a better experience (like you had in PhraseExpander 4).
To activate it, you need to reinstall Dragon with the option
TEXT_SERVICE_SUPPORT=1
This enables Dragon native support for WPF applications. Don't ask me why, but Dragon doesn't support them by default when it's installed.
The procedure takes just a few minutes and then you'll be able to enjoy a native dictation experience in PhraseExpander.
- 1
Open a command prompt with administrative rights. Type cmd and when you see Command prompt in the results, right-click on it and choose Run as typing cmd
- 2
- Navigate to the path containing the Dragon setup file (usually in your temp folder). If you don't remember, you can download your setup again and check in which folder it's installed.
- 3
- Run the Dragon MSI installer file (called Dragon Naturally Speaking 13.msi for the v13) in the installation folder by typing the following instruction and choose the Repair installation option
C:\Users\Andrea\AppData\Local\Temp\NaturallySpeaking\>MSIEXEC /i "Dragon NaturallySpeaking 13.msi" TEXT_SERVICE_SUPPORT=1
where:
C:\Users\Andrea\AppData\Local\Temp\NaturallySpeaking is the installation folder
Dragon NaturallySpeaking 13.msi is the setup folder
- 4
Reboot your machine
Once you have done that, you can dictate and edit directly inside PhraseExpander
| https://docs.phraseexpander.com/article/91-dragon-cannot-type-input-form | 2020-03-28T15:04:50 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5738e180c6979143dba7890f/images/5c83c9350428633d2cf362f3/file-sgzP3nVhSY.gif',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5738e180c6979143dba7890f/images/5c83c9b30428633d2cf362f6/file-y9rPRlSQvQ.gif',
None], dtype=object) ] | docs.phraseexpander.com |
Turning a draft will rotate the draft so that the threading becomes the treadling and vica-versa.
Suppose you wanted to weave the following drat using one shuttle, instead of alternating the two colors in the weft. Note that this draft using 10 treadles.
Clicking the Turn Draft Icon one time:
Yields the following:
Now the draft requires 10 shafts and uses 8 treadles. Look at the tie-up, it also has turned accordingly. | https://docs.tempoweave.com/tools/turn-draft | 2020-03-28T14:12:51 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.tempoweave.com |
Fluentd
Fluentd is an open source data collector that supports different formats, protocols, and customizable plugins for reading and writing log streams.
- If you have data in Fluentd, we recommend using the Unomaly plugin to forward that data directly to a Unomaly instance for analysis.
- As a fallback option for data ingestion, Unomaly also runs with Fluentd pre-installed on the instance. This topic helps you to configure Fluentd on the instance to receive data from other Fluentd, Docker, or syslog.
The Unomaly plugin for Fluentd
Send fluent records to the Unomaly ingestion API with the Unomaly plugin for Fluentd:
Sending data to Fluentd on Unomaly
1. Fluentd listens on port 24224, which is the default fluentd forwarder port. This means that the port needs to be accessible through the firewall. See "Edit network and communication settings".
2. The main Fluentd configuration file on your Unomaly instance is located at /DATA/fluentd/etc/fluent.conf.
Do not make any changes to this file, because it will be overwritten on upgrades. Instead, add your configurations to /DATA/fluentd/etc/conf.d as separate files.
Refer to Fluentd's documentation for more descriptions of their configuration options:.
3. After you make any changes to the configuration file, you need to restart Fluentd:
unomaly restart fluentd
You can view the stdout/log of Fluentd by running:
unomaly logs fluentd
Or, you can run the following for tail-mode viewing:
unomaly logs fluentd -f
Receive data from other Fluentd or Docker
The following configuration example receives data forwarded from other Fluentd installations or Docker.
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<filter **.docker.**>
@type record_transformer
<record>
hostname "${tag_parts[2]}"
</record>
</filter>
<match **.docker.**>
@type unomaly
host
flush_interval 1s
source_key hostname
message_key log
accept_self_signed_certs true
</match>
Specifically in this case, the following statements are declared:
source
- @type forward means that this plugin is mainly used to receive event logs from other Fluentd instances, the fluent-cat command, or client libraries. This is by far the most efficient way to retrieve the records.
- port 24224 designates the port on which fluentd should listen for data.
- bind 0.0.0.0 means that it will listen to any network interface.
filter
- filter **.docker.** means that messages are transformed.
- In this case we re-write the hostname in the <record>-block.
- tag_parts[2] means to against the second index of the tag in the sender’s message.
match
- match **.docker.** means that messages from docker matched.
- @type unomaly means that fluentd will look for a file called out_unomaly.rb in /DATA/fluentd/plugins and pass log data into it. The out_unomaly.rb plugin will ingest data into Unomaly.
Receive standard syslog data
The following configuration example receives standard syslog data and ingests them into Unomaly.
<source>
@type syslog
@label @mystream
port 51400
bind 0.0.0.0
tag system
</source>
<label @mystream>
<match system.**>
@type unomaly
host
flush_interval 1s
source_key hostname
message_key log
accept_self_signed_certs true
</match>
</label>
You may add the following source section to switch protocols to TCP syslogs (by default, UDP is used):
protocol_type tcp
If you have problems receiving syslog messages, it may be the format doesn't match. You can change the format in the source section using message_format:
message_format auto
message_format rfc3164
message_format rfc5424
Installing Fluentd plugins
You can install Fluentd plugins by using unomaly_fluent_gem:
unomaly restart fluentd
For a list of available commands run:
unomaly logs fluentd
After installing new plugins you need to restart the fluentd service to be able to use them in a configuration:
unomaly restart fluentd
You can write your own plugins or find existing ones for Fluentd and save them into /DATA/fluentd/plugins:
- Make sure that they are registers with the name you use in the fluentd.conf
- Make sure that the source-code registers them in the same name
- Make sure that the file-name matches and has the correct type (such as out_ for output type plugins)
Debugging
To see whether data comes into fluentd at all, you can use for example:
<match **>
@type stdout
</match>
This will print the message on the stdout of the running fluentd process.
You can use for instance fluent-cat (a fluentd tool) or simply logger (a standard Linux syslog tool) to produce log message input for fluentd.
1. Example of using fluent-cat:
echo '{"message":"hello world"}' | fluent-cat --host 10.164.0.7 --port 24224 --format json hello
2. Example of using logger:
logger --server 10.164.0.7 --port 51400 '<6>Jan 13 12:34:56 sunworld myapp[31802]:
[info] test logline in rfc3164' | https://docs.unomaly.com/integration-fluentd | 2020-03-28T15:33:32 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.unomaly.com |
libp2p makes it simple to establish encrypted, authenticated communication channels between two peers, but there are other important security issues to consider when building robust peer-to-peer systems.
Many of the issues described here have no known “perfect solution,” and the solutions and mitigation strategies that do exist may come with tradeoffs and compromises in other areas. As a general-purpose framework, libp2p tries to provide the tools for application developers to address these problems, rather than taking arbitrary approaches to security that may not be acceptable to all systems built with libp2p.
Another aspect to consider is that the fact that a particular type of attack is theoretically feasible, does not automatically imply that it is practical, sensible, worthwhile, or efficient to carry out. To evaluate the actual exploitability of a theoretical attack vector, consider the volume, class, and cost of resources that the attacker would have to expend for their attack to reasonably succeed.
Every libp2p peer is uniquely identified by their peer id, which is derived from a private cryptographic key. Peer ids and their corresponding keys allow us to authenticate remote peers, so that we can be sure we’re talking to the correct peer and not an imposter.
However, authentication is generally only half of the “auth” story when it comes to security. Many systems will also require authorization, or the ability to determine “who is allowed to do what.”
libp2p does not provide an authorization framework “out of the box”, since the requirements vary widely across peer-to-peer systems. For example, some networks may not need authorization at all and can simply accept requests from any peer, while others may need to explicitly grant fine-grained permissions based on a hierarchy of roles, the requested resources or services, etc.
To design an authorization system on libp2p, you can rely on the authentication of peer ids and build an association between peer ids and permissions, with the peer id serving the same function as the “username” in traditional authorization frameworks, and the peer’s private key serving as the “password”. Your protocol handler could then reject requests from untrusted peers.
Of course, it’s also possible to build other kinds of authorization systems on libp2p that are not based on peer ids. For example, you may want a single libp2p peer to be usable by many human operators, each with a traditional username and password. This could be accomplished by defining an authorization protocol that accepts usernames and passwords and responds with a signed token if the credentials are valid. Protocols that expose sensitive resources could then require the token before allowing access.
Systems that are designed to be fully decentralized are often “open by default,” allowing any peer to participate in core functions. However, such systems may benefit from maintaining some kind “reputation” system to identify faulty or malicious participants and block or ignore them. For example, each peer could assign scores to other peers based on how useful and “correct” their behavior is according to the design of the protocol, taking the score into account when deciding whether to handle a given request.
A fully decentralized reputation management system, in which peers collaborate to evaluate each other, is outside the scope of libp2p. However, many of libp2p’s core developers and community members are excited by research and development in this area, and would welcome your thoughts on the libp2p forums.
Some of libp2p’s most useful built-in protocols are cooperative, leveraging other peers in the network to perform tasks that benefit everyone. For example, data stored on the Kad-DHT is replicated across the set of peers that are “closest” to the data’s associated key, whether those peers have any particular interest in the data or not.
Cooperative systems are inherently susceptible to abuse by bad actors, and although we are researching ways to limit the impact of such attacks, they are possible in libp2p today.
The Kad-DHT protocol is a distributed hash table that provides a shared key/value storage system for all participants. In addition to key/value lookups, the DHT is the default implementation of libp2p’s peer routing and content routing interfaces, and thus serves an important role in discovering other peers and services on the network.
DHTs, and p2p systems in general are vulnerable to a class of attacks called Sybil attacks, in which one operator spins up a large number of DHT peers with distinct identities (generally called “Sybils”) to flood the network and gain an advantageous position.
A DHT query may need to be routed through several peers before completion, each of which has the opportunity to modify query responses, either by returning incorrect data or by not returning data at all. By controlling a large number of Sybil nodes (in proportion to the size of the network), a bad actor increases the probability of being in the lookup path for queries. To target a specific key, they could improve their chances of being in the lookup path further by generating IDs that are “close” to the target key according the DHT’s distance metric.
Applications can guard against modification of data by signing values that are stored in the DHT, or by using content addressing, where a cryptographic hash of the stored value is used as the key, as in IPFS. These strategies allow you to detect if the data has been tampered with, however, they cannot prevent tampering from occurring in the first place, nor can they prevent malicious nodes from simply pretending the data doesn’t exist and omitting it entirely.
Very similar to Sybil attacks, an Eclipse attack also uses a large number of controlled nodes, but with a slightly different goal. Instead of modifying data in flight, an Eclipse attack is targeted at a specific peer with the goal of distorting their “view” of the network, often to prevent them from reaching any legitimate peers (thus “eclipsing” the real network). This kind of attack is quite resource-intensive to perform, requiring a large number of malicious nodes to be fully effective.
Eclipse and Sybil attacks are difficult to defend against because it is possible to generate an unlimited number of valid peer ids. Many practical mitigations for Sybil attacks rely on making ID generation “expensive” somehow, for example, by requiring a proof-of-work with real-world associated costs, or by “minting” and signingIDs from a central trusted authority. These mitigations are outside the scope of libp2p, but could be adopted at the application layer to make Sybil attacks more difficult and/or prohibitively expensive.
We are currently planning to implement a strategy of querying multiple disjoint lookup paths (paths that do not share any common intermediary peers) in parallel, inspired by the [S/Kademlia paper][paper-s-kademlia]. This will greatly increase the chances of finding “honest” nodes, even if some nodes are returning dishonest routing information.
libp2p’s publish/subscribe protocol allows a peer to broadcast messages to other peers within a given “topic.”
By default, the
gossipsub implementation will sign all messages with the
author’s private key, and require a valid signature before accepting or
propagating a message further. This prevents messages from being altered in
flight, and allows recipients to authenticate the sender.
However, as a cooperative protocol, it may be possible for peers to interfere with the message routing algorithm in a way that disrupts the flow of messages through the network.
We are actively researching ways to mitigate the impact of malicious nodes on
gossipsub’s routing algorithm, with a particular focus on preventing Sybil
attacks. We expect this to lead to a more robust and attack-resistant pubsub
protocol, but it is unlikely to prevent all classes of possible attack by
determined bad actors. | https://docs.libp2p.io/concepts/security-considerations/ | 2020-03-28T14:31:09 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.libp2p.io |
Jump Start class: New date added for Exam 70-659: Windows Server 2008 R2, Server Virtualization course
Posted by the Microsoft Learning Jump Start Team:
- Course: Exam 70-659: Windows Server 2008 R2, Server Virtualization Jump Start
- Target Audience: IT Pros with some Microsoft Virtualization experience who want to be sure they're ready to pass exam 70-659
- Instructors: Microsoft Technical Evangelist, Symon Perriman & Microsoft Technical Instructor, Philip Helsel
- Cost: $99 (includes an exam voucher valued at $150 and valid at any Prometric testing center worldwide)
- Date: Thursday, December 1, 2011
- Time: Starts at 22:00 PST (10:00 PM - 06:00 AM PST). Targeting APAC and EMEA time zones..
Coming from VMware to Microsoft Virtualization?
Check out the “Microsoft Virtualization for VMware Professionals” Jump Start delivered earlier this year, Symon Perriman and Corey Hynes were a hit! | https://docs.microsoft.com/en-us/archive/blogs/microsoft_press/jump-start-class-new-date-added-for-exam-70-659-windows-server-2008-r2-server-virtualization-course | 2020-03-28T15:51:57 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.microsoft.com |
8.2. Lesson: Changing Raster Symbology¶
Not.
8.2.1.
Try Yourself¶
Use the Browser Panel to load the new raster dataset;
Load the dataset
srtm_41_19_4326.tif, found under the directory
exercise_data/raster/SRTM/;
Once it appears in the Layers Panel, rename it to
DEM;
Zoom to the extent of this layer by right-clicking on it in the Layer List and selecting Zoom to Layer.
This dataset is a Digital Elevation Model (DEM). It’s a map of the elevation (altitude) of the terrain, allowing us to see where the mountains and valleys are, for example.
While each pixel of dataset of the previous section contained color information, in a DEM file, each pixel contains elevation values.
Once it’s loaded, you’ll notice that it’s a basic stretched grayscale representation of the DEM:
QGIS has automatically applied a stretch to the image for visualization purposes, and we will learn more about how this works as we continue.
8.2.2.
Follow Along: Changing Raster Layer Symbology¶
You have basically two different options to change the raster symbology:
Within the Layer Properties dialog for the DEM layer by right-clicking on the layer in the Layer tree and selecting Properties option. Then switch to the Symbology tab;
By clicking on the
button right above the Layers Panel. This will open the Layer Styling anel where you can switch to the Symbology tab.
Choose the method you prefer to work with.
8.2.3.
Follow Along: Singleband gray¶ grayscale is stretched to the
minimum and maximum values.
Look at the difference with the enhancement (left) and without (right):
But what are the minimum and maximum values that should be used for the stretch? The ones that are currently under Min / Max Value Settings. There are many ways that you can use to calculate the minimum and maximum values and use them for the stretch:
User Defined: you choose both minimum and maximum values manually;
Cumulative count cut: this is useful when you have few extreme low or high values. It cuts the
2%(or the value you choose) of these values;
Min / max: the real minimum and maximum values of the raster;
Mean +/- standard deviation: the values will be calculated according to the mean value and the standard deviation.
8.2.4.
Follow Along: Singleband pseudocolor¶
Grayscales are not always great styles for raster layers. Let’s try to make the DEM layer more colorful.
Change the Render type to Singleband pseudocolor: if you don’t like the default colors loaded, click on Color ramp and change them;
Click the Classify button to generate a new color classification;
If it is not generated automatically click on the OK button to apply this classification to the DEM.
You’ll see the raster looking like this:
This is an interesting way of looking at the DEM. You’ll now see that the values of the raster are again properly displayed, with the darker colors representing valleys and the lighter ones, mountains.
8.2.5. Follow Along: Changing the transparency¶ of single pixels. For example in the raster we used you can see an homogeneous color at the corners:
To set this values as transparent, the Custom Transparency Options menu in Transparency has some useful methods:
By clicking on the
button you can add a range of values and set the transparency percentage of each range chosen;
For single values the
button is more useful;
Click on the
button. The dialog disappearing and you can interact with the map;
Click on a corner of the raster file;
You will see that the transparency table will be automatically filled with the clicked values:
Click on OK to close the dialog and see the changes.
See? The corners are now 100% transparent.
8.2.6. In Conclusion¶
These are only the basic functions to get you started with raster symbology. QGIS also allows you many other options, such as symbolizing a layer using paletted/unique values, representing different bands with different colors in a multispectral image or making an automatic hillshade effect (useful only with DEM raster files).
8.2.7. Reference¶
The SRTM dataset was obtained from | https://docs.qgis.org/3.10/en/docs/training_manual/rasters/changing_symbology.html | 2020-03-28T15:54:04 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['../../../_images/greyscale_dem.png',
'../../../_images/greyscale_dem.png'], dtype=object)
array(['../../../_images/dem_layer_properties.png',
'../../../_images/dem_layer_properties.png'], dtype=object)
array(['../../../_images/enhancement.png',
'../../../_images/enhancement.png'], dtype=object)
array(['../../../_images/dem_pseudocolor_properties.png',
'../../../_images/dem_pseudocolor_properties.png'], dtype=object)
array(['../../../_images/pseudocolor_raster.png',
'../../../_images/pseudocolor_raster.png'], dtype=object)
array(['../../../_images/global_transparency.png',
'../../../_images/global_transparency.png'], dtype=object)
array(['../../../_images/corner_values.png',
'../../../_images/corner_values.png'], dtype=object)] | docs.qgis.org |
Become a Translator & Help Us Grow
Popup Maker has always been about accessibility and giving our users the best experience possible. We want to grow our community and to do that, we have to go global! We want to be translated globally, and our ultimate goal is to have Popup Maker translated in as many languages as the WordPress Core itself. This would allow users around the world to have equally satisfying experiences.
We are looking for capable individuals to help us translate Popup Maker globally.
Our Needs
Currently, no language is 100% complete. Those brave enough to help slay this beast may choose which language they want to assist with – we are looking for both Translators and at least one (1) Editor (those who approve translations) per language.
If you are interested in becoming a Translator or Editor, we offer free products in exchange for helping translate portions of Popup Maker or approving translations for Popup Maker.
Current Status
To see our current status and progress on all languages, check out this report:
What You Get
For Translators, every 40 strings that get approved, you get your a 1 site license for the extension of your choice.
For becoming an Active Editor for a minimum time and approving translations, you receive our Optimize plan for one (1) year – a $249 value! If you continue to remain active, your license will be extended!
If you want to take advantage of this opportunity as a Translator, get started by heading over the WordPress Translator Portal.
If you’re looking to become an Editor, first become a Translator, then contribute at least 40 total strings in your chosen language. Once you have finished that go to Support and submit a ticket using the “Other” Type, then title the ticket “Become Editor for Language XYZ.”
Stay patient, warriors! While signing up & pumping content as a Translator happens quickly, becoming an Active Editor takes some time. Keep contributing and stay active, and your goodies will arrive forthwith. | https://docs.wppopupmaker.com/article/151-become-a-translator-help-us-grow | 2020-03-28T13:45:12 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559c68b9e4b03e788eda1692/images/57685a189033601c8a8ec385/file-IGLVsuS1L0.jpg',
None], dtype=object) ] | docs.wppopupmaker.com |
This topic applies to:
- WSS 3.0 and MOSS 2007
- SharePoint Foundation 2010/2013 and SharePoint Server 2010/2013/2016
Several issues and their corresponding solutions are described below. Click the link to jump to the appropriate issue.
- Issue 1: Copying or Removing Assemblies
- Issue 2: Deployment Fails or Times Out
- Issue 3: Copying a File Failed
- Issue 4: Resources Scoped for one Web App must be deployed to more Web Apps
- Issue 5: Cannot Find a Certain File
Issue 1: Copying or Removing Assemblies
With SharePoint 2010 and 2013, we sometimes see failures when trying to copy assemblies to the Global Assembly Cache (GAC) or remove assemblies or other files from the bin or 14 or 15 hive during solution retraction and/or deployment.
Resolution:
- Restart the SharePoint 2010/2013 Administration service on all of the Web Front End servers (all servers on the farm where the Foundation Web Application service is running).
Restart the SharePoint 2010/2013/2016 Timer service as well..
Remove the solution.
Reinstall the solution.
For more information, see:
Top
Issue 2: Deployment Fails or Times Out
Deployment fails, and the reason is not clear from the error shown in installation feedback (or just shows time out errors), or the Bamboo GUI installer appears to stop in the middle of the deployment, and eventually times out.
The Timer Job Definitions in Central Administration may show one or more persistent “one time” timer jobs listed.
Manual installations using stsadm also fails to complete the installation.
Resolution:
Delete the one-time timer jobs listed in the timer job definition list. Restart the SharePoint 2010/2013 Timer service on each server in the farm running the Foundation Web Application service.
Try the installation again.
Any solutions that show up in on the Solution Management page in Central Administration with a status of “undeployed” or “error” need to be either deployed manually in Solution Management, or removed, if you are going to run the Bamboo GUI installer. For more information, see Installation stops at the Repair Remove or Install screen
Run the stsadm installation using the -local rather than the -immediate parameter for stsadm -deploysolution. This will not invoke the timer service. As stated in this Technet article:
-local “Deploys the solution synchronously on the local computer only. The timer service is not used.”
You will have to run the deploysolution with the -local parameter on each server running the Windows SharePoint Services Web Application service or Foundation Web Application service.
For more information about deploysolution, see:
Deploysolution: Stsadm operation (Office SharePoint Server)
For more information about which server is running the Windows SharePoint Services Web Application Service, look in Central Administration:
– on SP 2007 in Operations > Servers in Farm
– on SP 2010 in System Settings > Manage Servers in this Farm
- Clear the SharePoint Configuration cache. The Web Front End servers may be out of sync. For more information and instructions see Clear the SharePoint Configuration Cache for Timer Jobs.
If you experience issues with WSS and MOSS timer jobs failing to complete are receiving errors trying to run psconfig, clearing the configuration cache on the farm is a possible method for resolving the issue. The config cache is where we cache configuration information (stored in the config database) on each server in the farm. Caching the data on each server prevents us from having to make SQL calls to pull this information from the configuration database. Sometimes this data can become corrupted and needs to be cleared out and rebuilt. If you only see a single server having issues, only clear the config cache on that server, you do not need to clear the cache on the entire farm.
To clear the cache a single server, follow the steps below on just the problem server.
- Stop the OWSTIMER service on ALL of the MOSS servers in the farm.
On the Index server, navigate to:
Server 2003 location: Drive:Documents and SettingsAll UsersApplication DataMicrosoftSharePointConfigGUID and delete all the XML files from the directory.
Server 2008 location: Drive:ProgramDataMicrosoftSharePointConfigGUID Index server and wait for XML files to begin to reappear in the directory.
- After you see XML files appearing on the Index server, repeat steps 2, 3 & 4 on each query server, waiting for XML files to appear before moving to subsequent servers.
- After all of the query servers have all been cleared and new .xml files have been generated, proceed to the WFE and Application servers in the farm, following steps 2, 3, 4 and 5 for each remaining server.
Top
Issue 3: Copying a File Failed
This is the error message that you get: “Copying of this file failed. This operation uses the SharePoint Administration service (spadmin), which could not be contacted. If the service is stopped or disabled, start it and try the operation again.”
Resolution:
For instructions and more information, see this MSDN article.
Top
Issue 4: Resources Scoped for one Web App must be deployed to more Web Apps
This is the error message that you see: “This solution contains resources scoped for a Web application and must be deployed to one or more Web applications.”
Resolution:
Usually this can be resolved by running the Bamboo GUI installer, removing the product, and then reinstalling it.
We have also found that if the solution is showing up in the Solution Management page as installed but not deployed, you can try to run a manual deployment using stsadm.
See Best Practices for Installing Bamboo Products
Also see MSDN Issues Deploying SharePoint Solution Packages
Top
Issue 5: Cannot Find a Certain File
After an apparently successful deployment, you see errors about not being able to find file(s) when attempting to view products on a page.
Resolution:
Be sure to exclude directories such as %systemroot%Program FilesCommon FilesMicrosoft SharedWeb Server Extensions from file level antivirus scanning, or you may find that files that were deployed in that directory will be removed when the antivirus scan runs.
For more information, see this TechNet article.
Top | https://docs.bamboosolutions.com/document/troubleshoot_problems_with_deploying_farm_solutions/ | 2019-06-16T05:56:11 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.bamboosolutions.com |
Initialize your Helm configuration
This cluster has Helm (and its tiller server) already installed, so execute the following command to start using Helm:
$ helm init, step by step, *
You can find an example of the installation of Red, run the following command:
$ helm install stable/mongodb
NOTE: Check the configurable parameters of the MongoDB chart and their default values at the official Kubernetes GitHub repository.
To install the most recent Odoo release,, access (replacing the YOUR_IP placeholder with your instance’s address) using a browser. You will be prompted a username and a password. Follow this section to obtain them.: | https://docs.bitnami.com/google/infrastructure/kubernetes-sandbox/get-started/get-started/ | 2019-06-16T06:21:00 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.bitnami.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.