content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
2. Deciding what to contribute¶ In this section, I will describe what a contributor should do and should not do for choosing what to contribute for Erlang/OTP. 2.1. Erlang/OTP and the OTP Team need your help¶ Before getting into the details, I would like to emphasize the fact that the Erlang community and OTP Team always need your help to fix the bugs and improve the software. Erlang/OTP is a well-designed and highly stable product, as well as a language and a concurrent system. Most of the features are carefully designed and easy to use. Reading the OTP online manual will solve most of the problems you will encounter. (If you haven’t, you should do it now.) Erlang/OTP is actively maintained by the OTP Team. The OTP Team is not a group of volunteers; they are paid developers hired by Ericsson, and takes the final responsibility of the code quality. Erlang/OTP is not a hobby product; it is a mission-critical product for many parts of the telecommunication and internet infrastructure systems. Unfortunately, even for such a great product, you might find numerous bugs. Sometimes those bugs are nasty enough to prevent writing further code, or may result in a serious vulnerability or a security hole. If you find any of the bugs, they must be reported first to the OTP Team, even if you cannot fix them. Erlang/OTP uses the bug reporting site [3] solely for this purpose. Also, OTP team has been chronically understaffed, so their bug fixing resources are limited. Lots of other people are helping the OTP team, such as Erlang Solutions, Industrial Erlang User Group, and the contributors of the Erlang/OTP GitHub Repository. Many people from the Elixir language community also help a lot. The development history of Erlang/OTP can be found here as a paper written by Joe Armstrong, the principal author of Erlang and one of the four leading developers of Erlang, as a presentation called “A History of Erlang” presented in HOPS III conference held in 2007. 2.2. What is considered as a contribution?¶ Erlang/OTP has become an open source software (OSS) product since 1998. OSS is a community product, and every OSS has the own governance system. For Erlang/OTP, the OTP Team makes the final decision on what should be accepted or not as a contribution. It’s more like a commercial product governance than other OSS products, based on the centralized approach of making decisions, while accepting broader contributions from the users through the GitHub repository and the Pull Request workflow. Erlang/OTP has the own contribution guideline page. It is mainly focused onto contributing code, but other fixes such as those for the documentation are also frequently accepted. Describing the reason to change is essential for proposing a change to Erlang/OTP. Here’s a quote from the guideline: It is important to write a good commit message explaining why the feature is needed. We [OTP Team] prefer that the information is in the commit message, so that anyone that want to know two years later why a particular feature can easily find out. It does no harm to provide the same information in the pull request (if the pull request consists of a single commit, the commit message will be added to the pull request automatically). Note that a proposal which contains the Erlang language and semantic changes should be proposed as a part of Erlang Enhancement Process (EEP) as an Erlang Extension Proposal (also called EEP). You can find the archive of Erlang Enhancement Process and the proposals in GitHub. In this document, I will not discuss the details on writing an EEP, because writing an EEP is for changing the syntax and semantics of Erlang and OTP, and it requires far deeper and broader knowledge than writing an OTP module. 2.3. How not to write Erlang/OTP modules¶ In Erlang, a group of functions is called module. Erlang/OTP is a set of modules [1]. Writing a module is an essential part of the OTP software development process. And a set of modules is called application. Making a decision of writing or not writing a module should be done carefully. Erlang/OTP provides various useful functions as their own modules. You will be astonished to find out the functions in the modules provided in the basic two applications, kernel and stdlib. You can find many other functions and modules in OTP suitable for your work. A general tip for choosing a function to use is to always consider using the existing functions in the OTP modules first before reinventing or even inventing your own ones. In most cases you can do what you want to do by choosing an existing function. In short: read the OTP documentation first before writing your own code; your problem can be solved by combining existing functions. Development of Erlang software usually means writing the project-specific modules and make them the project-specific applications. The developed OTP applications can be combined with the OTP runtime environment as a release. In many cases having a release for your specific tasks will be sufficient for running your own software [2], even you have to include your own modules. 2.4. Why writing Erlang/OTP modules then?¶ In most cases you don’t have to write the modules which should reside in OTP. For some cases, however, you need to write and contribute your code to the Erlang/OTP modules. You need to consider a few issues before thinking about making a contribution to OTP, as described in this section. 2.4.1. Is the module or new code really needed in OTP?¶ OTP is part of Erlang and cannot be separated from OTP [4]. If your module is accepted to OTP, that will affect all users of Erlang/OTP, including other Erlang-derived language users, such as those of Elixir and LFE. If your contribution is a critical bug fix, it should be merged as soon as possible. If it’s a feature or an enhancement, it would take a much longer time to assess the impact of change. 2.4.2. Who needs the new code?¶ If what you want to do with your contribution can be done without changing other parts of Erlang/OTP, it should be separated from the OTP. Just because your code provides a cool feature does not necessarily justify that the code should be a part of Erlang/OTP. 2.4.3. Removing old code from OTP is hard¶ Removing an OTP module is extremely difficult after once it is accepted as a part of OTP. For example, the module random, which is being replaced by a new module rand which has become official since OTP 18.0, will remain for two major versions; it is now deprecated on OTP version 19, and will be removed on OTP version 20. The life of single OTP version is usually one year [5], so two years will be required as minimum to replace an obsolete piece of code by the respective new code. 2.5. A case study: rand module¶ The module rand, a pseudo random number generator (PRNG) which I contributed with Dan Gudmundsson as the corresponding OTP maintainer for Erlang/OTP 18.0, had many specific reasons to be a part of OTP as follows: - AS183, the algorithm of random, has been exploited less than a day with a modern computer [6], so there was a strong need to provide the alternative to plug the security hole; - Finer resolution of the output, which gives sufficient precision for Erlang float numbers; - Much longer period for preventing prediction of random number values; - Fully compatible or even simplified API for the programmer; - Multiple choices of algorithms available for future extension and bugfix; and - the execution speed for the default algorithm is as fast as randomon a modern 64bit CPU. There are also other situational factors on making rand module possible: - Xorshift*/+, a compact and well-tested PRNG implementation became available in the Public Domain; - Awareness on improving algorithms of language-specific PRNGs increased (e.g, JavaScript V8 Engine failure discovered on November 2015); and - I gained expertise of building PRNG module on Erlang/OTP by building prototypes for many different algorithms. On the other hand, it took four years to actually start implementing the randmodule after the first PRNG security incident of Erlang/OTP discovered by Geoff Cant on May 2011. My lessons learned through the rand module development process are as follows: - Software is a sticky being. You need to convince a lot of people to revise an old piece of software, even it has a critical vulnerability. - Contribution of lots of people are needed to actualize revision of a piece of software. Having a good idea itself is not enough. - You need to take time for the whole process unravels. Footnotes
https://docs.jj1bdx.tokyo/writing-otp-modules/html/what-to-contribute.html
2018-12-10T03:53:10
CC-MAIN-2018-51
1544376823303.28
[]
docs.jj1bdx.tokyo
Invoking Flake8¶ Once you have installed Flake8, you can begin using it. Most of the time, you will be able to generically invoke Flake8 like so: flake8 ... Where you simply allow the shell running in your terminal to locate Flake8. In some cases, though, you may have installed Flake8 for multiple versions of Python (e.g., Python 2.7 and Python 3.5) and you need to call a specific version. In that case, you will have much better results using: python2.7 -m flake8 Or python3.5 -m flake8 Since that will tell the correct version of Python to run Flake8. Note Installing Flake8 once will not install it on both Python 2.7 and Python 3.5. It will only install it for the version of Python that is running pip. It is also possible to specify command-line options directly to Flake8: flake8 --select E123 Or python<version> -m flake8 --select E123 Note This is the last time we will show both versions of an invocation. From now on, we’ll simply use flake8 and assume that the user knows they can instead use python<version> -m flake8 instead. It’s also possible to narrow what Flake8 will try to check by specifying exactly the paths and directories you want it to check. Let’s assume that we have a directory with python files and sub-directories which have python files (and may have more sub-directories) called my_project. Then if we only want errors from files found inside my_project we can do: flake8 my_project And if we only want certain errors (e.g., E123) from files in that directory we can also do: flake8 --select E123 my_project If you want to explore more options that can be passed on the command-line, you can use the --help option: flake8 --help And you should see something like: Usage: flake8 [options] file file ... Options: -) --filename=patterns Only check for filenames matching the patterns in this comma-separated list. (Default: *.py) -) --max-line-length=n Maximum allowed line length for the entirety of this run. (Default: 79) --select=errors Comma-separated list of errors and warnings to enable. For example, ``--select=E4,E51,W234``. (Default: ) --disable-noqa Disable the effect of "# noqa". This will report errors on lines with "# noqa" at the end. --show-source Show the source generate each error or warning. --statistics Count errors and warnings. --enabled-extensions=ENABLED_EXTENSIONS Enable plugins and extensions that are otherwise disabled by default --exit-zero Exit with status code "0" even if there are errors. -j JOBS, --jobs=JOBS Number of subprocesses to use to run checks in parallel. This is ignored on Windows. The default, "auto", will auto-detect the number of processors available to use. (Default: auto) --output-file=OUTPUT_FILE Redirect report to a. --builtins=BUILTINS define more built-ins, comma separated --doctests check syntax of the doctests --include-in-doctest=INCLUDE_IN_DOCTEST Run doctests only on these files --exclude-from-doctest=EXCLUDE_FROM_DOCTEST Skip these files when running doctests Installed plugins: pyflakes: 1.0.0, pep8: 1.7.0
https://flake8.readthedocs.io/en/3.0.4/user/invocation.html
2018-12-10T04:36:53
CC-MAIN-2018-51
1544376823303.28
[]
flake8.readthedocs.io
Themes This help article will demonstrate a step by step tutorial how to customize the ControlDefault theme of RadBrowseEditor. Open VisualStyleBuilder: Start menu (Start >> Programs >> Telerik >> UI for WinForms [version] >> Tools). Export the built-in themes in a specific folder by selecting File >> Export Built-in Themes. Load a desired theme from the just exported files by selecting File >> Open Package Select BrowseEditorButton in Controls Structure on the left side. Then, select ButtonFill in the Elements section. Modify the applied fill repository item. You can see the result directly in the Visual Style Builder. Save the theme by selecting File >> Save As. Now, you can apply your custom theme to RadBrowseEditor by using the demonstrated approach in the following link: Using custom themes
https://docs.telerik.com/devtools/winforms/controls/editors/browseeditor/customizing-appearance/themes
2018-12-10T04:39:46
CC-MAIN-2018-51
1544376823303.28
[]
docs.telerik.com
The Migrate to vSAN dashboard provides you with an easy way to move virtual machines from existing storage to newly deployed vSAN storage. You can use this dashboard to select non-vSAN datastores that might not serve the virtual machine IO demand. By selecting the virtual machines on a given datastore, you can identify the historical IO demand and the latency trends of a given virtual machine. You can then find a suitable vSAN datastore which has the space and the performance characteristics to serve the demand of this VM. You can move the virtual machine from the existing non-vSAN datastore to the vSAN datastore. You can continue to watch the use patterns to see how the VM is served by vSAN after you move the VM.
https://docs.vmware.com/en/vRealize-Operations-Manager/6.7/com.vmware.vcom.config.doc/GUID-BBE34155-E834-4550-8E2E-449687968706.html
2018-12-10T05:04:42
CC-MAIN-2018-51
1544376823303.28
[]
docs.vmware.com
Matplotlib¶ xlwings.Plot() allows for an easy integration of Matplotlib with Excel. The plot is pasted into Excel as picture. Getting started¶ The easiest sample boils down to: >>> import matplotlib.pyplot as plt >>> import xlwings as xw >>> fig = plt.figure() >>> plt.plot([1, 2, 3]) >>> wb = xw.Workbook() >>> xw.Plot(fig).show('MyPlot') Note You can now resize and position the plot on Excel: subsequent calls to show with the same name ( 'MyPlot') will update the picture without changing its position or size. Full integration with Excel¶ Calling the above code with RunPython and binding it e.g. to a button is straightforward and works cross-platform. However, on Windows you can make things feel even more integrated by setting up a UDF along the following lines: @xw.func def myplot(n): wb = xw.Workbook.caller() fig = plt.figure() plt.plot(range(int(n))) xw.Plot(fig).show('MyPlot') return 'Plotted with n={}'.format(n) If you import this function and call it from cell B2, then the plot gets automatically updated when cell B1 changes: Properties¶ Size, position and other properties can either be set as arguments within show, see xlwings.Plot.show(), or by manipulating the picture object as returned by show, see xlwings.Picture(). For example: >>> xw.Plot(fig).show('MyPlot', left=xw.Range('B5').left, top=xw.Range('B5').top) or: >>> plot = xw.Plot(fig).show('MyPlot') >>> plot.height /= 2 >>> plot.width /= 2 Note Once the picture is shown in Excel, you can only change it’s properties via the picture object and not within the show method. Getting a matplotlib figure¶ Here are a few examples of how you get a matplotlib figure object: via PyPlot interface: import matplotlib.pyplot as plt fig = plt.figure() plt.plot([1, 2, 3, 4, 5]) or: import matplotlib.pyplot as plt plt.plot([1, 2, 3, 4, 5]) fig = plt.gcf()() Then show it in Excel as picture as seen above: plot = Plot(fig) plot.show('Plot1')
http://docs.xlwings.org/en/v0.7.1/matplotlib.html
2018-12-10T04:24:08
CC-MAIN-2018-51
1544376823303.28
[]
docs.xlwings.org
Shading language¶ Вступ¶ Godot uses a shading language similar to GLSL ES 3.0. Most datatypes and functions are supported, and the few remaining ones will likely be added over time. If you are already familiar with GLSL, the Godot Shader Migration Guide is a resource that will help you transition from regular GLSL to Godot's shading language. Типи даних¶ Most GLSL ES 3.0 datatypes are supported: Casting¶ Just like GLSL ES 3.0, implicit casting between scalars and vectors of the same size but different type is not allowed. Casting of types of different size is also not allowed. Conversion must be done explicitly via constructors. Приклад: float a = 2; // invalid float a = 2.0; // valid float a = float(2); // valid Default integer constants are signed, so casting is always needed to convert to unsigned: int a = 2; // valid uint a = 2; // invalid uint a = uint(2); // valid Члени]. Constructing. Точність. Some architectures (mainly mobile) can benefit significantly from this, but there are downsides such as the additional overhead of conversion between precisions. Refer to the documentation of the target architecture for further information. In many cases, mobile drivers cause inconsistent or unexpected behavior and it is best to avoid specifying precision unless necessary. Масиви++) { // ... } Примітка. Константи; Оператори! Попередження When exporting a GLES2 project to HTML5, WebGL 1.0 will be used. WebGL 1.0 doesn't support dynamic loops, so shaders using those won't work there. Відкидання¶ Fragment and light functions can use the discard keyword. If used, the fragment is discarded and nothing is written. Функції. Example below: void sum2(int a, int b, inout int result) { result = a + b; } Varyings) Примітка: Примітка. Примітка For a list of the functions that are not available in the GLES2 backend, please see the Differences between GLES2 and GLES3 doc.
https://docs.godotengine.org/uk/stable/tutorials/shading/shading_reference/shading_language.html
2021-01-16T06:02:16
CC-MAIN-2021-04
1610703500028.5
[]
docs.godotengine.org
Predictions with Pyro + GPyTorch (High-Level Interface)¶ Overview¶ In this example, we will give an overview of the high-level Pyro-GPyTorch integration - designed for predictive models. This will introduce you to the key GPyTorch objects that play with Pyro. Here are the key benefits of the integration: Pyro provides: - The engines for performing approximate inference or sampling - The ability to define additional latent variables GPyTorch provides: - A library of kernels/means/likelihoods - Mechanisms for efficient GP computations [1]: import math import torch import pyro import tqdm import gpytorch from matplotlib import pyplot as plt %matplotlib inline In this example, we will be doing simple variational regression to learn a monotonic function. This example is doing the exact same thing as GPyTorch’s native approximate inference, except we’re now using Pyro’s variational inference engine. In general - if this was your dataset, you’d be better off using GPyTorch’s native exact or approximate GPs. (We’re just using a simple example to introduce you to the GPyTorch/Pyro integration). [2]: train_x = torch.linspace(0., 1., 21) train_y = torch.pow(train_x, 2).mul_(3.7) train_y = train_y.div_(train_y.max()) train_y += torch.randn_like(train_y).mul_(0.02) fig, ax = plt.subplots(1, 1, figsize=(3, 2)) ax.plot(train_x.numpy(), train_y.numpy(), 'bo') ax.set_xlabel('x') ax.set_ylabel('y') ax.legend(['Training data']) [2]: <matplotlib.legend.Legend at 0x11ddf7320> The PyroGP model¶ In order to use Pyro with GPyTorch, your model must inherit from gpytorch.models.PyroGP (rather than gpytorch.modelks.ApproximateGP). The PyroGP extends the ApproximateGP class and differs in a few key ways: - It adds the modeland guidefunctions which are used by Pyro’s inference engine. - It’s constructor requires two additional arguments beyond the variational strategy: likelihood- the model’s likelihood num_data- the total amount of training data (required for minibatch SVI training) name_prefix- a unique identifier for the model [3]: class PVGPRegressionModel(gpytorch.models.PyroGP): def __init__(self, train_x, train_y, likelihood): # Define all the variational stuff variational_distribution = gpytorch.variational.CholeskyVariationalDistribution( num_inducing_points=train_y.numel(), ) variational_strategy = gpytorch.variational.VariationalStrategy( self, train_x, variational_distribution ) # Standard initializtation super(PVGPRegressionModel, self).__init__( variational_strategy, likelihood, num_data=train_y.numel(), name_prefix="simple_regression_model" ) self.likelihood = likelihood # Mean, covar self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel( gpytorch.kernels.MaternKernel(nu=1.5) ) def forward(self, x): mean = self.mean_module(x) # Returns an n_data vec covar = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean, covar) [4]: model = PVGPRegressionModel(train_x, train_y, gpytorch.likelihoods.GaussianLikelihood()) Performing inference with Pyro¶ Unlike all the other examples in this library, PyroGP models use Pyro’s inference and optimization classes (rather than the classes provided by PyTorch). If you are unfamiliar with Pyro’s inference tools, we recommend checking out the Pyro SVI tutorial. [5]: # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) num_iter = 2 if smoke_test else 200 num_particles = 1 if smoke_test else 256 def train(lr=0.01): optimizer = pyro.optim.Adam({"lr": 0.1}) elbo = pyro.infer.Trace_ELBO(num_particles=num_particles, vectorize_particles=True, retain_graph=True) svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo) model.train() iterator = tqdm.notebook.tqdm(range(num_iter)) for i in iterator: model.zero_grad() loss = svi.step(train_x, train_y) iterator.set_postfix(loss=loss) %time train() CPU times: user 17.7 s, sys: 460 ms, total: 18.2 s Wall time: 2.75 s In this example, we are only performing inference over the GP latent function (and its associated hyperparameters). In later examples, we will see that this basic loop also performs inference over any additional latent variables that we define. Making predictions¶ For some problems, we simply want to use Pyro to perform inference over latent variables. However, we can also use the models’ (approximate) predictive posterior distribution. Making predictions with a PyroGP model is exactly the same as for standard GPyTorch models. [6]: fig, ax = plt.subplots(1, 1, figsize=(4, 3)) train_data, = ax.plot(train_x.cpu().numpy(), train_y.cpu().numpy(), 'bo') model.eval() with torch.no_grad(): output = model.likelihood(model(train_x)) mean = output.mean lower, upper = output.confidence_region() line, = ax.plot(train_x.cpu().numpy(), mean.detach().cpu().numpy()) ax.fill_between(train_x.cpu().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy(), color=line.get_color(), alpha=0.5) ax.set_xlabel('x') ax.set_ylabel('y') ax.legend([train_data, line], ['Train data', 'Prediction']) [6]: <matplotlib.legend.Legend at 0x11e3ffeb8> Next steps¶ This was a pretty boring example, and it wasn’t really all that different from GPyTorch’s native SVGP implementation! The real power of the Pyro integration comes when we have additional latent variables to infer over. We will see an example of this in the next example, which learns a clustering over multiple time series using multitask GPs and Pyro.
https://docs.gpytorch.ai/en/latest/examples/07_Pyro_Integration/Pyro_GPyTorch_High_Level.html
2021-01-16T05:12:45
CC-MAIN-2021-04
1610703500028.5
[array(['../../_images/examples_07_Pyro_Integration_Pyro_GPyTorch_High_Level_3_1.png', '../../_images/examples_07_Pyro_Integration_Pyro_GPyTorch_High_Level_3_1.png'], dtype=object) array(['../../_images/examples_07_Pyro_Integration_Pyro_GPyTorch_High_Level_11_1.png', '../../_images/examples_07_Pyro_Integration_Pyro_GPyTorch_High_Level_11_1.png'], dtype=object) ]
docs.gpytorch.ai
Larger the business, more complex becomes the analysis. You need to have multiple fields on column header and same on row header. Have a look at the example given below: Figure 29: Grouping of Data When to group When summary field depends on two level of information, you may consider grouping. Some examples: - Zone and area expenditure report showing monthly expenditure by expenditure-heads. - Quarterly flight occupancy chart showing occupancy by sector and flight number. However, there is a dependency. Data will appear correctly only if it is grouped logically. You may not like to group data first by cities and within cities, by states!! How to group To do this, select multiple fields for column header or row header in the right sequence. Selecting in right sequence will decide if the cross-tab will turn out to be logical or a meaningless matrix. Figure 30: Grouping data on cross-tab The field that is selected earlier will appear outside (major group). The field that is selected later will appear inside (minor group). Other steps in creating a cross-tab remain the same. You can also create groups based on column headers. The steps are logically the same as that of creating group by row headers. Example To get the cross-tab like the example given above, Figure 31: Grouping of Data We need record-set having the following fields: - ExpenditureType - DeptCode - ZoneCode - BranchCode - ExpAmt General steps to get this cross-tab in Intellicus Studio are: - Set the appropriate connection, create an SQL and refresh fields. - Place Cross-tab component preferably on Report Header. - Select ExpenditureType as first row header. - Select DeptCode as second row header. - Select ZoneCode as first column header. - Select BranchCode as second column header. - Select ExpAmt as summary field. - Apply formatting to row headers, column headers and summary field.
https://docs.intellicus.com/documentation/using-intellicus-19-0/studio-reports-19-0-2/data-grouping-on-cross-tab-19-0/
2021-01-16T06:21:36
CC-MAIN-2021-04
1610703500028.5
[array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Grouping-of-data.png', 'data grouping'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/Grouping-data-on-cross-tab.png', 'Grouping data'], dtype=object) array(['https://docs.intellicus.com/wp-content/uploads/2019/01/grouping-of-data1.png', 'Grouping of Data'], dtype=object) ]
docs.intellicus.com
the add-on, the user "splynx-remote-support" will be added to the server as a UNIX user and as an administrator of Splynx. For this user, two-factor authentication is enabled, and the password is changed regularly, in order to avoid password brute-force attacks. In order to avoid the transfer of within our tunnel. Installation can be performed in 2 methods: Web-based installation Navigate to Config -> Add-ons Locate or search for the "splynx-remote-support" add-on and click on the install button Click on the "OK, confirm" button to begin the installation process Wait for confirmation that the addon was correctly installed Check that the addon is installed in Config -> Add-ons Installation using the command line To install the addon via CLI, run the followong command: apt-get update && apt-get -y install splynx-remote-support After installation process has completed, do not remove these users, and do not close the SSH and WEB TCP ports in the firewall of the OpenVPN interface.
https://docs.splynx.com/addons_modules/splynx_remote_support/splynx_remote_support.md
2021-01-16T05:57:04
CC-MAIN-2021-04
1610703500028.5
[array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fsplynx_remote_support%2F0.png', 'Add-ons'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fsplynx_remote_support%2Finstall_icon.png', None], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fsplynx_remote_support%2Fweb1.png', None], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fsplynx_remote_support%2Fweb2.png', None], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fsplynx_remote_support%2Fweb4.png', None], dtype=object) ]
docs.splynx.com
Environment Record Fields An environment represents a Salesforce organization and is created automatically when an org credential is authenticated. Below you will find a description of all the relevant fields included in an Environment record. Details - Environment Name: Name chosen for your environment. - Org ID: External ID of the organization linked to an environment. - Type: Select the type of environment. It can be a Production/Developer environment, a Sandbox environment or a Scratch Org environment. - Run all local tests: If selected, Run all tests will be enforced when deploying to an environment. - Namespace: If you are developing managed packages, this will help you write only one version of the pre/post Apex code in Selenium test suites. Copado will replace any "(!NS)" text with this value. - Index Back Promotion Metadata: If enabled, when a back promotion which contains metadata found on user stories in the destination environment is completed, the status of the User Story Metadata records of these user stories will be updated to 'Back Promoted'. - Promotion Default Credential: The default credential of the destination org is always selected by default. Optionally, the default credential can be selected only if the user has no org credential in the destination org. - Validation Promotion Default Credential: The default credential of the destination org is always selected by default. Optionally, the default credential can be selected only if the user has no org credential in the destination org. - Current SCA Score: Current static code analysis score. - Maximum SCA Score: Maximum static code analysis score. - Minimum Apex Test Coverage: Set the minimum Apex test coverage here, which will be used for Apex test results in the Org Credentials detail page. This will also be the default test coverage threshold that will be applied to user stories originating in this particular environment. - Current Code Coverage: This field shows the percentage of code covered during an Apex test: Compliance - Compliance Rule Group: A compliance rule group holds the reference to the compliance rules and criteria which are going to be enforced at an environment level. - Compliance Scan Events: You can select either Deployments or Commits. - Compliance Status: This field shows the status of the compliance scan. - Last Compliance Scan Date: Date when the last compliance scan was run:
https://docs.copado.com/article/1v2z8kd671-environment-record-fields
2021-01-16T05:36:40
CC-MAIN-2021-04
1610703500028.5
[array(['https://files.helpdocs.io/U8pXPShac2/articles/1v2z8kd671/1559232790336/environment-fields.png', None], dtype=object) ]
docs.copado.com
How to control the startup message about updating linked workbooks in Excel Note Office 365 ProPlus is being renamed to Microsoft 365 Apps for enterprise. For more information about this change, read this blog post. Summary This step-by-step article describes how to control the startup message about updating linked workbooks in Microsoft Office Excel. When you open a workbook that contains links to cells in other workbooks, you may receive the following message: This workbook contains links to other data sources. If you update the links, Excel attempts to retrieve the latest data. If you don't update, Excel uses the previous information. You can click either Update or Don't Update. By default, Excel displays this message. You can control if it appears, and if Excel updates the links manually or automatically. To make these changes, use the following methods. Note - Regardless of the options that you choose, Excel still displays a message if the workbook contains links that are not valid or links that are broken. - To find information about the links in a workbook in Excel 2010 and later versions, select Edit Links in the Queries & Connections group on the Data tab. Additionally, the following options apply only when the workbook that contains the basic data is closed. If the workbook with the basic data is already open when you open the workbook that contains the links, the links are updated. Automatic update and no message To suppress the message and to automatically update the links when you open a workbook in Excel, follow these steps: - Select File > Options > Advanced. - Under General, click to clear the Ask to update automatic links check box. Note - When the Ask to update automatic links check box is cleared, the links are automatically updated. Additionally, no message appears. - This option applies to the current user only and affects every workbook that the current user opens. Other users of the same workbooks are not affected. Manual update and no message If you are sharing this workbook with other people who will not have access to the sources of the updated links, you can turn off updating and the prompt for updating. To suppress the message and leave the links (not updated) until you choose to update them, follow these steps: In Excel, select Edit Links in the Queries & Connections group on the Data tab. Click Startup Prompt. Click the Don't display the alert and don't update automatic links option. Warning If you choose not to update the links and not to receive the message, users of the workbook will not know that the data is out of date. This choice affects all users of the workbook. However, this choice applies only to that particular workbook. To update the links manually, follow these steps: - Select Edit Links in the Queries & Connections group on the Data tab. - Select Update Values. - Select Close. Do not display the alert and update links When the Don't display the alert and update links option is selected on a workbook, the selection is ignored. If the person opening the workbook selected the Ask to update automatic links check box, the message appears. If not, links are automatically updated.
https://docs.microsoft.com/en-in/office/troubleshoot/excel/control-startup-message
2021-01-16T06:51:06
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
User Guide - Business & Software Project Administrator Welcome! Your Organization has installed the ProForma for Jira. ProForma allows you to quickly create all of the fields you need on user-friendly forms that embed in Jira issues/requests. Using forms on Jira issues allows you to include all of the fields you need, organize fields in a way works for you and control what information is visible to whom. Building Forms Form Building Permission ProForma forms will not impact your Jira performance or the configuration of your Jira instance. It is perfectly safe to delegate responsibility for creating the forms to the teams that will use the forms. Users with Administer Projects permission can create forms in ProForma. From Scratch To create a form template, go to the forms list (Jira Settings > Manage Apps> Forms or Project Settings > Forms) and click Create Form. A blank form will open in the form builder. Add form elements as desired. From an Existing. From the Template. Form Builder Adding, Copying, Moving and Deleting Form Elements There are multiple ways to add, copy, move and delete elements (formatting options, section or questions) on your forms: Use the Select All keyboard shortcut (Cmd/Ctrl A) to copy and paste entire forms. You can also copy and paste forms and form elements across Jira instances. Question Types & Validation The form builder includes a variety of question types, each with its own validation options: Question Properties When you create a question in the form builder, a sidebar will display allowing you to set the question properties: Label – The text of the question that will appear to the user. Description – A place to provide the user with supplementary information such as examples, recommended formatting or field level instructions. Default Values – The form builder lets you set a default value for any field. Note that default values are not available in the Legacy form builder. Linked Jira Field – A list of Jira Fields that can be linked to the ProForma field. Click here to learn more. Validation Options – Validation options vary by question type. See the table below for details. Question Key – The form builder includes question keys which allow form you to set an identifier for the question, making it easier to find the question in the JSON data structure. Text questions have two additional properties: Regex: Pattern – This allows you to define a pattern of characters for your text fields. Click here to learn more. Regex: Message if input is invalid – The message users will see if their input does not match the defined regex pattern. Choice questions also have two additional properties: Choices – The options a user will be able to choose from Data Connections – A link to an external data source that will populate the choice list. Click here to learn more. Note that in the Legacy form builder, some question types are grouped together. Use the Style dropdown to select the desired option. This includes: Text – Narrow or wide Choice – Single Choice, Multiple Choice, Dropdown Date – Date, Time, Date and Time User Lookup – Single User, Multiple Users Linked Jira Fields Form fields can be linked to Jira fields (standard or custom), making the data available for reports and JQL queries, for triggering automation, including in scripts or organizing a queue. The value entered in the form field will populate the Jira field and vice versa. Once a form is Submitted or Locked (see Form Settings), the contents of the form will not change even if linked Jira fields are updated. This allows Jira to be used as a system of record. Making a Field Available for Linking Currently, ProForma uses the most recently created issue as the template issue type to determine what fields can be linked to. If the field you wish to link to is not listed in the dropdown, temporarily create an issue of the desired issue type. Then return immediately to the form builder and edit your form template. The desired field should now be included as option to link to. Do not delete the issue you created, as this can cause the fields to become unlinked. Note that while this method should work in most cases, it will not work for the following field types: Attachment Custom cascading choice lists (single or multiple) Custom next-gen people fields Due date (Server and Data Center) Labels **Field created by other apps Using Conditional Logic To set a condition for showing or hiding a section of a form: ProForma uses sections to allow you to dynamically show or hide content depending on the user's response to a previous question. To create a conditional section: Create a choice (radio button, checkbox or dropdown) question on your form. (Note that currently multi-select dropdown questions cannot be used to trigger conditional logic.) Add a section anywhere below the choice question. Click on the section divider. The properties panel will give you the option to have the section shown Always, or Conditionally. After selecting Conditionally you’ll be given a list of all the choice questions in the form (including questions that are part of previous conditional sections) which precede the conditional section. Select the appropriate question. A list of the choice options will be shown and you can select the choices that will trigger the section to be shown. Add the relevant questions and content to the section. Data Lookups If your Jira Administrator has set up one or more data connections, you can create choice questions that lookup live data from an external source. To create a data lookup question: Open the relevant form in the form builder. Add a choice question to the form. Enter the question label, description and validation as desired. Use the Data Connection dropdown menu to select the appropriate connection. You will see data from the external source populate the choice list. Click Save. Form Settings When a form opens in the form builder, you will see a tab at the top labeled Settings. You can edit form settings as follows: Name – Set the name of the form. Language – The ProForma form builder supports 25 different languages. Use the dropdown box to select the language. When issue is created – This setting lets you indicate if you would like to: Lock the form to prevent changes after the form has been submitted Automatically generate a PDF of the form and attach it to the issue when the form is submitted Recommended Form – You can also associate the form with a specific issue type. When an agent clicks Add Form on an issue of that type, this form will be listed as a recommended option. Jira Service Management Portal – For Service projects, use this setting to associate the form with a request type, making the form appear on the customer portal. Create Issue – If the Issue Forms feature has been enabled, you can associate the form with a request type or issue type. This will allow users to access the form directly from the Jira navigation bar. When the users submits the form a Jira issue will automatically be generated. You can also enable some Advanced options that govern how the form will behave. ProForma forms that are OPEN can be edited by the customer. When a customer Submits a form, they are signaling that the form is complete. SUBMITTED forms cannot be edited unless they are reopened by an Agent. Leave form open – If this options is selected, then the form will stay in an open state, allowing the customer to continue to add to and edit their responses after the request is created. Validation will be enforced when the request is created. Ignore validation – If you choose to leave the form open, then two more options will become available. Checking the Ignore validation option means the customer will be able to create the request even if required fields are left blank or their responses do not meet validation criteria. Validation rules will not be enforced until the the form is submitted. Hide submit button – If you would like a form to stay perpetually open, or to only allow a Service Management Agent to determine when the form is complete, you can choose to hide the Submit button. The form will remain in an open state and will be editable from the portal. Only an agent will be able to submit the form. Form Automation Form automation allows your ProForma forms to interact with your Jira workflow. You can set automation rules to: Automatically add a ProForma form when an issue is transitioned to a new status Automatically transition an issue when a form is submitted Prevent an issue from being transitioned if a form is not attached to the issue, or if the forms on the issue are not submitted. Give your rule a name. a form being submitted, or by the issue transitioning to a new status, another dropdown box will ask you to indicate the issue the workflow, these validators can only. To set a validator: Go to a Jira Settings > Issues > Workflows Click Edit next to the name of the indicated workflow Click on the indicated transition and select the Validators tab Click Add validator and select the appropriate ProForma validator from the list. Click Add. Using ProForma Forms on Jira Issues/Requests Team members can add, delete, edit or fill out forms on Jira issues/requests. This guide will provide an overview of using ProForma for Jira. For more detailed information, see ProForma documentation. Viewing Forms If your Jira instance uses New Issue View, you will need to click on the ProForma icon in order to see the forms section. Form States A form on a Jira issue/request will be in one of three states: Open – Open forms have not yet been submitted or have been reopened to make edits. A user can fill out or edit open forms. Submitted – A user can submit a form upon completion. Submitted forms will need to be reopened in order to be edited. Locked – Locked forms can only be reopened and edited by a user with Administer Project permissions. Add a Form to an Issue or Request ProForma forms embed in Jira issues and Jira Service Desk requests. You can add as many ProForma forms to an issue/request as needed. To add a form to an issue/request: When viewing an issue, expand the Forms section if necessary. Click Add Form. A dropdown menu will show a list of available forms. Forms that have been recommended for the particular issue type will be shown at the top of the list. Select the desired form. Click Add. The form will now be included on the issue. Submit a Form Submitting a form demonstrates that the user or customer has completed the form to their satisfaction. Forms cannot be submitted unless all validation requirements have been met. To a submit a form on an existing issue: When viewing the issue, expand the Forms section if necessary. Click on the name of the indicated form. Click Submit, or if the form has been edited, click Save and Submit. A dialogue box will open to confirm the that you want to submit the form. Reopen a Form There may be times when you wish to re-open a form that already been submitted so that you, or a customer using the JSM portal can make edits to the form contents. To reopen a form: When viewing an issue, expand the Forms section if necessary. Click on the name of the indicated form. The form will open in the viewer. Click on Reopen button at the top of the viewer. A dialogue box will open to confirm that you want to reopen the form. Note that if the form has been set a form to lock upon submission, administer project permissions will be required to reopen the form. Edit the Contents of a Form To edit an open form: When viewing an issue, expand the Forms section if necessary. Click on the name of the indicated form. The form will open in the viewer. Click on the Edit button. A blue banner across the top of the form will indicate that you are in Edit Mode. Make the necessary changes. Click Save and Submit. Alternatively, you can Save your work in progress and submit the form at a later time. To edit a submitted form: When viewing an issue, expand the Forms section if necessary. Click on the name of the indicated form, The form will open in the viewer. Click on the Reopen button and confirm that you want to reopen the form. Click on the Edit button. A blue banner across the top of the form will indicate that you are in Edit Mode. Make the necessary changes. Click Save and Submit. Alternatively, you can Save your work in progress and submit the form at a later time. Download a PDF or a XLSX of a Form Every form included on an issue/request can be downloaded as a PDF or XLSX file. To download a PDF or XLSX file a form on an issue: When viewing an issue, expand the Forms section if necessary. Click on the ... button for the indicated form. Click on Download PDF (vertical listing of field responses), Download Rich PDF (Formatted PDF of form responses) or Download XLSX. Create a Jira Issue from a Form If the Issue Forms feature is enabled and a form(s) has been properly configured , you can create Jira issues directly from a ProForma form. To create an issue from a form: Go to the Jira navigation bar. Click on Issue Forms. Use the Change link to select the relevant project. Select the relevant form from the form list, or search for the form using the search bar. Fill out the form. Click Create. A Jira issue will be created.. To link to an Issue Form Ensure the Issue Forms configuration is enabled. On the Settings tab of the form, use the toggle to enable the form as an Issue Form. Select the appropriate Issue Type. Click on the Copy button to copy a link to your clipboard. Reporting There are two ways to report data from ProForma forms: Linked Jira Fields When creating a form, Project Administrators or agents can link form fields to Jira fields. Data in linked Jira fields is available for Jira reports or JQL queries. Form Responses Spreadsheet Downloading a spreadsheet allows you to easily search, sort and aggregate all responses that have been submitted for a form. To download a spreadsheet of all the form responses: Navigate to Project Settings. Click on Forms from the left hand navigation bar. You will see a list of all forms in the project. Click on the … button to the right of the form name and select Responses. A spreadsheet rendering all of the responses to each from field will be downloaded.
https://docs.thinktilt.com/proforma/User-Guide---Business-&-Software-Project-Administrator.1310392654.html
2021-01-16T05:20:37
CC-MAIN-2021-04
1610703500028.5
[]
docs.thinktilt.com
Migrate 11.x audit logs The audit log export utility enables you to export the audit log data from the 11.x Enterprise Control Room to a JSON file. You must paste the JSON file in the Enterprise A2019 repository and then migrate the audit log data. Prerequisites Ensure you have the AAE_Admin role or the Manage Migration permission. ProcedureBased on the current version of your Enterprise Control Room, choose one of the following ways to export and migrate the audit log data: For Version 11.3 and later: Download the latest version of the audit log export utility from the Automation Anywhere Support site. Important: You must download the utility on the machine on which the Version 11.3 Enterprise Control Room is installed. If the Enterprise Control Room is installed in Cluster mode, you can download the utility on any of the node available in the cluster. Navigate to the Automation Anywhere Downloads page: A-People Downloads page (Login required). Click the Automation Anywhere Enterprise A2019 link. Click Installation Setup, and then click the AAE_Export_Audit_Log_<version_number> zip file, for example, AAE_Export_Audit_Log_A2019.16.zip. Extract the files from the zip file you have downloaded. Open the Windows command prompt. Change the working directory to AAE_Export_Audit_Log_<version_number>, for example, AAE_Export_Audit_Log_A2019.16, and enter the following command: .\\bin java -jar AAE_Export_Audit_Log_<version_number>.jar export.path="OUTPUT LOCATION" es.url="ELASTIC SEARCH URL" Update the following values in the command: ELASTIC SEARCH URL: Replace the text with the Elasticsearch URL that contains the audit logs you want to migrate. For example,. Information about the port used by Elasticsearch is available in the elasticsearch.properties file, which is located at C:\Program Files\Automation Anywhere\Enterprise\config\. The value available in the elasticsearch.port attribute indicates the port used for Elasticsearch. OUTPUT LOCATION: Replace the text with the location where you want to save the output. Ensure that the folders mentioned in the location exist. Ensure that you specify the location of the output folder in double quotation marks. For example, java -jar AAE_Export_Audit_Log_A2019.16.jar export.path="C:\Migration\Audit Log" The utility generates the es_export.json file at the output location you specified. The generated JSON file contains a maximum of 10000 records. If there are more than 10000 records available in the audit data, the utility generates multiple JSON files at the same location and adds a suffix such as es_export_1. Create the migration\es-data folders in the Enterprise A2019 repository. Copy the JSON file available at the output location. If the utility has generated multiple JSON files, you must copy all the files. Paste the JSON file in the Server Files\migration\es-data folder. Log in to your Enterprise A2019 staging environment. Go to Administration > Migration. Click Migrate Audit log from the Migrate bots menu on the top-right of the screen. The system starts retrieving and migrating the audit log data from the es_export.json file and uploading it to Enterprise A2019 Elasticsearch. The entries are displayed in the Audit Log tab of Enterprise A2019.After the Audit log migration is complete, navigate to Administration > Migration page to view the status of the audit log migration and other related information. You can also filter the audit log migration results based on the migration Type as the Audit log migration.Click the View migration option for each audit migration instance to see additional information such as the audit log file path and the reasons why individual es_export.json files were skipped or were not successfully migrated when migrating the audit logs.View migration reports For version 11.2 and earlier: Log in to your Enterprise A2019 staging environment. Go to Administration > Migration. Click Migrate Audit data from the Migrate bots menu on the top-right of the screen. The system starts retrieving and migrating the audit data from the database and uploading it to the Enterprise A2019 Elasticsearch. The entries are displayed in the Audit Log tab of Enterprise A2019. Related tasksView migration reports
https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/migration/migrate-11-x-audit-logs.html
2021-01-16T06:02:48
CC-MAIN-2021-04
1610703500028.5
[]
docs.automationanywhere.com
Important You are viewing documentation for an older version of Confluent Platform. For the latest, click here. Troubleshoot KSQL issues¶ This guide contains troubleshooting information for many KSQL issues. SELECT query does not stop¶ KSQL queries streams continuously and must be stopped explicitly. In the CLI, use Ctrl-C to stop non-persistent queries, like SELECT * FROM myTable. To stop a persistent query created by CREATE STREAM AS SELECT or CREATE TABLE AS SELECT, use the TERMINATE statement: TERMINATE query_id;. For more information, see TERMINATE. SELECT query returns no results¶ If a KSQL query returns no results and the CLI hangs, use Ctrl-C to stop the query and then review the following topics to diagnose the issue. Verify that the query is based on the correct source topic¶ Use the DESCRIBE EXTENDED statement to view the Apache Kafka® source topic for the stream. For example, if you have a pageviews stream on a Kafka topic named pageviews, enter the following statement in the CLI: DESCRIBE EXTENDED PAGEVIEWS; Example output showing the source topic: Name : PAGEVIEWS [...] Kafka topic : pageviews (partitions: 1, replication: 1) Verify that the source topic is populated with data¶ Your query results may be empty because the Kafka source topic is not populated with data. Use the kafkacat to consume messages and print a summary. docker run --network ksql-troubleshooting_default --tty --interactive --rm \ confluentinc/cp-kafkacat \ kafkacat -b kafka:39092 \ -C -t pageviews \ -o beginning Example output showing an empty source topic: % Reached end of topic pageviews [0] at offset 0 Verify that new messages are arriving at the source topic¶ The topic is populated if the kafkacat prints messages. However, it may not be receiving new messages. By default, KSQL reads from the end of a topic. A query does not return results if no new messages are being written to the topic. To check your query, you can set KSQL to read from the beginning of a topic by assigning the auto.offset.reset property to earliest using following statement: SET 'auto.offset.reset'='earliest'; Example output showing a successful change: Successfully changed local property 'auto.offset.reset' from 'null' to 'earliest' Run your query again. You should get results from the beginning of the topic. Note that the query may appear to hang if the query reaches the latest offset and no new messages arrive. The query is simply waiting for the next message. Use Ctrl-C to stop the query. Verify that the query predicate is not too restrictive¶ If the previous solutions do not resolve the issue, your query may be filtering out all records because its predicate is too restrictive. Remove WHERE and HAVING clauses and run your query again. Verify that there are no deserialization errors¶ KSQL will not write query results if it is not able to deserialize message data. Use the DESCRIBE EXTENDED statement to check that the VALUE_FORMAT of the stream matches the format of the records that kafkacat prints for your topic. Enter the following statement in the CLI: DESCRIBE EXTENDED pageviews; Example output: Name : PAGEVIEWS [...] Value format : DELIMITED Example output from kafkacat for a DELIMITED topic: 1541463125587,User_2,Page_74 1541463125823,User_2,Page_92 1541463125931,User_3,Page_44 % Reached end of topic pageviews [0] at offset 1538 1541463126232,User_1,Page_28 % Reached end of topic pageviews [0] at offset 1539 1541463126637,User_7,Page_64 % Reached end of topic pageviews [0] at offset 1540 1541463126786,User_1,Page_83 ^C Check for message processing failures for serialization errors. For example, if your query specifies JSON for the VALUE_FORMAT, and the underlying topic is not formatted as JSON, you’ll see JsonParseException warnings in the KSQL server log. For example: ') KSQL CLI does not connect to KSQL server¶ The following warning may occur when you start the CLI. **************** WARNING ****************** Remote server address may not be valid: Error issuing GET to KSQL server Caused by: java.net.SocketException: Connection reset Caused by: Connection reset ******************************************* A similar error may display when you create a KSQL query using the CLI. Error issuing POST to KSQL server Caused by: java.net.SocketException: Connection reset Caused by: Connection reset In both cases, the CLI is not able to connect to the KSQL server. Review the following topics to diagnose the issue. Verify that the KSQL CLI is using the correct port¶ By default, the server listens on port 8088. See Starting the KSQL CLI for more information. Verify that the KSQL server configuration is correct¶ In the KSQL server configuration file, check that the list of listeners has the host address and port configured correctly. Search for the listeners setting in the file and verify it is set correctly. listeners= See Starting KSQL Server for more information. Verify that there are no port conflicts¶ There may be another process running on the port that the KSQL server listens on. Use the following command to get the Process ID (PID) for the process running on the port assigned to the KSQL server. The command below checks the default 8088 port. netstat -anv | egrep -w .*8088.*LISTEN Example output: tcp4 0 0 *.8088 *.* LISTEN 131072 131072 46314 0 In this example, 46314 is the PID of the process that is listening on port 8088. Run the following command to get information about process 46314. ps -wwwp 46314 Example output: io.confluent.ksql.rest.server.KsqlServerMain ./config/ksql-server.properties If the KsqlServerMain process is not shown, a different process has taken the port that KsqlServerMain would normally use. Search for the listeners setting in the KSQL server configuration file and get the correct port. Start the CLI using the correct port. See Starting KSQL Server and Starting the KSQL CLI for more information. Cannot create a stream from the output of a windowed aggregate¶ Window aggregation is not currently supported in KSQL. KSQL does not clean up internal topics¶ Make sure that your Kafka cluster is configured with delete.topic.enable=true. See deleteTopics for more information. Replicated topic with Avro schema causes errors¶ The Confluent Replicator renames topics during replication. If there are associated Avro schemas, they are not automatically matched with the renamed topics after replication completes. Using the CREATE STREAM statement fails with a deserialization error. For example: CREATE STREAM pageviews_original (viewtime bigint, userid varchar, pageid varchar) WITH (kafka_topic='pageviews.replica', value_format='AVRO'); Example output with a deserialization error: Avro schemas manually against the replicated subject name for the topic. For example: # Original topic name = pageviews # Replicated topic name = pageviews.replica curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data "{\"schema\": $(curl -s | jq '.schema')}" Snappy encoded messages don’t decompress¶ If you don’t have write access to the /tmp directory because it’s set to noexec, you need to pass in a directory path for snappy that you have write access to: -Dorg.xerial.snappy.tempdir=/path/to/newtmp Check for message processing failures¶ You can check the health of a KSQL query by viewing the number of messages that it has processed and counting how many processing failures have occurred. Use the DESCRIBE EXTENDED statement to see total-messages and failed-messages-per-sec to get message processing metrics. Note that the metrics are local to the server where the DESCRIBE statement runs. DESCRIBE EXTENDED GOOD_RATINGS; Example output: [...] Local runtime statistics ------------------------ messages-per-sec: 1.10 total-messages: 2898 last-message: 9/17/18 1:48:47 PM UTC failed-messages: 0 failed-messages-per-sec: 0 last-failed: n/a (Statistics of the local KSQL server interaction with the Kafka topic GOOD_RATINGS) An increasing number of failed-messages may indicate problems with your query. See deserialization errors for typical sources of processing failures. Check the KSQL server logs¶ Check the KSQL server logs for errors using the command: confluent log ksql-server KSQL writes most of its log messages to stdout by default. Look for logs in the default directory at /usr/local/logs or in the LOG_DIR that you assigned when starting the CLI. See Starting the KSQL CLI for more information. If you installed the Confluent Platform using RPM or Debian packages, the logs are in /var/log/confluent/. If you’re running KSQL using Docker, the output is in the container logs, for example: docker logs <container-id> docker-compose logs ksql-server
https://docs.confluent.io/5.0.0/ksql/docs/troubleshoot-ksql.html
2021-01-16T05:23:29
CC-MAIN-2021-04
1610703500028.5
[]
docs.confluent.io
How-to articles, tricks, and solutions about GIT COMMIT the Previous Branch in Git This tutorial provides an easy way of checking out the previous branch. Find a shortcut to the command line to save your time and an experimental solution. How to Combine Multiple Git Commits into One Don’t overload your git log with meaningless commits. Combine multiple commits into one with the help of our tutorial. See the explanation with codes. How to Copy a Version of a Single File from One Git Branch to Another orial and find several solutions to the problem of copying a version of a single file from one branch to another. Also, read important tips. Pull the Latest Git Submodule Learn more about git submodules in this short tutorial. Here you will also find out how to pull the latest git submodule in a relatively fast and easy way. How to Rebase Git Branch Don’t waste your time struggling with Git. Here you can find the three commands that you should run to rebase your branch. See the explanation with codes.
https://www.w3docs.com/snippets-tags/git%20commit
2021-01-16T05:02:47
CC-MAIN-2021-04
1610703500028.5
[]
www.w3docs.com
Flow. A user can request a free trial for FlowJo and SeqGeq through FlowJo Portal. The FlowJo free trial is still managed through hardware ID and serial number. However, in FlowJo Portal users can request a SeqGeq 60-day trial right in FlowJo Portal. Navigate to the appropriate tab within your Portal account, and click the button to request a trial: Note: SeqGeq trials become active at the time you click that trial button. To activate a license for SeqGeq once your trial has expired, you can request a quote from our sales and licensing experts, or write to [email protected]. License Status FlowJo Portal gives researchers the ability to monitor the status of their license under the License button on the left side menu. Resources At the bottom of the FlowJo Portal environment you’ll notice a set of links that make educational and interesting materials easily accessible. For questions or concerns regarding your FlowJo Portal account, don’t hesitate to reach out to our team at [email protected].
https://docs.flowjo.com/portal/my-account/
2021-01-16T05:04:14
CC-MAIN-2021-04
1610703500028.5
[array(['http://docs.flowjo.com/wp-content/uploads/sites/10/2017/12/FlowJo-Portal-Product-Tabs.png', None], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/10/2017/12/FlowJo-Portal-Free-Trial-SeqGeq.png', None], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/10/2017/12/FlowJo-Portal-License-Type.png', None], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/10/2017/12/FlowJo-Portal-Resources.png', None], dtype=object) ]
docs.flowjo.com
What was decided upon? (e.g. what has been updated or changed?) a necessary evil Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) We have a love hate relationship with Captcha, but if it’s important for security reasons then we can live with it. If you log in, will you need to do the captcha? Who decided this? (e.g. what unit/group) User Interface When was this decided? Additional information or notes.
https://docs.library.vanderbilt.edu/2018/10/15/captcha/
2021-01-16T06:25:54
CC-MAIN-2021-04
1610703500028.5
[]
docs.library.vanderbilt.edu
Using Spark 2 from Python Cloudera Machine Learning supports using Spark 2 from Python via PySpark. This topic describes how to set up and test a PySpark project. PySpark Environment Variables The default Cloudera Machine Learning engine currently includes Python 2.7.17 and Python 3.6.9. To use PySpark with lambda functions that run within the CDH cluster, the Spark executors must have access to a matching version of Python. For many common operating systems, the default system Python will not match the minor release of Python included in Machine Learning. To ensure that the Python versions match, Python can either be installed on every CDH host or made available per job run using Spark’s ability to distribute dependencies. Given the size of a typical isolated Python environment, Machine Learning includes Create a Project from a Built-in Template. To run a PySpark project, navigate to the project's overview page, open the workbench console and launch a Python session. For detailed instructions, see Native Workbench Console and Editor. to the Spark documentation: Running Spark Application.
https://docs.cloudera.com/machine-learning/1.0/spark/topics/ml-pyspark.html
2021-01-16T05:13:13
CC-MAIN-2021-04
1610703500028.5
[]
docs.cloudera.com
0013 nrd-kernels - Title: nrd-kernels - Authors: Antioch Peverell - Start date: Mar 24, 2020 - RFC PR: Edit if merged: mimblewimble/grin-rfcs#47 - Tracking issue: mimblewimble/grin#3288 Summary Grin supports a limited implementation of "relative timelocks" with "No Recent Duplicate" (NRD) transaction kernels. Transactions can be constructed such that they share duplicate kernels. An NRD kernel instance is not valid within a specified number of blocks relative to a prior duplicate instance of the kernel. A minimum height difference must therefore exist between two instances of an NRD kernel. This provides the relative height lock between transactions. Motivation Relative timelocks are a prerequisite for robust payment channels. NRD kernels can be used to implement a revocable channel close mechanism. A mandatory revocation period can be introduced through a relative timelock between two transactions. Any attempt to close an old invalid channel state can be safely revoked during the revocation period. Recently, Ruben Somsen announced a design for Succinct Atomic Swaps (SAS) reducing the number of on-chain transactions required to implement the swap. This design uses a combination of relative locks and adaptor signatures. SAS would appear to be compatible with Grin/MW but with some caveats, namely the need for an additional transaction kernel as the NRD lock and the adaptor signature cannot co-exist on the same kernel. This is discussed in Unresolved questions below. Community-level explanation A minimum distance in block height is enforced between successive duplicate instances of a given NRD kernel. This can be used to enforce a relative lock height between two transactions. A transaction containing an NRD kernel will not be accepted as valid within the specified block height relative to any prior instance of the NRD kernel. Transactions can be constructed around an existing transaction kernel by introducing either an additional kernel or in some cases by simply adjusting the kernel offset. This allows NRD kernels to be used across any pair of transactions. The NRD kernel implementation aims for simplicity and a minimal approach to solving the problem of "relative locks". Grin does not support a general solution for arbitrary length locks between arbitrary kernels. The implementation is limited in scope to avoid adversely impacting performance and scalability. References between duplicate kernels are implicit, avoiding the need to store kernel references. Locks are limited in length to recent history, avoiding the need to inspect the full historical kernel set during verification. Reference-level explanation An NRD kernel is not valid within a specified number of blocks of a previous duplicate instance of the same NRD kernel. We define duplicate here as two NRD kernels sharing the same public excess commitment. NRD kernels with different excess commitments are not treated as duplicates. An NRD kernel and a non-NRD kernel (plain kernel, coinbase kernel etc.) sharing the same excess commitment are not treated as duplicates. An NRD kernel has an associated relative lock height. For a block B containing this kernel to be valid, no duplicate instance of the kernel can exist in the last RH blocks (up to and including B), where RH is the relative lock height. For example, a transaction containing an NRD kernel with relative lock height 1440 (24 hours) is included in a block at height 1000000. This block is only valid if no duplicate instance of this kernel exists in any block from height 998561 (h-1439) to height 1000000 (h-0) inclusive. A duplicate instance is permitted at height 998560 (h-1440), with the transaction seen as valid. If no duplicate instance of the kernel exists within this range then the lock criteria is met. A kernel can be delayed by the existence of a previous kernel. The non-existence of a previous kernel has no impact on the lock criteria. Note that this implies the first singular occurrence of any NRD kernel meets the lock criteria trivially as it cannot, by definition, be locked by a previous kernel. Thus, the relative lock defaults to "fail open" semantics. Each node maintains an index of recent NRD kernels to enable efficient checking of NRD relative lock heights. Note we only need to index NRD locks and we only need to index those within recent history. Relative locks longer than 7 days are not valid. This is believed to be sufficient to cover all proposed use cases. The minimum value for a relative lock height is 1 meaning a prior instance of the kernel can exist in the previous block for the lock criteria to be met. An instance of the NRD kernel in the same block will invalidate the block as the lock criteria will not be met. NRD lock heights of 0 are invalid and it is never valid for two duplicate instances of the same NRD kernel to exist in the same block. It follows that two transactions containing duplicate instances of the same NRD kernel cannot be accepted as valid in the transaction pool concurrently. Current txpool behavior is "first one wins" semantics when receiving transactions and this will also apply to transactions containing NRD kernels. We plan to revisit this in a future "fee" RFC and plan to investigate the feasibility of introducing "replace by fee" semantics at that time. Grin supports "rewind" back through recent history to handle fork and chain reorg scenarios. 1 week of full blocks are maintained on each node and up to 10080 blocks can be rewound. To support relative lock heights each node must maintain an index over sufficient kernel history for an additional 10080 blocks beyond this rewind horizon. Each node should maintain 2 weeks of kernel history in the local NRD kernel index. This will cover the pathological case of a 1 week rewind and the validation of a 1 week long relative lock beyond that. The primary use case is for revocable payment channel close operations. We believe a 7 day period is more than sufficient for this. We do not require long, extended revocation periods and limiting this to a few days is preferable to keep the cost of verification low. The need for these revocable transactions to be included on chain should be low as these are only required in a non-cooperative situation but where required we want to minimize the cost of verification which must be performed across all nodes. The following kernel variants are supported in Grin - - Plain - Coinbase - HeightLocked - NoRecentDuplicate These are implemented as kernel "feature" variants - pub enum KernelFeatures { /// Plain kernel (default for Grin txs). Plain = 0, /// A coinbase kernel. Coinbase = 1, /// A kernel with an explicit absolute lock height. HeightLocked = 2, /// A relative height locked NRD kernel. NoRecentDuplicate = 3, } Each kernel variant includes feature specific data - # Plain { "fee": 8 } # Coinbase { # empty } # Height Locked { "fee": 8, "lock_height": 295800 } # No Recent Duplicate (NRD) { "fee": 8, "relative_height": 1440, } Note that NRD kernels require no additional data beyond that required for absolute height locked kernels. The reference to the previous kernel is implicit and based on a duplicate NRD kernel excess commitment. The maximum supported NRD relative_height is 10080 (7 days) and the relative height can be safely and conveniently represented as a u16 (2 bytes). This differs from absolute lock heights where u64 (8 bytes) is necessary to specify the lock height. The minimum supported NRD relative_height is 1 and a value of 0 is not valid. Two duplicate instances of a given NRD kernel cannot exist simultaneously in the same block. There must be a relative height of at least 1 block between them. Nodes on the Grin network currently support two serialization versions for transaction kernels - V1 "fixed size kernels" In V1 all kernels are serialized to the same "fixed" number of bytes: feature (1 byte) | fee (8 bytes) | additional_data (8 bytes) | excess commitment (33 bytes) | signature (64 bytes) 03 | 00 00 00 00 01 f7 8a 40 | 00 00 00 00 00 00 05 A0 | 08 b1 ... 22 d8 | 33 11 ... b9 69 NRD kernels use the last 2 bytes of feature specific data for the relative lock height as big-endian u16. The first 6 bytes of feature specific data must be all zero: 00 00 00 00 00 00 05 A0 Note: absolute lock height (u64) and relative lock height (u16) have identical serialization in practice. V1 is supported for backward compatibility with nodes that do not support V2 "variable size kernels". V2 "variable size kernels" V2 kernels have been supported since Grin v2.1.0 and V2 supports the notion of "variable size" kernels. See RFC-0005 "Varible Size Kernels" for details of this. NRD kernels include 8 bytes for the fee as big-endian u64 and 2 bytes for the relative lock height: feature (1 byte) | fee (8 bytes) | relative_height (2 bytes) | excess commitment (33 bytes) | signature (64 bytes) 03 | 00 00 00 00 00 6a cf c0 | 05 A0 | 09 4d ... bb 9a | 09 c7 ... bd 54 In V2 relative lock height is 2 bytes as big-endian u16: 05 A0 Note: the serialization strategy is used for both network "on the wire" serialization of both transactions and full blocks, and local storage, both the database for full blocks and the kernel MMR backend files. Version negotiation occurs during the initial peer connection setup process and determines which version is used for p2p message serialization. If a node uses V2 serialization for the kernel MMR backend file then it will provide a V2 txhashset based on these underlying files. Kernel Signature Message Every kernel contains a signature proving the excess commitment is a commitment to zero. The message being signed includes the features, fee and other associated data to prevent malleability of the transaction kernel and the overall transaction. The transaction fee cannot be modified after signing, for example. For NRD kernels the message being signed is constructed as follows with the relative lock height serialized as 2 bytes. Hash(feature | fee | relative_height) Hash(03 | 00 00 00 00 01 f7 8a 40 | 05 A0) No additional data is introduced with NRD kernels beyond the 2 bytes representing the relative lock height. There is no opportunity to include arbitrary data. Any additional kernel included in a transaction is itself still a fully valid kernel. There is no explicit reference necessary that could be misused to include arbitrary data. An additional NRD kernel in a transaction will increase the "weight" of the transaction by this single additional kernel and allows for a simple way to deal with additional fees. A transaction with an additional kernel must provide additional fees to cover the additional "weight". NRD kernels cannot be added for free. Note that in some limited situations it is possible to replace a kernel with an NRD kernel. If the NRD lock can be introduced without adding an additional kernel then the fee does not have to be increased and the lock is effectively added for free. A transaction kernel consists of an excess commitment and an associated signature showing this excess is indeed a commitment to 0. A transaction with a single kernel can always be represented as a transaction with multiple kernels, provided the kernels excess commitments sum to the correct total excess. Given an existing NRD kernel with excess commitment - - r'G + 0H And a transaction with single excess commitment - - rG + 0H This transaction can be represented as a pair of kernels with excess commitments - - rG + 0H = (r'G + 0H) + (r-r'G + 0H) We take advantage of this to allow an arbitrary NRD kernel to be included in any transaction at construction time. Additionally the kernel offset included in each transaction can be used in certain situations to allow the replacement of a single transaction kernel with an NRD kernel without needing to introduce an additional kernel. Given an existing NRD kernel with excess commitment - - r'G + 0H And a transaction with single excess commitment and kernel offset - - rG + 0H, o This transaction can be rewritten to use the NRD kernel - - r'G + 0H, (o+r-r') These two "degrees of freedom", introducing multiple kernels and adjusting the kernel offset, allowing for flexibility to introduce an NRD kernel in a variety of ways. - Introduce NRD kernel to transaction, compensate with additional kernel. - Introduce NRD kernel to transaction, compensate with kernel offset. Payment Channel Implementation NRD kernels can be used to delay alternate "branches" of conflicting transactions, enabling a payment channel implementation. A payment channel is represented as a single multi-party output. Each channel state transition is represented as a pair of "close" and "settle" transactions with an NRD kernel enforcing a delay between them. Funds are held in an intermediate multi-party output while delayed. The NRD kernel is reused across both transactions by adjusting kernel offsets. X -> Y, Knrd_a \ Y -> [Za, Zb], Knrd_a Alice closes the channel X with their "close" transaction. After a delay Alice can "settle" the funds out to Alice and Bob. Attribution of "close" and "settle" transactions for each channel state is provided through endpoint specific NRD kernels. This allows the other party to "revoke" old invalid state without the NRD delay. Each channel state transition involves a new pair of "close" and "settle" transactions for each participant along with a shared "revoke" transaction. The "revoke" transaction simply spends funds back to the channel output and a plain kernel suffices. [Za, Zb] -> X, Krev Alice attempts to close old invalid state (Y1): X -> Y1, Knrd_a1 Bob can immediately revoke and close current state (Y1 -> Y2): Y1 -> ~[Za, Zb]~, Knrd_b1 \ ~[Za, Zb]~ -> X, Krev_1 \ X -> Y2, Knrd_b2 \ => Y1 -> Y2, [Knrd_b1, Krev_1, Knrd_b2] Bob publishes only the final cut-through multi-kernel transaction (Y1 -> Y2). Bob's individual settle transaction is not revealed. Neither party can self-revoke without introducing the NRD delay. The other party always has the opportunity to revoke first. Self-revocation cannot be used to lock funds up indefinitely. Rollout/Deployment (HF3) The following rules will be enforced during rollout as part of HF3 - Assumptions: - HF3 will occur at height 786,240. - Blocks at height >= 786,240 will have block version >= 4. Block Specific Rules: - A block containing NRD kernel(s) is only be valid if block version >= 4. - A block containing NRD kernel(s) is only valid if all defined relative lock height rules are met. - Two duplicate NRD kernel instances cannot exist in the same block. Transaction Specific Rules: - A transaction containing NRD kernel(s) will not be accepted by the local txpool/stempool unless chain head version >= 4. - A transaction containing NRD kernel(s) will not be relayed or broadcast to other nodes unless chain head version >= 4. - A transaction containing NRD kernel(s) will not be accepted by the local txpool/stempool unless it meets the defined relative lock height rule in the next block. - A transaction containing NRD kernel(s) will not be relayed or broadcast to other nodes unless it meets the defined relative lock height rule in the next block. - Two duplicate NRD kernel instances cannot exist in the txpool/stempool concurrently. Weights & Fees For the purpose of block weight calculations, each kernel is treated as 3 "weight units" where each unit is approximately 32 bytes. This covers the excess commitment and the associated signature common across all kernel variants. The additional 2 bytes of "relative height" on NRD kernels are ignored for the purposes of calculating block weight. For the purpose of minimum transaction relay fees all kernels are treated as 1 "fee unit" with each unit being 1 milligrin. We plan to revisit the entire transaction fee structure in a future RFC. Kernel variants may affect the transaction fee calculations differently in the future. Drawbacks NRD kernels are a limited and restricted form of "relative locks" between kernels. These locks are limited to a period of 7 days and "fail open" beyond that window. This approach meets the requirements for limited revocable payment channel operations but there are likely to be use cases where this approach is not sufficient or unsuitable. While it would be nice to provide a fully general purpose solution that would allow arbitrary locks to be implemented, it does appear to be hard, if not impossible, to do this in Grin/MW. Rationale and alternatives Referencing historical data in Grin and in Mimblewimble in general is difficult due to the possibility of pruning historical data. It is not possible to reference old outputs once they are spent. Historical validators must have access to any referenced data to validate consensus rules. This leaves transaction kernels as the only available data to be referenced. While arbitrary historical kernels can be referenced this is not desirable as we do not want to impose additional constraints on nodes, requiring them to maintain historical data that would otherwise be prunable. An earlier design iteration was "No Such Kernel Recently" (NSKR) locks. Where NRD references were implicit, with duplicate kernel excess commitments, NSKR kernels referenced prior kernels explicitly. These explicit references were problematic for several reasons - - Additional overhead, both local storage and network traffic due to the explicit references. - Optimization by referencing prior kernel based on MMR position introduced a dependency on external data (kernels can no longer be validated in isolation). - Permitting non-existence of references due to limited window of history, opened up a vector for "spam" where arbitrary data could be used in place of a valid reference. To prevent "spam" a signature can be used to verify the reference was indeed a valid commitment. By including a signature along with the commitment, the reference is effectively a full transaction kernel. The idea of using Merkle proofs to verify inclusion of a historical referenced kernel in the kernel MMR was also considered. This gets expensive both in terms of transaction size and increased verification cost. There is also the problem of position not yet being known at transaction creation time, necessitating Merkle proof generation at block creation time by miners which adds complexity. Prior art Bitcoin allows transaction inputs to be "encumbered" with a relative locktime based on the sequence number field. This restricts an input from spending the associated output until a certain number of blocks have passed. BIP112 describes the CHECKSEQUENCEVERIFY opcode in Bitcoin and BIP68 describes the underlying consensus changes around the sequence number field. - Timelock#CheckSequenceVerify (bitcoin wiki) - CheckSequenceVerify (bitcoin wiki) - Bitcoin BIP-0068 - Bitcoin BIP-0112 Note that relative locks in Bitcoin are based on transaction inputs and outputs, with inputs only able to spend outputs once confirmed beneath a certain number of blocks. We cannot do this in Grin due to the pruning of old data. Spent outputs will eventually be removed and cannot be relied upon as part of the validation process. Bitcoin encumbers individual outputs whereas in Grin we encumber transactions via the constituent transaction kernels. Unresolved questions Some investigation is still needed around the conditions necessary to allow a kernel to simply be reused with an adjustment to the kernel offset and where an additional kernel is necessary. An adjustment to the kernel offset will expose the private excess under certain conditions and cannot be done safely for all transactions. One outstanding question is what use cases are not covered by NRD kernels. We believe them to be sufficient for the revocable payment channel close mechanism. But they may not be sufficient for all use cases. Succinct Atomic Swaps (SAS) describes the use of both relative locks and adaptor signatures to implement atomic swaps with only two on-chain transactions. The secret associated with the adaptor signature is swapped to allow funds to be claimed while the relative lock locks funds prior to a refund being claimed. We note that NRD kernels and adaptor signatures are not directly compatible as a prior instance of an NRD kernel would have revealed the secret associated with the adaptor signature. That said we can produce transactions with multiple kernels and we can use this to isolate the adaptor signature on a separate kernel alongside an NRD kernel. It is an unresolved question if there is a way to modify the SAS protocol and avoid the need for these additional kernels in Grin/MW. References - Original "triggers" mailing list post by Ruben Somsen - "No Such Kernel Recently" post by John Tromp - "Duplicate Kernels" post by Antioch - "NRD based payment channel" post by John Tromp - Earlier NSKR based payment channel design) - Timelock#CheckSequenceVerify (bitcoin wiki) - CheckSequenceVerify (bitcoin wiki) - Bitcoin BIP-0068 - Bitcoin BIP-0112 - Succinct Atomic Swaps by Ruben Somsen - Scriptless Scripts - RFC-0005 "Variable Size Kernels"
https://docs.grin.mw/grin-rfcs/text/0013-nrd-kernels/
2021-01-16T05:41:26
CC-MAIN-2021-04
1610703500028.5
[]
docs.grin.mw
Amazon Cloud¶ Intended audience: beginner users, beginner developers, beginner administrators TangoBox 9.3 AMI¶ The 9.3 release is also available as an AMI image on the AWS. The related AMI-ID is: ami-0a2e0cddaa68be39f The image contains all the features of the original TangoBox. It requires at least 2 vCPUs and 4GB of memory, which corresponds to t2.medium instance type. The running costs apply according to AWS pricing. An instance may be accessed with both SSH access or Remote Desktop. SSH access¶ For security reason, the SSH does not accept password authentication. To SSH login to your instance, you need a key-pair configured. The AWS web console asks for the key-pair during the launch process. You may either select exiting or create a new key-par: Then, you can use the web console Connect feature. Please provide the username tango-cs: Remote Desktop¶ There is also xRDP server installed to enable a desktop connection. So, you can connect to the instance with, for example a windows Remote Desktop client. For this feature, the instance Security Group settings shall allow for connecting to 3389 port. Warning Before enabling the 3389 port, it is recommended to change the default tango-cs user password: - connect to the instance with the AWS web console Connect, as described above. - call passwd and change the password from the default. When prompted for the current password use tango. After enabling the RDP port and connecting with a remote desktop client, you are greeted with the following screen: After providing the username tango-cs and the valid password, you connect to the desktop: Previous version¶ The version of TANGO 9.2.5a is also available on the cloud. An Amazon image running Ubuntu 16.04 with TANGO 9.2.5a is pre-installed and configured to start up at boot time. The image is public and can be found under this id and region: AMI-ID: ami-d503cfba region=EU-Frankfurt You can find out how to do this here. Launch VM with this image and you will have TANGO 9.2.5 + PyTango 9.2.0 up and running including the TANGO REST API so you can access it from internet. Note the TANGO_HOST is the private IP address of the VM. This means the TANGO database and device servers are not accessible from the internet but only on the VM or set of VMs which share the same VPN. This can be seen as a security feature. Use the REST api and TANGO security to open up access to the device servers you want to expose. To experiment with the REST api, start an instance of the AMI image on Amazon cloud. You can connect to the TangoWebApp as follows: - point your browser to this url: - click on cancel on the popup login window - set the TANGO_REST_URL to Note NO spaces before or after and no quotes - set the TANGO_HOST toip-172-31-29-94.eu-central-1.compute.internal:10000 Note NO spaces or quotes otherwise it won’t work! - click on the refresh button to the right of the TANGO_HOST field - login as user=tango-cs and pw=tango when prompted Note If you do not get a new prompt for user name and pwd from the host ec2-35-156-147-163.eu-central-1.compute.amazonaws.com then the WebApp is down and it won’t work. - expand the tree of devices at the top left of the application See picture below to find out more. You should be able to play with the TangoTest device sys/tg_test/1 To see the running DEMO, please, follow the link. Use tango-cs/tango to login
https://tango-controls.readthedocs.io/en/latest/installation/amazon-cloud.html
2021-01-16T05:32:07
CC-MAIN-2021-04
1610703500028.5
[array(['../_images/key-pair.png', '../_images/key-pair.png'], dtype=object) array(['../_images/connect.png', '../_images/connect.png'], dtype=object) array(['../_images/rdp.png', '../_images/rdp.png'], dtype=object) array(['../_images/desktop.png', '../_images/desktop.png'], dtype=object) array(['../_images/amazonCloudTango.jpg', '../_images/amazonCloudTango.jpg'], dtype=object)]
tango-controls.readthedocs.io
Tips to Implementing a Payroll System It is best for one to ensure that when they are signing up for their payroll system and they are ready to process their first run, that they are doing it the best way possible. A person should know that starting a payroll system can be a little daunting when a person does not know what they are doing and thus the need to get a guide to help them in the implementing of the payroll system. vital for one to ensure that they have confirmed that all the company’s information is the correct one most importantly the back account. It is vital for an individual to also ensure that they have checked if files such as W2s and many more are filed by. It is best for one to take their time and be vigilant with the inputs and get to check the summary page before they accept things through to avoid mistakes. Suggested Post: more information
http://docs-prints.com/2020/10/21/6-facts-about-everyone-thinks-are-true-17/
2021-01-16T05:05:25
CC-MAIN-2021-04
1610703500028.5
[]
docs-prints.com
GitHub repositories can be run automatically on Repl.it. Head to to import a repository. Any public repository under 500 MB can be cloned, and subscribing to our hacker plan unlocks private repos after authenticating with GitHub. After cloning for the first time, you will be prompted to configure a run command for your repl. For more information, see .replit documentation After configuring a run command for your repl, you can add a badge to your repository README that will allow anyone to run your project automatically!
http://docs.repl.it/repls/repl-from-repo
2021-01-16T06:48:00
CC-MAIN-2021-04
1610703500028.5
[]
docs.repl.it
domains. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. list-domains [-) The list of domains. (structure) The domain's details. DomainArn -> (string)The domain's Amazon Resource Name (ARN). DomainId -> (string)The domain ID. DomainName -> (string)The domain name. Status -> (string)The status. CreationTime -> (timestamp)The creation time. LastModifiedTime -> (timestamp)The last modified time. Url -> (string)The domain's URL. NextToken -> (string) If the previous response was truncated, you will receive this token. Use it in your next request to receive the next set of results.
https://docs.aws.amazon.com/cli/latest/reference/sagemaker/list-domains.html
2021-01-16T06:50:30
CC-MAIN-2021-04
1610703500028.5
[]
docs.aws.amazon.com
Restricting User-Controlled Kubernetes Pods Cloudera Machine Learning 1.6.0 (and higher) includes three CML user data at risk. Allow privileged pod containers Pod containers that are "privileged" are extraordinarily powerful. Processes within such containers get almost the same privileges that are available to processes outside the container. If this property is enabled, a privileged container could potentially access all data on the host. This property is disabled by default . Allow .
https://docs.cloudera.com/machine-learning/1.0/security/topics/ml-user-created-pods.html
2021-01-16T05:24:05
CC-MAIN-2021-04
1610703500028.5
[]
docs.cloudera.com
Google maps add-on The Splynx Google maps add-on is used to help you establish where your customers, routers, and monitoring devices are located. From Splynx version 2.3, we've started using built in maps (OpenStreetMaps. GoogleMaps, BingMaps). Maps can be configured under Config / Main / Preferences / Map Settings. To use Google Maps, you can simply select it as the map type, set up the Google Maps API Key and save config. The Google Maps add-on can be installed in two method, via CLI or the Web UI. To install the google maps add-on via CLI, the following commands can be used: apt-get update apt-get install splynx-google-maps To install it via the Web UI: Navigate to Config -> Integrations -> Add-ons: Locate or search for the "splynx-google-maps" add-on and click on the install icon in the Actions column Click on the "OK, confirm" button to begin the installation process After the installation process has completed, you have to configure the add-on under Config / Integrations / Modules List Locate or search for the "splynx-google-maps" addon and click on the edit icon in the Actions column First of all you should type or copy and paste the URL of your server into the "API domain" field and thereafter, copy and paste your Google API key in the field provided If you do not have a Google API key, browse to the Google API portal () and retrieve it from there. How to create Google Maps API key Open To view or display your map navigate to "Customers / Maps" - NOTE the "For development purposes only" signatures on the map view are displayed because of our key we've used when configuring our map.(the key is only used for development purposes). Coordinate pointers of your customers, routers, and monitoring devices can be viewed here. You can also apply filters to the map to only display items of you wish to view. Customer pointers have different colors, this depends on the customer status (new, active, online, blocked, inactive). You can click on the pointer to see additional information. Routers and monitoring devices can have only one coordinate pointer. But customers can have several pointers. To edit customer coordinate pointers, navigate to the information tab of the customer, click on the "View/Set" button in the Additional Information section of the "GPS" field. If the customer has a saved address, it will appear in the following window - Add pointer. Click on the map to add a pointer. You can click "Geocode" to find the address and place the pointer on it. You can only add one pointer at a time, if you need to add more then one - open this window again - Move pointer. drag and drop the pointer - Remove pointer. Click on the pointer and press "Delete marker" After editing click on "Save" and "Close". The "Save" button saves pointers immediately. You don't have to press the "Save" button when returning to the customer's information tab You can edit coordinate pointers of routers and monitoring devices in the same way.
https://docs.splynx.com/addons_modules/google_maps/google_maps.md
2021-01-16T06:12:46
CC-MAIN-2021-04
1610703500028.5
[array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fpref.png', 'config_preferences_map_settings'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fmaps.png', 'Maps'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2F0.png', '0.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2F1.png', '1.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2F2.png', '2.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2FModules_list.png', 'Modules_list.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_module_edit.png', 'gm_module_edit.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_API_key.png', 'gm_API_key.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fget_key8.png', 'get_key8.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fget_key10.png', 'get_key10.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_general_map.png', 'gm_general_map.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_customer.png', 'gm_customer.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_customer_edit.png', 'gm_customer_edit.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_router.png', 'gm_router.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Fgm_monitoring_device.png', 'gm_monitoring_device.png'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Faddons_modules%2Fgoogle_maps%2Frouter_view.png', 'router_view.png'], dtype=object) ]
docs.splynx.com
Since version 5.0 of the Inloco SDK, the option to get the User ID has been removed. Since the User ID is configured by the company's developer himself and it refers to an internal company identifier, there is no need to obtain this identifier through the Inloco SDK. At the moment, you can only assign ⬇ an User ID or clear⬆the User ID from a device that has your company's app with Inloco SDK integrated.
https://docs.inloco.ai/faqs-and-guides/finding-identifiers/how-can-i-find-user-ids
2020-09-18T10:28:43
CC-MAIN-2020-40
1600400187390.18
[]
docs.inloco.ai
servant – A Type-Level Web DSL¶ servant is a set of Haskell libraries for writing type-safe web applications but also deriving clients (in Haskell and other languages) or generating documentation for them, and more. This is achieved by taking as input a description of the web API as a Haskell type. Servant is then able to check that your server-side request handlers indeed implement your web API faithfully, or to automatically derive Haskell functions that can hit a web application that implements this API, generate a Swagger description or code for client functions in some other languages directly. If you would like to learn more, click the tutorial link below. - Tutorial - Cookbook - Structuring APIs - Using generics - Serving web applications over HTTPS - Overview - SQLite database - PostgreSQL connection pool - Using a custom monad - Inspecting, debugging, simulating clients and more - Customizing errors from Servant - Basic Authentication - Streaming out-of-the-box - Combining JWT-based authentication with basic access authentication - Hoist Server With Context for Custom Monads - File Upload ( multipart/form-data) - Pagination - Generating mock curl calls - Error logging with Sentry - How To Test Servant Applications - OpenID Connect - Helpful Links - Principles
https://docs.servant.dev/en/master/
2020-09-18T09:33:55
CC-MAIN-2020-40
1600400187390.18
[array(['https://raw.githubusercontent.com/haskell-servant/servant/master/servant.png', 'https://raw.githubusercontent.com/haskell-servant/servant/master/servant.png'], dtype=object) ]
docs.servant.dev
Updating WPUM from WordPress.org When an update for WP User Manager, WP User Manager plugin. If an update is available, the WPUM section will turn pale orange, and have a red bar on the left edge with a link to update the plugin. Once you click it, a small updating spinner will appear, and then it will say "Updated!". Then your update is finished.
https://docs.wpusermanager.com/article/126-updating-wpum-from-wordpressorg
2020-09-18T11:03:21
CC-MAIN-2020-40
1600400187390.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d6cea4404286350aeeb5235/images/5d6cf2cc04286350aeeb52ec/5d6cf2c62a6eb.png', None], dtype=object) ]
docs.wpusermanager.com
Overview Microsoft Dynamics 365 for Finance and Operations, Enterprise edition, lets users pin tiles [email protected], contacts the PowerBI.com service, Power BI should show only tiles and reports from Tim’s own PowerBI.com account. By completing this configuration step, you enable Finance and Operations to contact the PowerBI.com service on behalf of a user. The flow between Finance and Operations and the Power BI service is based on the OAuth 2.0 Authorization Code Grant Flow, which is discussed later in this topic. an example, if you provisioned Finance and Operations in the Contoso.com domain, you must have Power BI accounts in that domain, such as [email protected]. Registration process Open a new browser session, and start the Power BI app registration at. A page that resembles the following illustration appears. Select Sign in with your existing account. Make sure that your browser signed you in by using the same Azure AD account that you use for Finance and Operations. After sign-in, the user’s name should appear on the page. In the App name field, enter a name, such as Contoso Dyn365 for Operations. - In the Redirect URL field, copy and paste the base URL of your Finance and Operations client, and then add the OAuth suffix. Here is an example: In the Home page URL field, enter your home page URL, and add a mock extension. Here is an example: This value is mandatory, but it isn't required for the workspace integration. Make sure that the App ID URI is a mock URL. If you use the real URL of your deployment, you might cause sign-in issues in other Azure AD applications, such as the Microsoft Excel Add-in. Here is an example: Under Step 3 Choose APIs to access, select all the check boxes. - Select Register App. Make a note of the values in the Client ID and Client secret fields. You will use these values in the next procedure. Specify Power BI settings in Finance and Operations In the Finance and Operations client, open the Power BI configuration page. Select Edit. Set the Enabled option to Yes. The Azure AD Tenant field should show your tenant (or domain name). For example, if you provisioned Finance and Operations with the ContosoAX7.onmicrosoft.com tenant, the field should have the value that is shown in the previous illustration. If the field is blank, you can enter the correct tenant. Note that the Power BI integration feature doesn’t work on pre-production and test Azure AD domains. You must change to a production Azure AD domain by running the Admin user tool. In the Client ID field, enter the Client ID value that you got from Power BI in the previous procedure. - In the Application Key field, enter the Client Secret value that you got from Power BI in the previous procedure. Make sure that the Redirect URL field is set to the same redirect URL that you entered in Power BI in the previous procedure. For example, copy and paste the base URL of your Finance and Operations client, and then add the OAuth suffix. Here is an example: In the Tile filter table field, enter Company. In the Tile filter column field, enter ID. These two values enable filtering of Power BI tiles that are pinned to a workspace. For an example, if the company context of a workspace is USMF, the data on the Power BI tile will be filtered for the USMF company. You can apply the company filter only if your Power BI content has a table that is named Company and a column that is named ID. Ready-made Power BI content that is released with Finance and Operations uses this convention. If the Power BI content (that you wish to pin) doesn't have a table and a field that are named Company and ID respectively, the filter is ignored, and the tile will show unfiltered data. Select Save, and close the page. Pin tiles to a workspace To validate the configuration, open a workspace, such as Ledger budgets and forecasts or Reservation management. For this example, we will use the Ledger budgets and forecasts workspace. You should see the Power BI section and a banner. You might have to scroll to the right. In the banner, select Get started. If you're starting Power BI from Finance and Operations for the first time, you're prompted to authorize sign-in to Power BI from the Finance and Operations client. Select Click here to provide authorization to Power BI. Your users will have to complete this step the first time that After you select Accept in the previous procedure, you might receive the following error message if the process is unsuccessful. Notice that details of the error appear at the bottom of the message. Additional technical information provide clues that can help you determine what went wrong (values are obscured in the following illustration). Some common issues and the resolution steps Technical details about OAuth 2.0 Authorization Code Grant Flow This section describes the authorization flow between Finance and Operations and the PowerBI.com service just before the list of tiles is shown to the user during the authentication phase. The Azure AD service runs this flow to enable two services to securely communicate on behalf of a user. The following illustration shows the authorization flow. - When a user visits a workspace in Finance and Operations for the first time, the Power BI banner prompts the user to start the first-time connection. If the user agrees to start the first-time connection, an OAuth 2.0 Authorization Code Grant Flow is started. - Finance and Operations redirects the user agent to the Azure AD authorization endpoint. The user is authenticated and consents, if consent is required. Because the user is running Finance and Operations, he or she is already signed in to Azure AD. Therefore, the user doesn't have to enter her or his credentials again. - The Azure AD authorization endpoint redirects the Azure AD agent back to the client application together with an authorization code. The user agent returns the authorization code to the client application’s redirect URL. The application redirect URL is a parameter that is maintained in your Power BI configuration, as described in this topic. - Now that Finance and Operations has an authorization code on behalf of the user, it requests an access token from the Azure AD token issuance endpoint. Finance and Operations presents the authorization code to prove that the user has consented. - The Azure AD token issuance endpoint returns an access token and a refresh token. Finance and Operations must have the access token to request a visualization from Power BI. Access tokens expire after a short time. The refresh token can be used to request a new token. - Finance and Operations uses the access token to authenticate to the Web API that is provided by Power BI. Finance and Operations uses the Web API to request that Power BI visualizations be shown on behalf of the user. - After the client application is authenticated, the Power BI Web API returns the requested visualization to the user. Note that Power BI returns only the data that the user is allowed to see. Because the Power BI Web API detects that the user is connecting via Finance and Operations, it can correctly resolve the user. - The user sees Power BI tiles in the Finance and Operations workspace. For subsequent visits, this whole flow doesn't occur. Because Finance and Operations has the access token on behalf of the user, steps 1 through 4 don't have to be repeated. What’s next Now that you've enabled the PowerBI.com integration feature, you might want to perform the following steps: - If your organization uses PowerBI.com, you can invite users to pin tiles and reports from their own PowerBI.com account to workspaces for easy access. - If you're using Microsoft Dynamics 365 for Finance and Operations, Enterprise edition July 2017 update or later, ready-made analytical workspaces might be built into your workspaces. Currently, this feature is available only in multi-box environments. If you're using a previous version, you can deploy the ready-made reports to your PowerBI.com account. The reports are distributed in Microsoft Dynamics Lifecycle Services (LCS). For more information, see Power BI content in LCS from Microsoft and your partners. - You might want to create your own Power BI content by using data that is available in Entity store. (Entity store is the operational data warehouse that is included with Finance and Operations.) For more information, see Overview of Power BI integration with Entity store. - You might want to mash up external data with ready-made Power BI content that is provided with Finance and Operations. You can do this data mash-up by using Power BI solution templates.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/analytics/configure-power-bi-integration
2017-08-16T17:38:58
CC-MAIN-2017-34
1502886102309.55
[array(['media/ce094da8b9e0674c5cbd75616e40d829.png', 'Error message'], dtype=object) array(['media/caaddad46d4ea9e3c7e12e896733594a.png', 'Authorization flow'], dtype=object) ]
docs.microsoft.com
Retry Failed Activities Activities sometimes fail for ephemeral reasons, such as a temporary loss of connectivity. At another time, the activity might succeed, so the appropriate way to handle activity failure is often to retry the activity, perhaps multiple times. There are a variety of strategies for retrying activities; the best one depends on the details of your workflow. The strategies fall into three basic categories: The retry-until-success strategy simply keeps retrying the activity until it completes. The exponential retry strategy increases the time interval between retry attempts exponentially until the activity completes or the process reaches a specified stopping point, such as a maximum number of attempts. The custom retry strategy decides whether or how to retry the activity after each failed attempt. The following sections describe how to implement these strategies. The example workflow workers all use a single activity, unreliableActivity, which randomly does one of following: Completes immediately Fails intentionally by exceeding the timeout value Fails intentionally by throwing IllegalStateException Retry-Until-Success Strategy The simplest retry strategy is to keep retrying the activity each time it fails until it eventually succeeds. The basic pattern is: Implement a nested TryCatchor TryCatchFinallyclass in your workflow's entry point method. Execute the activity in doTry If the activity fails, the framework calls doCatch, which runs the entry point method again. Repeat Steps 2 - 3 until the activity completes successfully. The following workflow implements the retry-until-success strategy. The workflow interface is implemented in RetryActivityRecipeWorkflow and has one method, runUnreliableActivityTillSuccess, which is the workflow's entry point. The workflow worker is implemented in RetryActivityRecipeWorkflowImpl, as follows: Copy public class RetryActivityRecipeWorkflowImpl implements RetryActivityRecipeWorkflow { @Override public void runUnreliableActivityTillSuccess() { final Settable<Boolean> retryActivity = new Settable<Boolean>(); new TryCatch() { @Override protected void doTry() throws Throwable { Promise<Void> activityRanSuccessfully = client.unreliableActivity(); setRetryActivityToFalse(activityRanSuccessfully, retryActivity); } @Override protected void doCatch(Throwable e) throws Throwable { retryActivity.set(true); } }; restartRunUnreliableActivityTillSuccess(retryActivity); } @Asynchronous private void setRetryActivityToFalse( Promise<Void> activityRanSuccessfully, @NoWait Settable<Boolean> retryActivity) { retryActivity.set(false); } @Asynchronous private void restartRunUnreliableActivityTillSuccess( Settable<Boolean> retryActivity) { if (retryActivity.get()) { runUnreliableActivityTillSuccess(); } } } The workflow works as follows: runUnreliableActivityTillSuccesscreates a Settable<Boolean>object named retryActivitywhich is used to indicate whether the activity failed and should be retried. Settable<T>is derived from Promise<T>and works much the same way, but you set a Settable<T>object's value manually. runUnreliableActivityTillSuccessimplements an anonymous nested TryCatchclass to handle any exceptions that are thrown by the unreliableActivityactivity. For more discussion of how to handle exceptions thrown by asynchronous code, see Error Handling. doTryexecutes the unreliableActivityactivity, which returns a Promise<Void>object named activityRanSuccessfully. doTrycalls the asynchronous setRetryActivityToFalsemethod, which has two parameters: activityRanSuccessfullytakes the Promise<Void>object returned by the unreliableActivityactivity. retryActivitytakes the retryActivityobject. If unreliableActivitycompletes, activityRanSuccessfullybecomes ready and setRetryActivityToFalsesets retryActivityto false. Otherwise, activityRanSuccessfullynever becomes ready and setRetryActivityToFalsedoesn't execute. If unreliableActivitythrows an exception, the framework calls doCatchand passes it the exception object. doCatchsets retryActivityto true. runUnreliableActivityTillSuccesscalls the asynchronous restartRunUnreliableActivityTillSuccessmethod and passes it the retryActivityobject. Because retryActivityis a Promise<T>type, restartRunUnreliableActivityTillSuccessdefers execution until retryActivityis ready, which occurs after TryCatchcompletes. When retryActivityis ready, restartRunUnreliableActivityTillSuccessextracts the value. If the value is false, the retry succeeded. restartRunUnreliableActivityTillSuccessdoesn'thing and the retry sequence terminates. If the value is true, the retry failed. restartRunUnreliableActivityTillSuccesscalls runUnreliableActivityTillSuccessto execute the activity again. Steps 1 - 7 repeat until unreliableActivitycompletes. Note doCatch doesn't handle the exception; it simply sets the retryActivity object to true to indicate that the activity failed. The retry is handled by the asynchronous restartRunUnreliableActivityTillSuccess method, which defers execution until TryCatch completes. The reason for this approach is that, if you retry an activity in doCatch, you can't cancel it. Retrying the activity in restartRunUnreliableActivityTillSuccess allows you to execute cancellable activities. Exponential Retry Strategy With the exponential retry strategy, the framework executes a failed activity again after a specified period of time, N seconds. If that attempt fails the framework executes the activity again after 2N seconds, and then 4N seconds and so on. Because the wait time can get quite large, you typically stop the retry attempts at some point rather than continuing indefinitely. The framework provides three ways to implement an exponential retry strategy: The @ExponentialRetryannotation is the simplest approach, but you must set the retry configuration options at compile time. The RetryDecoratorclass allows you to set retry configuration at run time and change it as needed. The AsyncRetryingExecutorclass allows you to set retry configuration at run time and change it as needed. In addition, the framework calls a user-implemented AsyncRunnable.runmethod to run each retry attempt. All approaches support the following configuration options, where time values are in seconds: The initial retry wait time. The back-off coefficient, which is used to compute the retry intervals, as follows:Copy retryInterval = initialRetryIntervalSeconds * Math.pow(backoffCoefficient, numberOfTries - 2) The default value is 2.0. The maximum number of retry attempts. The default value is unlimited. The maximum retry interval. The default value is unlimited. The expiration time. Retry attempts stop when the total duration of the process exceeds this value. The default value is unlimited. The exceptions that will trigger the retry process. By default, every exception triggers the retry process. The exceptions that will not trigger a retry attempt. By default, no exceptions are excluded. The following sections describe the various ways that you can implement an exponential retry strategy. Exponential Retry with @ExponentialRetry The simplest way to implement an exponential retry strategy for an activity is to apply an @ExponentialRetry annotation to the activity in the interface definition. If the activity fails, the framework handles the retry process automatically, based on the specified option values. The basic pattern is: Apply @ExponentialRetryto the appropriate activities and specify the retry configuration. If an annotated activity fails, the framework automatically retries the activity according to the configuration specified by the annotation's arguments. The ExponentialRetryAnnotationWorkflow workflow worker implements the exponential retry strategy by using an @ExponentialRetry annotation. It uses an unreliableActivity activity whose interface definition is implemented in ExponentialRetryAnnotationActivities, as follows: Copy @Activities(version = "1.0") @ActivityRegistrationOptions( defaultTaskScheduleToStartTimeoutSeconds = 30, defaultTaskStartToCloseTimeoutSeconds = 30) public interface ExponentialRetryAnnotationActivities { @ExponentialRetry( initialRetryIntervalSeconds = 5, maximumAttempts = 5, exceptionsToRetry = IllegalStateException.class) public void unreliableActivity(); } The @ExponentialRetry options specify the following strategy: Retry only if the activity throws IllegalStateException. Use an initial wait time of 5 seconds. No more than 5 retry attempts. The workflow interface is implemented in RetryWorkflow and has one method, process, which is the workflow's entry point. The workflow worker is implemented in ExponentialRetryAnnotationWorkflowImpl, as follows: Copy public class ExponentialRetryAnnotationWorkflowImpl implements RetryWorkflow { public void process() { handleUnreliableActivity(); } public void handleUnreliableActivity() { client.unreliableActivity(); } } The workflow works as follows: processruns the synchronous handleUnreliableActivitymethod. handleUnreliableActivityexecutes the unreliableActivityactivity. If the activity fails by throwing IllegalStateException, the framework automatically runs the retry strategy specified in ExponentialRetryAnnotationActivities. Exponential Retry with the RetryDecorator Class @ExponentialRetry is simple to use. However, the configuration is static and set at compile time, so the framework uses the same retry strategy every time the activity fails. You can implement a more flexible exponential retry strategy by using the RetryDecorator class, which allows you to specify the configuration at run time and change it as needed. The basic pattern is: Create and configure an ExponentialRetryPolicyobject that specifies the retry configuration. Create a RetryDecoratorobject and pass the ExponentialRetryPolicyobject from Step 1 to the constructor. Apply the decorator object to the activity by passing the activity client's class name to the RetryDecoratorobject's decorate method. Execute the activity. If the activity fails, the framework retries the activity according to the ExponentialRetryPolicy object's configuration. You can change the retry configuration as needed by modifying this object. Note The @ExponentialRetry annotation and the RetryDecorator class are mutually exclusive. You can't use RetryDecorator to dynamically override a retry policy specified by an @ExponentialRetry annotation. The following workflow implementation shows how to use the RetryDecorator class to implement an exponential retry strategy. It uses an unreliableActivity activity that doesn't have an @ExponentialRetry annotation. The workflow interface is implemented in RetryWorkflow and has one method, process, which is the workflow's entry point. The workflow worker is implemented in DecoratorRetryWorkflowImpl, as follows: Copy public class DecoratorRetryWorkflowImpl implements RetryWorkflow { ... public void process() { long initialRetryIntervalSeconds = 5; int maximumAttempts = 5; ExponentialRetryPolicy retryPolicy = new ExponentialRetryPolicy( initialRetryIntervalSeconds).withMaximumAttempts(maximumAttempts); Decorator retryDecorator = new RetryDecorator(retryPolicy); client = retryDecorator.decorate(RetryActivitiesClient.class, client); handleUnreliableActivity(); } public void handleUnreliableActivity() { client.unreliableActivity(); } } The workflow works as follows: processcreates and configures an ExponentialRetryPolicyobject by: Passing the initial retry interval to the constructor. Calling the object's withMaximumAttemptsmethod to set the maximum number of attempts to 5. ExponentialRetryPolicyexposes other withobjects that you can use to specify other configuration options. processcreates a RetryDecoratorobject named retryDecoratorand passes the ExponentialRetryPolicyobject from Step 1 to the constructor. processapplies the decorator to the activity by calling the retryDecorator.decoratemethod and passing it the activity client's class name. handleUnreliableActivityexecutes the activity. If the activity fails, the framework retries it according to the configuration specified in Step 1. Note Several of the ExponentialRetryPolicy class's with methods have a corresponding set method that you can call to modify the corresponding configuration option at any time: setBackoffCoefficient, setMaximumAttempts, setMaximumRetryIntervalSeconds, and setMaximumRetryExpirationIntervalSeconds. Exponential Retry with the AsyncRetryingExecutor Class The RetryDecorator class provides more flexibility in configuring the retry process than @ExponentialRetry, but the framework still runs the retry attempts automatically, based on the ExponentialRetryPolicy object's current configuration. A more flexible approach is to use the AsyncRetryingExecutor class. In addition to allowing you to configure the retry process at run time, the framework calls a user-implemented AsyncRunnable.run method to run each retry attempt instead of simply executing the activity. The basic pattern is: Create and configure an ExponentialRetryPolicyobject to specify the retry configuration. Create an AsyncRetryingExecutorobject, and pass it the ExponentialRetryPolicyobject and an instance of the workflow clock. Implement an anonymous nested TryCatchor TryCatchFinallyclass. Implement an anonymous AsyncRunnableclass and override the runmethod to implement custom code for running the activity. Override doTryto call the AsyncRetryingExecutorobject's executemethod and pass it the AsyncRunnableclass from Step 4. The AsyncRetryingExecutorobject calls AsyncRunnable.runto run the activity. If the activity fails, the AsyncRetryingExecutorobject calls the AsyncRunnable.runmethod again, according to the retry policy specified in Step 1. The following workflow shows how to use the AsyncRetryingExecutor class to implement an exponential retry strategy. It uses the same unreliableActivity activity as the DecoratorRetryWorkflow workflow discussed earlier. The workflow interface is implemented in RetryWorkflow and has one method, process, which is the workflow's entry point. The workflow worker is implemented in AsyncExecutorRetryWorkflowImpl, as follows: Copy public class AsyncExecutorRetryWorkflowImpl implements RetryWorkflow { private final RetryActivitiesClient client = new RetryActivitiesClientImpl(); private final DecisionContextProvider contextProvider = new DecisionContextProviderImpl(); private final WorkflowClock clock = contextProvider.getDecisionContext().getWorkflowClock(); public void process() { long initialRetryIntervalSeconds = 5; int maximumAttempts = 5; handleUnreliableActivity(initialRetryIntervalSeconds, maximumAttempts); } public void handleUnreliableActivity(long initialRetryIntervalSeconds, int maximumAttempts) { ExponentialRetryPolicy retryPolicy = new ExponentialRetryPolicy(initialRetryIntervalSeconds).withMaximumAttempts(maximumAttempts); final AsyncExecutor executor = new AsyncRetryingExecutor(retryPolicy, clock); new TryCatch() { @Override protected void doTry() throws Throwable { executor.execute(new AsyncRunnable() { @Override public void run() throws Throwable { client.unreliableActivity(); } }); } @Override protected void doCatch(Throwable e) throws Throwable { } }; } } The workflow works as follows: processcalls the handleUnreliableActivitymethod and passes it the configuration settings. handleUnreliableActivityuses the configuration settings from Step 1 to create an ExponentialRetryPolicyobject, retryPolicy. handleUnreliableActivitycreates an AsyncRetryExecutorobject, executor, and passes the ExponentialRetryPolicyobject from Step 2 and an instance of the workflow clock to the constructor handleUnreliableActivityimplements an anonymous nested TryCatchclass and overrides the doTryand doCatchmethods to run the retry attempts and handle any exceptions. doTrycreates an anonymous AsyncRunnableclass and overrides the runmethod to implement custom code to execute unreliableActivity. For simplicity, runjust executes the activity, but you can implement more sophisticated approaches as appropriate. doTrycalls executor.executeand passes it the AsyncRunnableobject. executecalls the AsyncRunnableobject's runmethod to run the activity. If the activity fails, executor calls runagain, according to the retryPolicyobject configuration. For more discussion of how to use the TryCatch class to handle errors, see AWS Flow Framework for Java Exceptions. Custom Retry Strategy The most flexible approach to retrying failed activities is a custom strategy, which recursively calls an asynchronous method that runs the retry attempt, much like the retry-until-success strategy. However, instead of simply running the activity again, you implement custom logic that decides whether and how to run each successive retry attempt. The basic pattern is: Create a Settable<T>status object, which is used to indicate whether the activity failed. Implement a nested TryCatchor TryCatchFinallyclass. doTryexecutes the activity. If the activity fails, doCatchsets the status object to indicate that the activity failed. Call an asynchronous failure handling method and pass it the status object. The method defers execution until TryCatchor TryCatchFinallycompletes. The failure handling method decides whether to retry the activity, and if so, when. The following workflow shows how to implement a custom retry strategy. It uses the same unreliableActivity activity as the DecoratorRetryWorkflow and AsyncExecutorRetryWorkflow workflows. The workflow interface is implemented in RetryWorkflow and has one method, process, which is the workflow's entry point. The workflow worker is implemented in CustomLogicRetryWorkflowImpl, as follows: Copy public class CustomLogicRetryWorkflowImpl implements RetryWorkflow { ... public void process() { callActivityWithRetry(); } @Asynchronous public void callActivityWithRetry() { final Settable<Throwable> failure = new Settable<Throwable>(); new TryCatchFinally() { protected void doTry() throws Throwable { client.unreliableActivity(); } protected void doCatch(Throwable e) { failure.set(e); } protected void doFinally() throws Throwable { if (!failure.isReady()) { failure.set(null); } } }; retryOnFailure(failure); } @Asynchronous private void retryOnFailure(Promise<Throwable> failureP) { Throwable failure = failureP.get(); if (failure != null && shouldRetry(failure)) { callActivityWithRetry(); } } protected Boolean shouldRetry(Throwable e) { //custom logic to decide to retry the activity or not return true; } } The workflow works as follows: processcalls the asynchronous callActivityWithRetrymethod. callActivityWithRetrycreates a Settable<Throwable>object named failure which is used to indicate whether the activity has failed. Settable<T>is derived from Promise<T>and works much the same way, but you set a Settable<T>object's value manually. callActivityWithRetryimplements an anonymous nested TryCatchFinallyclass to handle any exceptions that are thrown by unreliableActivity. For more discussion of how to handle exceptions thrown by asynchronous code, see AWS Flow Framework for Java Exceptions. doTryexecutes unreliableActivity. If unreliableActivitythrows an exception, the framework calls doCatchand passes it the exception object. doCatchsets failureto the exception object, which indicates that the activity failed and puts the object in a ready state. doFinallychecks whether failureis ready, which will be true only if failurewas set by doCatch. If failureis ready, doFinallydoesn'thing. If failureisn't ready, the activity completed and doFinallysets failure to null. callActivityWithRetrycalls the asynchronous retryOnFailuremethod and passes it failure. Because failure is a Settable<T>type, callActivityWithRetrydefers execution until failure is ready, which occurs after TryCatchFinallycompletes. retryOnFailuregets the value from failure. If failure is set to null, the retry attempt was successful. retryOnFailuredoes nothing, which terminates the retry process. If failure is set to an exception object and shouldRetryreturns true, retryOnFailurecalls callActivityWithRetryto retry the activity. shouldRetryimplements custom logic to decide whether to retry a failed activity. For simplicity, shouldRetryalways returns trueand retryOnFailureexecutes the activity immediately, but you can implement more sophisticated logic as needed. Steps 2–8 repeat until unreliableActivitycompletes or shouldRetrydecides to stop the process. Note doCatch doesn't handle the retry process; it simply sets failure to indicate that the activity failed. The retry process is handled by the asynchronous retryOnFailure method, which defers execution until TryCatch completes. The reason for this approach is that, if you retry an activity in doCatch, you can't cancel it. Retrying the activity in retryOnFailure allows you to execute cancellable activities.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/features-retry.html
2017-08-16T17:35:18
CC-MAIN-2017-34
1502886102309.55
[]
docs.aws.amazon.com
Contents This sample demonstrates the use of the TIBCO StreamBase® Adapters for Wall Street Systems. You must obtain the required IBM MQ Java libraries directly from IBM. The easiest way to make the IBM MQ API accessible to StreamBase is install them as Maven dependencies into your Maven repository. The IBM MQ sample comes with Studio launch files which perform a Maven install to the local Maven repository to aid in running the sample; simply copy the JAR files into the root of the sample project and run each launch config to install. It is however recommended you install to your main Maven repository to make sure they are available to all machines you may distribute your code to. Obtain the following required IBM MQ JAR files from an IBM MQ Series 7.0 installation, or from IBM: Note To use this sample, you must have access to a valid Wall Street Systems server. In the Project Explorer view, double-click wall-street-systems.sbapp. Select the WSSInput adapter icon to open the Properties view for the adapter. Select the Adapter Properties tab and enter valid values for Host Name, Port Number, Queue Manager, Queue Name, and Channel. Repeat the previous two steps for the WSSOutput adapter. Enter values for Reply To Queue Manager and Reply To Queue Name as well. Click the Run button. This opens the SB Test/Debug perspective and starts the application. In the Test/Debug Perspective, open the Application Output view. If connectivity to your Wall Street Systems server is configured correctly, look for tuples emitted on the InputStatusand OutputStatusstreams indicating the adapter has connected successfully. In the Manual Input view, select the SendMessagestream. Enter one or more values in the sub-fields of the FXTRADEfield and press Send Data. Observe an FXTRADE_RESPONSE tuple emitted on the ReceivedMessagest, type: sbd wall-street-systems.sbapp In window 2, type: sbc dequeue -v This window displays the tuples dequeued from the adapters' output ports. In window 3, send a message to the Wall Street Systems server: echo null,"Interface ID,null,null,null,null,null,null,null,null,null,null,null,null" | sbc enqueue SendMessage Observe in windows 2 an FXTRADE_RESPONSE tuple emitted on the ReceivedMessagestream. In window 3, type the following command to terminate the server and dequeuer: sbadmin shutdown_wall-street-systems See Default Installation Directories for the location of studio-workspace on your system. In the default TIBCO StreamBase installation, this sample's files are initially installed in: streambase-install-dir/sample/adapter/embedded/wall-street-systems See Default Installation Directories for the default location of studio-workspace on your system.
http://docs.streambase.com/latest/topic/com.streambase.sb.ide.help/data/html/samplesinfo/WallStreetSystems.html
2017-08-16T17:32:00
CC-MAIN-2017-34
1502886102309.55
[]
docs.streambase.com
This document uses the Hello LiveView sample to explain how to use grid views in LiveView Desktop. Load and run the Hello LiveView sample as delivered with LiveView, and view the resulting tables with LiveView Desktop. Follow these steps: Start StreamBase Studio in the SB Authoring perspective. Load the Hello LiveView sample: Select→. If you already have a project folder of that name, Studio adds a counter digit to the folder name and adds a new project. In the Project Explorer view, select the name of the project, right-click, and from the context menu, select→ . The Console view shows several messages as the LiveView Server compiles the project and starts. This startup process can take several minutes, depending on the configuration of your computer. When you see the message All tables have been loadedin the Console view, start LiveView Desktop with one of the following methods: On Windows and Linux, if your StreamBase Studio sample project folder includes Start LiveView Desktopentries, then a Run Configuration entry was automatically added to Studio when you loaded the sample. In this case, the simplest way to start LiveView Desktop is to invoke → → . On macOS, only if LiveView Desktop is installed in the canonical location in STREAMBASE_HOME/liveview/desktop, you can invoke → → . Otherwise, use Spotlight or Launchpad to locate and invoke the app named liveview. On Windows, you can also invoke→ → → → . On Linux,. Open a view on a LiveView table by double-clicking its table name in the LiveView Tables view. When you double-click a table name, you issue a query to LiveView Server that returns the entire contents of the table. By default, LiveView Desktop opens a grid view on the table. You can use the following features to modify the display of a grid view: Filter the display using a simple string match, discussed in Using LiveView Desktop Filters. Limit the number of rows a grid view can display, discussed in Setting Row Limits in LiveView Desktop. Apply cosmetic or alert formatting based on specified conditions in the incoming data, discussed in Conditional Formatting. LiveView Desktop's grid view allows you to filter data from a query. A filter allows you to enter a specific string and limit results to rows that match that string. Use a filter when you have a set of results that is too large to show in one page. To open the table Filter field for a grid view, you can either: Click to select a grid view and press Ctrl+F. Click thebutton ( ) on the right side of the grid view's toolbar, and select . In the Filter field, enter a string or substring to be matched against. If you enter an all-lowercase string, the filter matches case-insensitively. If you enter a mixed-case search string, the match is case-sensitive. For example, to limit the current table display to the rows that report book sales, type book into the Filter field. Notice that if you type Book, the filter does not match any rows. The Filter field provides a simple substring match functionality, and is not designed to provide an exhaustive query mechanism. See Querying Your Data for ways to perform queries against tables. The row limit for query results allows a LiveView client to prevent the server from sending excessive data. The limit specifies the maximum number of rows that the server returns to the client. Without a row limit, an insufficiently restrictive query on a large table or a rapid-changing table might overwhelm the client with more data than it can process. Regard the row limit feature as a mechanism to protect clients from an extraordinary data load or from incorrectly defined queries. Because query results are not updated once the row limit is reached, we recommend that users not run queries that consistently reach the row limit. (If a query does reach the row limit, you can use theoption in LiveView Desktop to see recent updates.) Set row limits low enough that client processing capacity is not exceeded, and define query predicates to be restrictive enough that the row limit is never or rarely reached. - Changing the LiveView Desktop Default Query Row Limit In LiveView Desktop on Windows, invoke TIBCO LiveView from the left-hand navigation panel. The Default Query row limit field is set to the default value of 2000. Increase or reduce this value as appropriate for your expected data set.→ ; on macOS, invoke → . Select This setting, like all LiveView Desktop Preference settings, is saved in your LiveView Desktop workspace, and persists across LiveView Desktop sessions if you restart Desktop and select the same workspace name. - Changing the Query Row Limit Per Table You can change the query row limit for individual tables. To do this, right-click the table name in the Tables pane and select → .
http://docs.streambase.com/latest/topic/com.streambase.sb.studio.liveview.help/data/html/lv-desktop/lv-desktop-grid-view.html
2017-08-16T17:31:59
CC-MAIN-2017-34
1502886102309.55
[]
docs.streambase.com
What Is AWS Data Exchange? AWS Data Exchange is a service that makes it easy for AWS customers to securely exchange file-based data sets in the AWS Cloud. As a subscriber, you can find and subscribe to hundreds of products from qualified data providers. Then, you can quickly download the data set or copy it to Amazon S3 for use across a variety of AWS analytics and machine learning services. Anyone with an AWS account can be a AWS Data Exchange subscriber. For information about becoming a subscriber, see Subscribing to Data Products on AWS Data Exchange. For providers, AWS Data Exchange eliminates the need to build and maintain any data delivery, entitlement, or billing technology. Providers in AWS Data Exchange have a secure, transparent, and reliable channel to reach AWS customers and grant existing customers their subscriptions more efficiently. The process for becoming an AWS Data Exchange provider requires a few steps to determine eligibility. For more information, see Register to Be a Provider. What Is An AWS Data Exchange Product? A product is the unit of exchange in AWS Data Exchange that is published by a provider and made available for use to subscribers. When a provider publishes a product, that product is listed on AWS Data Exchange and AWS Marketplace. A product has the following parts: Product details – This information includes name, descriptions (both short and long), logo image, and support contact information. Providers complete the product details. For more information as a subscriber, see Product Subscriptions. For more information as a provider, see Filling Out Product Details. Product offers – To make a product available on AWS Data Exchange, providers must define a public offer. This offer includes prices and durations, data subscription agreement, refund policy, and the option to create custom offers. For more information, see Creating an Offer for AWS Data Exchange Products. Data sets – A product can contain one or more data sets. A data set is a dynamic set of file-based content. Data sets are dynamic and are versioned through the use of revisions. Each revision can contain multiple assets. For more information, see Working with Data Sets. Malware Prevention Security and compliance is a shared responsibility between you and AWS. To promote a safe, secure, and trustworthy service for everyone, AWS Data Exchange scans all data published by providers before it is made available to subscribers. If AWS detects malware, the affected asset is removed. AWS Data Exchange does not guarantee that the data you consume as a subscriber is free of any potential malware. We encourage that you conduct your own additional due-diligence to ensure compliance with your internal security controls. You can find anti-malware and security products in AWS Marketplace. Supported Data Sets AWS Data Exchange takes a responsible approach to facilitating data transactions by promoting transparency through use of the service. AWS Data Exchange reviews permitted data types, restricting products that are not permitted. Providers are limited to distributing data sets that meet the legal eligibility requirements set forth in the Terms and Conditions for AWS Marketplace Sellers. For more information about permitted data types, see Publishing Guidelines. As an AWS customer, you are encouraged to conduct your own additional due-diligence to ensure compliance with any applicable data privacy laws. If you suspect that a product or other resources on AWS Data Exchange are being used for abusive or illegal purposes, report it using the Report Amazon AWS abuse form Your AWS Data Exchange subscriptions are displayed in the currency you specified for your AWS account. You can change your preferred currency for your AWS account in the AWS Billing and Cost Management console. For instructions, see Changing which currency you use to pay your bill in the AWS Billing and Cost Management User Guide. Changing your preferred currency changes your remittance instructions. To view updated remittance instructions, see your AWS Marketplace invoice or view the Account Settings page in the AWS Billing and Cost Management For pricing information, see Supported Regions AWS Data Exchange has a single, globally available product catalog offered by providers. Subscribers can see the same catalog regardless of which AWS Region they are using. The resources underlying the product (data sets, revisions, assets) are regional resources that you manage programmatically or through the AWS Data Exchange console in supported AWS Regions. For information about which regions are supported, see Global Infrastructure Region Table Related Services The following services are related to AWS Data Exchange: Amazon S3 – Currently, the only supported asset type for data sets is Amazon S3 object snapshots. Subscribers can export data sets to Amazon S3 programatically. For more information, see What Is Amazon S3? in the Amazon Simple Storage Service Developer Guide. AWS Marketplace – AWS Data Exchange allows data sets to be published as products on AWS Marketplace. AWS Data Exchange providers must be registered as AWS Marketplace sellers, and can use the AWS Marketplace Management Portal or the AWS Marketplace Catalog API. For information about becoming an AWS Marketplace subscriber, see What Is AWS Marketplace? in the AWS Marketplace Buyer Guide. For information about becoming an AWS Marketplace seller, see What Is AWS Marketplace? in the AWS Marketplace Seller Guide.
https://docs.aws.amazon.com/data-exchange/latest/userguide/what-is.html
2020-10-20T04:33:01
CC-MAIN-2020-45
1603107869785.9
[]
docs.aws.amazon.com
Stop existing Flink applications You need to stop your stateful Flink applications with a savepoint. You can use savepoints to resume the application state after the upgrade. - Find the YARN application IDs. yarn application -list -appTypes "Apache Flink" - Determine the related Flink job IDs. flink list -yid <YARN application ID> - Stop your Flink applications.You have two choices: - Stop your applications with a savepoint to store the application state. flink stop -yid <YARN application ID> <Flink job ID> The command returns an HDFS path, which is the automatically created savepoint that stores the application state. - Cancel your applications without creating a savepoint. flink cancel -yid <YARN application ID> <Flink job ID> Use this method if you do not need to restore the application state after the upgrade.
https://docs.cloudera.com/csa/1.2.0/upgrade/topics/csa-stop-applications.html
2020-10-20T04:21:42
CC-MAIN-2020-45
1603107869785.9
[]
docs.cloudera.com
Using Mobiscroll with Ionic 1 Installing Mobiscroll in your Ionic app takes a couple of minutes. Let's see how you can start with a simple app. This guide is strictly written for Ionic 1. If you are interested in usage with Ionic 2 and above, including Ionic 3 read it here.! Select ANGULARJS. Then hit the big blue button. Including Mobiscroll in your Ionic tabs --type=ionic1 $ cd myStarterApp Step 2: Copy Mobiscroll into your app The next step is to unpack the downloaded mobiscroll package and copy the css and js folders to the www/lib/mobiscroll folder of the myStarterApp At the end you should have the mobiscroll js file under the myStarterApp/www/lib/mobiscroll/js/ folder. The same is true for the css file: myStarterApp/www/lib/mobiscroll/css/. Step 3: Include the CSS and JS resources Open the index.html file and include the mobiscroll js and the css files in the header sections. mobiscroll.angularjs.min.jsis included after the ionic.bundle.js! <!-- mobiscroll css --> <link href="lib/mobiscroll/css/mobiscroll.angularjs.min.css" rel="stylesheet" /> <!-- ionic/angularjs js --> <script src="lib/ionic/js/ionic.bundle.js"></script> <!-- mobiscroll js --> <script src="lib/mobiscroll/js/mobiscroll.angularjs.min.js"></script> Step 4: Set up the module dependencies You have to set up the dependencies to use the mobiscroll components. For example if you want to use the Date & time and Select components, then the dependencies will be the mobiscroll-datetime and the mobiscroll-select modules. You should add these module dependencies in the controllers.js file like: angular.module('starter.controllers', ['mobiscroll-datetime', 'mobiscroll-select']) At this point the app should be ready for the development! Step 5: Let's see if Mobiscroll was installed correctly To test it let's add a simple input to the tab-dash.html: <div class="list"> <label class="item item-input"> <input ng- </label> </div> To build the app just run the serve command in the CLI: $ ionic serve Other ways to get started To see how mobiscroll is installed in an Ionic 1 project you can download the Starter for Ionic.
https://docs.mobiscroll.com/angularjs/with-ionic1
2020-10-20T02:27:18
CC-MAIN-2020-45
1603107869785.9
[]
docs.mobiscroll.com
Fetch Yo Stories / Articles Posted in General by Hassan Ali Thu Dec 04 2014 11:11:47 GMT+0000 (Coordinated Universal Time)·Viewed 2,472 times I'm trying to integrate / show Yo stories on my intranet Blog (PHP based), can someone help me /guide me towards the API code or instructions on how to achieve this?
https://docs.justyo.co/v1.0/discuss/54804173db5b6a2000724ac0
2020-10-20T02:43:00
CC-MAIN-2020-45
1603107869785.9
[]
docs.justyo.co
Configuring Storefront Address Validation Your Vertex Cloud account also includes the Vertex Address Validation module. When this functionality is enabled for your Magento store, the storefront prompts the customer to correct the address information on both the shipping and billing steps of the one-page checkout and when the customer adds an address to their account. Address validation message for correction Using the storefront address validation requires that you first configure the Vertex Tax Calculations to connect to your Vertex Cloud account. To enable Vertex address validation for the storefront: Address Validation. Set Use Vertex Address Validation to Enable. This setting enables allows you to configure the address validation settings. Address Validation settings - enabled Verify and accept the default value for Vertex Address Validation API URL. This connects the integration with Vertex Cloud. The value should match the URL displayed in your Vertex Connectors page for the Magento connector in the Address Lookup URL field. If you want to display a message when the address is correctly verified, set Confirmation Message when no action is needed to Enable. By default, this function is disabled so that a message is displayed only if the address does not match a validated address with a prompt to correct it. When complete, click Save Config. To refresh the cache, do the following: On the Admin sidebar, go to System > Tools > Cache Management. Select the checkbox of each invalid cache. Set Actions to Refreshand click Submit.
https://docs.magento.com/user-guide/v2.3/tax/vertex-configure-address.html
2020-10-20T04:04:42
CC-MAIN-2020-45
1603107869785.9
[]
docs.magento.com
Transformation Overview¶. Processing can also be overwritten by Page TSconfig, see the according section of the Page TSconfig reference for details. Transformation Filters¶ Transformation filter css_transform Description Transforms the html markup either for display in the richtext editor or for saving in the db. The name “css_transform” is historical; earlier TYPO3 versions had a long since removed “ts_transform” mode, which basically only saved a minimum amount of HTML in the db and produced a lot of nowadays outdated markup like <font> tag style rendering in the frontend. Transformation filter ts_links Description Processes anchor tags and resolves them via \TYPO3\CMS\Core\LinkHandling\LinkService before saving them to the db, while using the TYPO3-internal t3:// syntax. In addition, it is possible to define custom transformations can be created allowing your to add your own tailor made transformations with a PHP class where you can program how content is processed to and from the database.
https://docs.typo3.org/m/typo3/reference-coreapi/10.4/en-us/ApiOverview/Rte/Transformations/Overview.html
2020-10-20T03:40:24
CC-MAIN-2020-45
1603107869785.9
[]
docs.typo3.org
October 17, 2019 (EMP 12.5.0) Update for: r1000 Release notes for Enterprise Monitoring Point version 12.5.0. New features - 100Gbps support available on r1000!!—The r1000 Enterprise Monitoring Point can now be configured with up to two 100Gbps Delivery/Experience ports. Ports 2 and 3 on the r1000 are QSFP ports and can accept 10Gbps (SFP+ using included QSFP28 to SFP+ adapter), 40Gbps (QSFP+), or 100Gbps (QSFP28) modules. As shipped, port 2 is configured for Delivery/Experience monitoring (100Gbps max) and port 3 is configured for Usage monitoring (10Gbps max). Port 3 can be reconfigured for Delivery/Experience monitoring (100Gbps max). Ports configured for more than 10Gbps require a “100Gbps monitoring” add-on license. - Note that this upgrade will take longer than most (~ 10 minutes) and requires two reboots. The first is automatic. The second is manual. Resolved issues Work arounds PathTest and MTU size—If the mid-path MTU size is less than the source or target interface MTU, PathTest will time out. You can work around this by setting the Packet Size within PathTest to the Target MTU size.
https://docs.appneta.com/release-notes/2019-10-17-appliance2.html
2020-10-20T03:36:19
CC-MAIN-2020-45
1603107869785.9
[array(['/files/target-mtu-size2.png', 'Screenshot showing Packet Size within PathTest being set to 1280 - the same size as the Target MTU size.'], dtype=object) ]
docs.appneta.com
Onboarding wizard use cases This section describes real-world use cases for using the Onboarding wizard. The mode of operation, Guided or Unguided, doesn't affect the use cases. The user interface interactions are the same in both modes. The following use cases are provided: Onboarding data manually This use case describes using the manual data-entry method, with the Onboarding wizard running in Guided mode. - Francie Stafford is a data administrator at Calbro Services. The Calbro IT department has just installed a new instance of the BMC Remedy ITSM 8.1.02 suite. Francie logs on to the newly installed system for the first time, and the system automatically displays a message providing information about Foundation data setup and a link that opens the Onboarding wizard. - Francie clicks the Onboarding wizard link, which opens a splash screen that provides a quick overview of the wizard. Francie clicks the link to continue. - The Onboarding asks Francie if she wants to use Guided or the Unguided mode. Francie selects Guided mode and starts entering company information. - When she finishes entering company information, Francie clicks Next Step, and the wizard asks her to identify which company is the Onboarding company. Francie selects Calbro, and the wizard advances to Organization. - Francie continues the cycle of entering information and clicking Next Step until she has entered data for all of the required steps (only steps 1 through 5 are required). When Francie clicks Next Step after entering the Assignment information, the wizard displays information about how to review and promote the data. To continue, Francie selects Automatic and clicks Finalize Review and Activate. Francie selects the Automatic method for promoting the data. - The Onboarding wizard displays the Data Visualizer, which provides Francie with a way to see all of the data and its relationships in one place. - Francie reviews the data in the Data Visualizer and notices that one of the records that she created for Assignment has a mistake in the Assigned Group field for one of the events. In the Data Visualizer, Francie double-clicks the record with the data exception, and an edit window opens with the record details. Francie fixes the data exception in the window and saves the record. - Francie clicks Activate, and the Onboarding wizard begins pushing the data to the production data forms. A panel slides into view and displays a progress bar and information about the current process (who started it, how long it has been running, and so on). When the Promote job finishes running, a message is displayed, showing the total number of records, the number of records promoted to the production data forms, and any data exceptions found. - Francie checks the Exceptions counter located under the Promote progress bar and notices that a couple of data exceptions occurred during the promotion. - Francie clicks Exceptions to open Data Visualizer, from which she can review the exceptions and correct them. Francie clicks the record to see a report about what caused the exceptions. Francie then double-clicks a record to open an editor (like the one she used to correct the Assignment record). She corrects the condition that created the exception, then continues to the next record that had an exception. When Francie finishes correcting the data exceptions, she reruns the promotion. The promotion job processes only the records that Francie updated, then finishes with no further exceptions. - The data that Francie entered manually is now active and available to the system. - Francie exits from the Onboarding wizard. Onboarding data using spreadsheets This use case describes using the spreadsheet data-entry method, with the Onboarding wizard running in Unguided mode. Recently, Calbro Services acquired Conozco, a small consulting firm, and it needs to integrate the Conozco People records into its database. Francie Stafford, a data administrator at Calbro Services, uses the Onboarding wizard to accomplish this task. - From the Application menu on the BMC Remedy ITSM Home page, Francie selects Data Management > Onboarding wizard. When asked to choose an operating mode, Francie selects Unguided. From the Onboarding wizard UI, she clicks Step 6: People. From the drop-down menu beside Create Data Manually, Francie clicks the paper-clip icon to open the Import From Spreadsheet option. - A few days ago, Francie had performed the procedure described above and clicked Use template spreadsheet to download the People spreadsheet to her desktop. She then sent a copy of the spreadsheet to Conozco so that Conozco could provide Calbro with a .csv file of its People database, which had been normalized with the data structure of the People spreadsheet. - In Excel, Francie opens her local copy of the People spreadsheet and clicks the Non-Support tab (because none of the Conozco people are IT support people). Francie then opens the .csv file and performs a copy-and-paste operation to put the data into the People spreadsheet. Francie saves the spreadsheet and closes it. - Francie goes back to the Onboarding wizard UI and clicks Attach My Spreadsheet. She then navigates to the spreadsheet location on her desktop computer, selects the spreadsheet, and clicks Import. - The Onboarding wizard imports the data from the spreadsheet. - When the import finishes, Francie reviews the data in the onscreen table (to the right of the buttons and links that she used to attach the spreadsheet). Francie does not find any issues with the data, so she clicks Finalize, Review, and Activate and selects the Manual method for promoting data. - Francie clicks Validate, and the Onboarding Wizard starts running data-verification checks. When the checks finish running, Francie checks the Exceptions counter located under the progress bar and notices that exceptions were found. She clicks Exceptions to open the Data Visualizer, from which she can review the exceptions and correct them. It turns out that the Region value was misspelled in the imported data. - Francie opens the Exceptions console, and from the navigation panel, she selects Invalid > Invalid Region to show the list of records with the invalid region. She selects all of the records, and in the Region field at the bottom of the console, she corrects the spelling and clicks Update Staging to update the records. - Francie clicks Validate to rerun the verification. The verification checks finish with no further exceptions, the Promote button appears at the bottom of the console, and the Onboarding wizard pushes the records out to the production database where it is available to the system. Onboarding model data This use case describes using model data to help seed your system with best-practices data. - You are an IT operations analyst working for a company that is working toward implementing a fully mature, ITIL service-desk model. One of the identified milestones on the road to maturity is the implementation of Operational Catalogs and Product Catalogs, something that until now your service desk has not used. - Your company recently acquired and installed BMC Remedy ITSM 8.1.02. As part of the justification for adopting this specific release of BMC Remedy ITSM, the IT executive team pointed to Operational Catalog and Product Catalog model data available in the Data Management Onboarding wizard, which they can use to seed the system with best-practices catalog data. - To implement the model data, you use the Onboarding wizard and import the Foundation data from your old system into the new BMC Remedy ITSM installation. - You also import the Onboarding wizard's model Operational Catalog and Product Catalog data. - After importing the model data, you edit some of the Tier 2 and Tier 3 catalog entries during the Review step of Phase 2, to ensure that they match the taxonomy that your company uses. You perform the edits using the Data Visualizer. - When you finish, you review the data and then promote it. Using People Templates This use case describes using People templates to simplify the job of importing People data into a large organization, with complex relationships between user roles and support groups. - You are onboarding People data for a large organization that has many user roles across multiple support groups. - To ensure that the complex job of assigning the correct permissions, support groups, and functional roles to individuals goes smoothly, you decide to use People templates. The People templates predefine the permissions, support groups, and functional roles attributes for each user role. - From the Onboarding wizard, you download the template spreadsheet from Step 5 ("People template") on the Onboarding wizard screen, and make copies of it: one for each user role that you need to define. - You set up the spreadsheets according to the instructions provided with them. When you are finished, you use the Onboarding wizard to import the the People templates into the staging forms. - When you are finished importing the People template information, you proceed with onboarding the People data in Step 6 on the Onboarding wizard screen, providing the name of the People template associated with the person's user role in the Template Name column. - The remaining onboarding steps are the same as are described in the other use cases. Starting a new onboarding session The following use cases help you to understand a few different scenarios in which you might start a new onboarding session. Dismissing non-fatal exceptions This use case describes how you could handle a case in which the exception errors are nonfatal. - Francie Stafford is running several data-load jobs to onboard a series of Organization, People, and catalog records. - When she runs a validation on the People records, several exceptions are returned. - When Francie reviews the exceptions, she finds that they were caused by duplicate support groups; that is, some of the support groups in the data-load job are already present in the production database. - Because these exceptions are nonfatal and do not prevent Francie from running the next data-load job, she chooses to ignore them. Francie clicks New to return to the Phase 1 user interface and clear the data with exceptions from view in the staging forms. Troubleshooting fatal exceptions later If Francie encountered data exceptions that she could not ignore in the previous use case, she could still choose to ignore them for now and troubleshoot them later. To do this, Francie: - Clicks New to clear the exception data from view. - Runs any remaining data-load jobs. - Reruns the data-load job that produced the exception. - Troubleshoots the data exceptions.
https://docs.bmc.com/docs/itsm81/onboarding-wizard-use-cases-480252185.html
2020-10-20T03:51:37
CC-MAIN-2020-45
1603107869785.9
[]
docs.bmc.com
Reading and Writing from ElasticSearch¶ Elastic Search is often used for indexing, searching and analyzing datasets. Fire Insights makes it easy to read data from Elastic Search, clean it and transform it as needed. Elasticsearch-hadoop provides native integration between Elasticsearch and Apache Spark. In the example below we will first load data from HDFS into Elastic Search and then read it back into Apache Spark from Elastic Search. If your data is already in Elastic Search, skip to “Workflow for Reading data from Elastic Search”. Create a new empty workflow. Drag and drop the source dataset from which you want to load data into Elastic Search. If you don’t have a dataset for the source data, create one. Once the source processor is on the workflow canvas, drag and drop “SaveElasticSearch” processor in the workflow. Configure your Elastic Search processor in the dialog box shown below. After configuring “SaveElasticSearch” processor, connect your data source processor to Elastic Search processor. The example workflow below reads a Housing dataset which is in CSV format from HDFS. The ‘SaveElasticSearch’ takes in the incoming data and loads it into the Elastic Search Index ‘sparkflows/housing’. Workflow Execution¶ When the example workflow above is executed, it reads in the dataset from HDFS and saves it into Elastic Search. Reading data from Elastic Search¶ Reading data from Elastic Search is easy. Drag and drop ‘ReadElasticSearch’ process into your workflow and configure it. The screenshot below shows the dialog box for the Elastic Search Read processor. In the dialog above, ‘Refresh Schema’ button infers the schema of the index. Thus it is able to pass down the output schema to the next processor making it easy to build workflows. The SQL field specifies the SQL to be used for reading from Elastic Search. It allows you to limit the columns of interest, and apply where clauses etc. The Elastic Search processor understands the SQL and translates it into the appropriate QueryDSL. The connector pushes down the operations directly to the source, where the data is efficiently filtered out so that only the required data is streamed back to Spark. This significantly increases the query performance and minimizes the CPU, memory and I/O operations on both Spark and Elastic Search clusters. The example workflow below reads the data from the sparkflows/housing index in Elastic Search and prints out the first few lines.
https://docs.sparkflows.io/en/latest/tutorials/reading-writing/elasticsearch.html
2020-10-20T02:58:26
CC-MAIN-2020-45
1603107869785.9
[]
docs.sparkflows.io
Instances and AMIs<< Your instances keep running until you stop or terminate them, or until they fail. If an instance fails, you can launch a new one from the AMI. Instances An instance is a virtual server in the cloud. Its configuration at launch is a copy of the AMI that you specified when you launched the instance. about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instance Types After you. For more information about this limit, and how to request an increase, see How many instances can I run in Amazon EC2 Storage for your instance The root device for your instance contains the image used to boot the instance. For more information, see Amazon EC2 root device volume. Your instance may include local storage volumes, known as instance store volumes, which you can configure at launch time with block device mapping. For more information, see Block device mapping. After these volumes have been added to and mapped on your instance, they are available for you to mount and use. If your instance fails, or if your instance is stopped or terminated, the data on these volumes is lost; therefore, these volumes are best used for temporary data. To keep important data safe, you should use a replication strategy across multiple instances, or store your persistent data in Amazon S3 or Amazon EBS volumes. For more information, see Storage. Security best practices Use AWS Identity and Access Management (IAM) to control access to your AWS resources, including your instances. You can create IAM users and groups under your AWS account, assign security credentials to each, and control the access that each has to resources and services in AWS. For more information, see Identity and access management for Amazon EC2. Restrict access by only allowing trusted hosts or networks to access ports on your instance. For example, you can restrict SSH access by restricting incoming traffic on port 22. For more information, see Amazon EC2 security groups for Linux instances. Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege—only open up permissions that you require. You can also create different security groups to deal with instances that have different security requirements. Consider creating a bastion security group that allows external logins, and keep the remainder of your instances in a group that does not allow external logins. Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk. For more information, see Disable password-based remote logins for root. For more information about sharing AMIs safely, see Shared AMIs. Stopping and terminating instances You can stop or terminate a running instance at any time. Stopping an instance When an instance is stopped, the instance performs a normal shutdown, and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance usage while the instance is in a stopped state. A minimum of one minute is charged for every transition from a stopped state to a running state. If the instance type was changed while the instance was stopped, you will be charged the rate for the new instance type after the instance is started. All of the associated Amazon EBS usage of your instance, including root device usage, is billed using typical Amazon EBS prices. When an instance is in a stopped state, you can attach or detach Amazon EBS volumes. You can also create an AMI from the instance, and you can change the kernel, RAM disk, and instance type. Terminating an instance When an instance is terminated, the instance performs a normal shutdown. The root device volume is deleted by default, but any attached Amazon EBS volumes are preserved by default, determined by each volume's deleteOnTermination attribute setting. The instance itself is also deleted, and you can't start the instance again at a later time. To prevent accidental termination, you can disable instance termination. If you do so, ensure that the disableApiTermination attribute is set to true for the instance. To control the behavior of an instance shutdown, such as shutdown -h in Linux or shutdown in Windows, set the instanceInitiatedShutdownBehavior instance attribute to stop or terminate as desired. Instances with Amazon EBS volumes for the root device default to stop, and instances with instance-store root devices are always terminated as the result of an instance shutdown. For more information, see Instance lifecycle. AMIs Amazon Web Services (AWS) publishes many Amazon Machine Images a web service, your AMI could include a web server, the associated static content, and the code for the dynamic pages. As a result, after you launch an instance from this AMI, your web server starts, and your application is ready to accept requests.. The description of an AMI indicates the type of root device (either ebs or instance store). This is important because there are significant differences in what you can do with each type of AMI. For more information about these differences, see Storage for the root device. You can deregister an AMI when you have finished using it. After you deregister an AMI, you can't use it to launch new instances. Existing instances launched from the AMI are not affected. Therefore, if you are also finished with the instances launched from these AMIs, you should terminate them.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instances-and-amis.html
2020-10-20T03:13:52
CC-MAIN-2020-45
1603107869785.9
[array(['images/architecture_ami_instance.png', 'Launch multiple instances from an AMI'], dtype=object)]
docs.aws.amazon.com
Configure post-authentication Endpoint Analysis scan as a factor in Citrix ADC nFactor authentication On Citrix Gateway, Endpoint Analysis (EPA) can be configured to check if a user device meets certain security requirements and accordingly allow internal resources access to the user. The Endpoint Analysis plug-in downloads and installs on the user device when users log on to Citrix Gateway for the first time. If a user does not install the Endpoint Analysis plug-in on the user device or chooses to skip the scan, the user cannot log on with the Citrix Gateway plug-in. Optionally, users can be put in a quarantine group where the user gets limited access to internal network resources. Previously post-EPA was configured as part of session policy. Now it can be linked to nFactor providing more flexibility, as to when it can be performed. In this topic, EPA scan is used as a final check in a nFactor or multifactor authentication. User tries to connect to Citrix Gateway virtual IP address. A simple login page with user name and password field is rendered to user to provide login credentials. With these credentials, LDAP or AD-based authentication is performed at the back end. If successful, user is presented with a popup message to authorize EPA scan. Once user authorizes, EPA scan is performed and based on the success or failure of user client settings, user is provided access. PrerequisitesPrerequisites It is assumed that following configuration are in place. - VPN virtual server/gateway and authentication virtual server configurations - LDAP server configurations and associated policies Note: The setup can also be created through the nFactor Visualizer available in Citrix ADC version 13.0 and later. The following image shows mapping of policies and policy label. This is the approach used for configuration, but from right to left. Perform the following by using the CLI Create an action to perform EPA scan and associate it with an EPA scan policy. add authentication epaAction EPA-client-scan -csecexpr "sys.client_expr (\"app_0_MAC-BROWSER_1001_VERSION_<=_10.0.3\")||sys.client_expr(\"os_0_win7_sp_1\")" The above expression scans if macOS users have browser version less than 10.0.3 or if Windows 7 users have Service pack 1 installed. add authentication Policy EPA-check -rule true -action EPA-client-scan Configure policy label post-ldap-epa-scan that hosts the policy for EPA scan. add authentication policylabel post-ldap-epa-scan -loginSchema LSCHEMA_INT Note: LSCHEMA_INT is an in-built schema with no schema (noschema), meaning no additional webpage is presented to user at this step. Associate policy configured in step 1 with policy label configured in step 2. bind authentication policylabel post-ldap-epa-scan -policyName EPA-check - priority 100 -gotoPriorityExpression END Configure ldap-auth policy to and associate it with an LDAP policy which is configured to authenticate with a particular LDAP server. add authentication Policy ldap-auth -rule true -action ldap_server1 where ldap_server1 is LDAP policy and ldap-auth is the policy name Associate ldap-auth policy to authentication, authorization, and auditing virtual server with next step pointing to policy label post-ldap-epa-scan to perform EPA scan. bind authentication vserver MFA_AAA_vserver -policy ldap-auth -priority 100 -nextFactor post-ldap-epa-scan -gotoPriorityExpression NEXT Note: Pre-authentication EPA scan is always performed as the first step in nFactor authentication. Post-authentication EPA scan is always performed as the last step in nFactor authentication. EPA scans cannot be performed in between a nFactor authentication. Configuring using the nFactor VisualizerConfiguring using the nFactor Visualizer The above configuration can also be performed using nFactor Visualizer, which is a feature available on firmware 13.0 and later. NAvigate to Security > AAA-Application Traffic > nFactor Visualizer > nFactor Flow and click Add. Click + to add the nFactor flow. Add a factor. The name that you enter is the name of the nFactor flow. Click Add Schema to add a schema for the first factor and then click Add. Click Add Policy to add the LDAP policy. If the LDAP policy is already created, you can select the same. Note: You can create an LDAP policy. Click Add and in the Action field, select LDAP. For more details about adding an LDAP server, see) Click + to add the EPA factor. Leave the Add Schema section blank, to have the default no schema applied for this factor. Click Add policy to add the post auth EPA policy and action. EPA action: EPA policy: Click Create. After the nFactor flow is complete, bind this flow to the authentication, authorization, and auditing virtual server.
https://docs.citrix.com/en-us/citrix-adc/current-release/aaa-tm/how-to-articles/Configure-postauth-epa-scan-as-a-factor.html
2020-10-20T03:22:08
CC-MAIN-2020-45
1603107869785.9
[array(['/en-us/citrix-adc/media/postauth-epa-scan-as-nfactor1.jpeg', 'EPA scan as a final check-in nFactor or multifactor authentication'], dtype=object) array(['/en-us/citrix-adc/media/postauth-epa-scan-as-nfactor2.jpeg', 'Mapping of policies and policy label used in this example'], dtype=object) array(['/en-us/citrix-adc/media/postauth-epa-scan-as-nfactor3.jpeg', 'Nfactor flow representation in visualizer'], dtype=object) ]
docs.citrix.com
Corpus comes with the Visual Composer, a visual layout builder that allows you to forget about syntax of shortcodes and create multiple layouts within minutes without writing a single line of code! You can edit, delete, increase and decrease. Custom Heading This is what you read. You are able to select any Google font you like, set the size, the tag, align, color, line height and set the link URL. Dividers – Gaps With this element you can create a divider to better separate your elements and sections. Additionally, you can split you pages by using full-width dividers. Empty Space With this element you can empty space between the elements. and choose the icon color as well. Buttons The button element is an easy way to add a styled button to your page. Just choose the appropriate type(simple, outline), size, color and shape, fill out the other fields (text, link) and off you go! Icon Box! Media Box Media for Corpus means image, video or map. Combine one of these with title, text, link and you’re ready! Image Text With this element. Slider This element is not just a simple slider. Upload your images, select the navigation, the image size, speed control and more! Keep in mind that you can expand element to full width. Gallery The Gallery element has 3 different styles for showing your image galleries. Fitrows, Masonry and Carousel with multiple options for hovers, overlays, columns and more. You can also create full width galleries. Message Box With this element you create a message text with an icon (any of the font libraries) and background color. Google Map Give the address you like and your map is ready. Upload the marker you like, set the height and type. Don’t miss to expand your map to full width if you wish. Video You can just add a video(YouTube, Vimeo), even full width video. Accordion – Toggle This element creates an accordion panel that expands when the user clicks on the title to reveal more information. Tabs Simply add tabs as needed until you are ready. Tours Simply add the vertical tabbed content you wish. Testimonial This element creates a nicely slider out of your testimonials items. Go Testimonial > Testimonial Items and create your testimonials. Afterwards, add the tesimonial element and define the various settings like categories, speed, navigation and more.. Social Share With this element you can simply add social media icons anywhere in your pages. Team Member This element takes in a quick profile for a team member/employee and formats it attractively. Add the information you wish and select between two styles.. Typed Text Add your prefix text, your suffix text and add-style the Typed Text which makes your text moves. Interesting enough?.
https://docs.euthemians.com/tutorials/usage-of-corpus-elements/
2020-10-20T03:37:32
CC-MAIN-2020-45
1603107869785.9
[]
docs.euthemians.com
Getting Started¶ Registering for nixbuild.net is simple, and every account includes 50 free CPU hours of build time, so you can try out the service on your own terms, without committing to anything. Before filling out the registration form, you need to generate a new ssh key for authenticating with the service. Once registered, you can add and remove ssh keys as you like. The ssh key must be of type Ed25519. With OpenSSH, you can generate the key in the following way: ssh-keygen -t ed25519 -C my-key-comment -f my-nixbuild-key If you're using nix-daemon to run nix builds (which is usually the case), you shouldn't set a password on the ssh key. The key comment provided with the -C option will help you distinguish keys within nixbuild.net, if you add multiple ssh keys to your account, but is not used in any other way. Now you can fill out the registration form, providing the public part of your ssh key. The public key should have this syntax: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzLYWrTuAlSWsdTHkTwLGoIaXOq6vrifbBt/X060KwL my-key-comment When you've submitted the registration form, an activation link is emailed to you. After you've activated the account, follow the rest of this guide in order to setup your system for using nixbuild.net. If you run into any troubles when going through the guide, or during your future use of nixbuild.net, just ask [email protected] for help. You are also welcome to submit issue reports, questions or any other feedback here. SSH Configuration¶ If you use nix-daemon to run builds on your local machine you need to make the private ssh key available (with permissions 0600) to the user that nix-daemon runs as ( root). The easiest way to do this is to add the following entry to your system wide (or root user) ssh client config: Host eu.nixbuild.net PubkeyAcceptedKeyTypes ssh-ed25519 IdentityFile /path/to/your/private/key The PubkeyAcceptedKeyTypes ssh-ed25519 entry above is important, since it will keep ssh from using key types that nixbuild.net can't handle. Then, you need to add the public host key of eu.nixbuild.net to the known hosts file of the root user ( /root/.ssh/known_hosts). Alternatively, you can add it to the global known hosts file ( /etc/ssh/ssh_known_hosts). The reason you have to add the public host key manually is that when Nix sets up an ssh connection, it can't ask you to confirm the host key. You should add the following line to the known hosts file: eu.nixbuild.net ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIQCZc54poJ8vqawd8TraNryQeJnvH1eLpIDgbiqymM Verifying Shell Access¶ You should now verify that you can connect to the nixbuild.net shell, which is used to administer your account. Run the following command (using the root user via sudo, to emulate how nix-daemon works): sudo ssh eu.nixbuild.net shell You should be greeted by a screen looking similar to this:'. For convenience, you probably want to generate another ssh key and add to your nixbuild.net account now. This way, you can have one key owned by the root user, used by nix-daemon, and one key owned by your normal user, for accessing the nixbuild.net shell. Your user key can have a password or be managed by an ssh agent, since it will be used interactively. It must be of type Ed25519, and you need to have the PubkeyAcceptedKeyTypes ssh-ed25519 configured as above. To add a new ssh key, use the ssh-keys add command, like this: nixbuild.net> ssh-keys add --help ssh-keys add - Add a public ssh key to your account Usage: ssh-keys add KEY_TYPE KEY COMMENT Add a key Available options: KEY_TYPE Type of ssh key (must be 'ssh-ed25519') KEY The base64-encoded key byte sequence COMMENT A key comment, useful for identifying keys -h,--help Show this help text Example usage: ssh-keys add ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrqgdPd6EB9YOUPy5CRNc+3dDvkc5jc2ZXfBNusi3LF laptop The options match the format of OpenSSH public keys. Only ssh keys of type 'ssh-ed25519' are supported. WARNING! When adding a new public key, any person with access to the corresponding private key will gain immediate access to this account, and will be able to run builds and make changes to the account. Nix Configuration¶ You can now go ahead and configure nixbuild.net as a Nix remote build machine. As such, you can add it as an entry in your /etc/nix/machines file: eu.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark or simply use the --builders option of nix-build: --builders "ssh://eu.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark" The big-parallel,benchmark assignment is something that is called system features in Nix. You can use that as a primitive scheduling strategy if you have multiple remote machines. Nix will only submit builds that have been marked as requiring a specific system feature to machines that are assigned that feature. The number 100 in the file above tells Nix that it is allowed to submit up to 100 simultaneous builds to eu.nixbuild.net. Usually, you use this property to balance builds between remote machines, and to make sure that a machine doesn't run too many builds at the same time. This works OK when you have rather homogeneous builds, and only one single Nix client is using a set of build servers. If multiple Nix clients use the same set of build servers, this simplistic scheduling breaks down, since a given Nix client loses track on how many builds are really running on a server. However, when you're using nixbuild.net, you can set this number to anything really, since nixbuild.net will take care of the scheduling and scaling on its own, and it will not let multiple Nix clients step on each other's toes. You probably want to activate the builders-use-substitutes Nix option. This option allows nixbuild.net to download dependencies directly from cache.nixos.org. You can read more about how nixbuild.net uses binary caches here. Your First Build¶ Run the following build to verify that your configuration works as expected: nix-build \ --max-jobs 0 \ --builders "ssh://eu.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark" \ -I nixpkgs=channel:nixos-20.03 \ --expr '((import <nixpkgs> {}).runCommand "test${toString builtins.currentTime}" {} "echo Hello nixbuild.net; touch $out")' The output should look something like this: these derivations will be built: /nix/store/0alhvbxdqq4hakna5sp6l51180q2l1l9-test1598214851.drv building '/nix/store/0alhvbxdqq4hakna5sp6l51180q2l1l9-test1598214851.drv' on 'ssh://eu.nixbuild.net'... Hello nixbuild.net copying 1 paths... copying path '/nix/store/8zj14ysp2jd3whldi99hk501628a9gaf-test1598214851' from 'ssh://eu.nixbuild.net'... /nix/store/8zj14ysp2jd3whldi99hk501628a9gaf-test1598214851 That's it, now you can use nixbuild.net! Running Distributed Builds¶ Read through the Nix remote build documentation to get an understanding for how Nix performs remote builds. Everything there applies to nixbuild.net, since nixbuild.net just appears as an ordinary remote builder to Nix. When setting up remote builds it can be helpful to use the --max-jobs 0 Nix option, that explicitly disables local builds. That way, if remote builds are not working for some reason, this will not be obscured by local building. Nix prefers using remote builders before building locally. If you want to disable remote builds for a specific build session, you can provide nix-build with the option --builders ''. NixOS configuration¶ NixOS users can use a configuration similar to the one below to configure both ssh and Nix distributed builds: programs.ssh.extraConfig = '' Host eu.nixbuild.net PubkeyAcceptedKeyTypes ssh-ed25519 IdentityFile /path/to/your/private/key ''; nix = { distributedBuilds = true; buildMachines = [ { hostName = "eu.nixbuild.net"; system = "x86_64-linux"; maxJobs = 100; supportedFeatures = [ "benchmark" "big-parallel" ]; } ]; };
https://docs.nixbuild.net/getting-started/
2020-10-20T02:44:48
CC-MAIN-2020-45
1603107869785.9
[]
docs.nixbuild.net
Step 7: (Optional) Create Systems Manager service roles This topic explains the difference between a service role and a service-linked role for Systems Manager. It also explains when you need to create or use either type of role. Service role: A service role is an AWS Identity and Access Management (IAM) that grants permissions to an AWS service so that the service can access AWS resources. Only a few Systems Manager scenarios require a service role. When you create a service role for Systems Manager, you choose the permissions to grant in order for it to access or interact with other AWS resources. Service-linked role: A service-linked role is predefined by Systems Manager and includes all the permissions that the service requires to call other AWS services on your behalf. Currently, the Systems Manager service-linked role can be used for the following: The Systems Manager Inventory capability uses the service-linked role to collect inventory metadata from tags and resource groups. The Maintenance Windows capability can use the service-linked role in some situations. Other situations require a custom service role that you create, as described below. For more information about the service-linked role, see Using service-linked roles for Systems Manager. Create a service role You can create the following service roles as part of Systems Manager setup, or you can create them later. Service role for Automation Automation previously required that you specify a service elevated privileges. In this scenario, you can create a service role with elevated privileges and allow the user to run the workflow. Operations that you expect to run longer than 12 hours require a service role. If you need to create a service role and an instance profile role for Automation, you can use one of the following methods. Service role for maintenance window tasks To run tasks on your managed instances, the Maintenance Windows service must have permission to access those resources. This permission can be granted using either a service-linked role for Systems Manager or a custom service role that you create. You create a custom service role in the following cases: If you want to use a more restrictive set of permissions than those provided by the service-linked role. If you need a more permissive or expanded set of permissions than those provided by the service-linked role. For example, some actions in Automation documents require permissions for actions in other AWS services. For more information, see the following topics in the Maintenance Windows section of this user guide: Service role for Amazon Simple Notification Service notifications Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Systems Manager, you can configure Amazon SNS to send notifications about the status of commands that you send using the Run Command capability, or the status of tasks run in maintenance windows. You create a service role for Amazon SNS as part of the process of configuring the service for use with Systems Manager. After you complete this configuration, you choose whether to receive notifications for particular Run Command commands or maintenance windows tasks at the time you create each one. For more information, see Monitoring Systems Manager status changes using Amazon SNS notifications. Service role for a Systems Manager hybrid environment If you plan to use Systems Manager to manage on-premises servers and virtual machines (VMs) in what is called a hybrid environment, you must create an IAM role for those resources to communicate with the Systems Manager service. For more information, see Create an IAM service role for a hybrid environment. Continue to Step 8: (Optional) Set up integrations with other AWS services.
https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-service-role.html
2020-10-20T03:48:44
CC-MAIN-2020-45
1603107869785.9
[]
docs.aws.amazon.com
Using our Fleet Planner API Fleet Planner is a stateless API that finds the most optimal routes for a fleet of vehicles given a set of tasks. Unlike the Dispatch API, Fleet Planner doesn't retain state and will only re-optimize an existing set of tasks if the same set of tasks are modified and re-sent to Fleet Planner. This makes Fleet Planner optimal for cases when you have a pre-existing list of pickups/dropoffs and vehicles that need to be scheduled ahead of time. That being said, a potential stateful integration of Fleet Planner could involve maintaining states on one's own and continually calling Fleet Planner whenever said state changes. In this guide, we'll be running through a quick example use case of Fleet Planner where we feed the API a set of tasks/available vehicles and receive a plan. All that's required to retrieve the plan is one call to the Fleet Planner API.. Step 1: Get a recommendation plan for a fleet of vehicles and tasks. All we need to do here is send the Fleet Planner API information about the fleet of vehicles and tasks to complete. One thing to note is that you can also schedule a specific time window for the pickup/dropoff tasks if they need to be completed at a certain time. You can find a full list of parameters for vehicles/tasks in Fleet Planner in our API documentation here. curl --request POST '' \ --header 'Content-Type: application/json' \ --header 'X-Api-Key: $RIDEOS_API_KEY' \ --data '{ "optimizeFor": "RIDE_HAIL", "vehicles": { "vehicle-0": { "resourceCapacity": 4, "vehicleId": "vehicle-0", "position": { "latitude": 37.78861129958993, "longitude": -122.42121679763515 } } }, "tasks": { "task-0": { "resourcesRequired": 1, "pickupStep": { "position": { "latitude": 37.788710054546385, "longitude": -122.42034205962396 } }, "dropoffStep": { "position": { "latitude": 37.79878236715864, "longitude": -122.4222166856741 } } }, "task-1": { "resourcesRequired": 1, "pickupStep": { "position": { "latitude": 37.78883349777378, "longitude": -122.41859090561832 } }, "dropoffStep": { "position": { "latitude": 37.79900453502346, "longitude": -122.42068402876973 } } } } } ' Step 2: Get a recommendation plan for each vehicle/task And that's really all there is to it! The resulting plan for our Fleet Planner call gets returned in the response. { "recommendations": [ { "vehicleId": "vehicle-0", "planRecommendation": { "assignedSteps": [ { "taskId": "task-0", "stepType": "PICKUP", "remainingTime": 14.296854873316038, "pickup": {} }, { "taskId": "task-1", "stepType": "PICKUP", "remainingTime": 41.571288138543096, "pickup": {} }, { "taskId": "task-1", "stepType": "DROPOFF", "remainingTime": 246.31030744938874, "dropoff": {} }, { "taskId": "task-0", "stepType": "DROPOFF", "remainingTime": 267.3589796045157, "dropoff": {} } ] } } ], "unresolvedTasks": [] } Thanks for reading! If you need to manage a fleet that supports on-demand requests check out our dispatch APIs, or if you just need ETAs/paths check out our routing APIs.
https://docs.rideos.ai/fleet-planner/
2020-10-20T03:49:54
CC-MAIN-2020-45
1603107869785.9
[]
docs.rideos.ai
Creating Envelope Deformation Rigs Before adding deformations, you might want to use a default type of region of influence. You can set these parameters in the Rigging tool's properties - Once your element is selected, select the Rigging tool in the Deformation toolbar. - In the Tool Properties view, enable the Envelope mode. - Place the cursor where you want to start creating your envelope. - Press and hold the mouse button to create the point, then drag to towards the direction where you want your curve to bend to set the position of this curve's bezier handle, just as you would when drawing a curve using the Polyline tool —see Drawing with the Polyline Tool. - Working as you would when building a Curve deformer, continue adding control points around your shape. You can place your control points slightly outside of your contour line. -. - When you're ready to close Envelope deformer, hold down Alt and click on the first point of your deformation chain. NOTE: It's not recommended to use the Envelope deformers on bitmap images and textures.
https://docs.toonboom.com/help/harmony-14/premium/deformation/create-envelope-deformation-rig.html
2020-10-20T02:51:06
CC-MAIN-2020-45
1603107869785.9
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/HAR12/HAR12_Envelope_001.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/HAR12/HAR12_Envelope_002.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/HAR12/HAR12_Envelope_003.png', None], dtype=object) ]
docs.toonboom.com
.. At last if you already know Python but check the Scipy Lecture Notes
https://scikit-criteria.readthedocs.io/en/latest/tutorial/index.html
2020-10-20T03:18:09
CC-MAIN-2020-45
1603107869785.9
[]
scikit-criteria.readthedocs.io
IP host The IP host page displays the list of all the dynamic hosts, default hosts and manually added hosts. Hosts allow the entities to be defined once, which can be re-used in multiple referential instances throughout the configuration. For example, consider an internal mail server with an IP address 192.168.1.15. Rather than repeated use of the IP address while configuring security policies or NAT policies, it allows to create a single entity internal mail server as a hostname with an IP address 192.168.1.15. This host, internal mail server can then be selected in any configuration that uses host as a defining criterion. By using host name instead of numerical address, you only need to make changes in a single location, rather than in each configuration where the IP address appears. Using hosts, reduces the error of entering incorrect IP addresses, makes it easier to change IP addresses, and increases readability. You can group multiple entities performing the same function within a single hostname. - System hosts cannot be updated or deleted. - Dynamic hosts which are automatically added on creation of VPN remote access connections cannot be deleted. - Default hosts (IPv6 and IPv4) for remote access connection - ##ALL_RW, ##WWAN1, ##ALL_IPSEC_RW and ##ALL_SSLVPN_RW cannot be updated or deleted.
https://docs.sophos.com/nsg/sophos-firewall/17.5/Help/en-us/webhelp/onlinehelp/nsg/concepts/HostManage.html
2020-10-20T02:46:02
CC-MAIN-2020-45
1603107869785.9
[]
docs.sophos.com
Unauthenticated traffic When the firewall detects non-authenticated traffic from an IP address, STAS puts the address in learning mode and sends a request to the collector for user information. While in learning mode, the firewall drops the traffic generated by the address. When there is no response from the collector while in learning mode, STAS puts the address into unauthenticated status for one hour. It will try to log on again after one hour by going into learning mode. While in unauthenticated status, the firewall applies rules for unauthenticated traffic. Hosts not in the domain are not controlled by STAS and are considered unauthenticated by the firewall. Therefore, if the network contains any host which is not a part of the domain, create clientless users for these IP addresses. Doing so allows the firewall to treat the traffic from these IPs according to the associated clientless policies rather than dropping the traffic.
https://docs.sophos.com/nsg/sophos-firewall/18.0/Help/en-us/webhelp/onlinehelp/nsg/sfos/concepts/UnauthenticatedTraffic.html
2020-10-20T02:59:36
CC-MAIN-2020-45
1603107869785.9
[]
docs.sophos.com
Interacting with external arrays¶ Although Taichi fields are mainly used in Taichi-scope, in some cases efficiently manipulating Taichi field data in Python-scope could also be helpful. We provide various interfaces to copy the data between Taichi fields and external arrays. The most typical case maybe copying between Tachi fields and Numpy arrays. Let’s take a look at two examples below. Export data in Taichi fields to a NumPy array via to_numpy(). This allows us to export computation results to other Python packages that support NumPy, e.g. matplotlib. @ti.kernel def my_kernel(): for i in x: x[i] = i * 2 x = ti.field(ti.f32, 4) my_kernel() x_np = x.to_numpy() print(x_np) # np.array([0, 2, 4, 6]) Import data from NumPy array to Taichi fields via from_numpy(). This allows people to initialize Taichi fields via NumPy arrays. E.g., x = ti.field(ti.f32, 4) x_np = np.array([1, 7, 3, 5]) x.from_numpy(x_np) print(x[0]) # 1 print(x[1]) # 7 print(x[2]) # 3 print(x[3]) # 5 API reference¶ We provide interfaces to copy data between Taichi field and external arrays. External arrays refers to NumPy arrays or PyTorch tensors. We suggest common users to start with NumPy arrays. Interacting with NumPy¶ We provide interfaces to copy data between Taichi fields and NumPy arrays. External array shapes¶ Shapes of Taichi fields (see Scalar fields) and those of corresponding NumPy arrays are closely connected via the following rules: - For scalar fields, the shape of NumPy array is exactly the same as the Taichi field: field = ti.field(ti.i32, shape=(233, 666)) field.shape # (233, 666) array = field.to_numpy() array.shape # (233, 666) field.from_numpy(array) # the input array must be of shape (233, 666) - For vector fields, if the vector is n-D, then the shape of NumPy array should be (*field_shape, vector_n): field = ti.Vector.field(3, ti.i32, shape=(233, 666)) field.shape # (233, 666) field.n # 3 array = field.to_numpy() array.shape # (233, 666, 3) field.from_numpy(array) # the input array must be of shape (233, 666, 3) - For matrix fields, if the matrix is n*m, then the shape of NumPy array should be (*field_shape, matrix_n, matrix_m): field = ti.Matrix.field(3, 4, ti.i32, shape=(233, 666)) field.shape # (233, 666) field.n # 3 field.m # 4 array = field.to_numpy() array.shape # (233, 666, 3, 4) field.from_numpy(array) # the input array must be of shape (233, 666, 3, 4) Using external arrays as Taichi kernel arguments¶ Use the type hint ti.ext_arr() for passing external arrays as kernel arguments. For example: import taichi as ti import numpy as np ti.init() n = 4 m = 7 val = ti.field(ti.i32, shape=(n, m)) @ti.kernel def test_numpy(arr: ti.ext_arr()): for i in range(n): for j in range(m): arr[i, j] += i + j a = np.empty(shape=(n, m), dtype=np.int32) for i in range(n): for j in range(m): a[i, j] = i * j test_numpy(a) for i in range(n): for j in range(m): assert a[i, j] == i * j + i + j Note Struct-for’s are not supported on external arrays.
https://taichi.readthedocs.io/en/stable/external.html
2020-10-20T03:22:15
CC-MAIN-2020-45
1603107869785.9
[]
taichi.readthedocs.io
- Data Migration Configuration File Overview This document gives an overview of configuration files of DM (Data Migration). DM process configuration files inventory.ini: The configuration file of deploying DM using DM-Ansible. You need to edit it based on your machine topology. For details, see Edit the inventory.inifile to orchestrate the DM cluster. dm-master.toml: The configuration file of running the DM-master process, including the topology information of the DM cluster and the corresponding relationship between the MySQL instance and DM-worker (must be one-to-one relationship). When you use DM-Ansible to deploy DM, dm-master.tomlis generated automatically. Refer to DM-master Configuration File to see more details. dm-worker.toml: The configuration file of running the DM-worker process, including the upstream MySQL instance configuration and the relay log configuration. When you use DM-Ansible to deploy DM, dm-worker.tomlis generated automatically. Refer to DM-worker Configuration File to see more details. DM migration task configuration DM task configuration file When you use DM-Ansible to deploy DM, you can find the following task configuration file template in <path-to-dm-ansible>/conf: task.yaml.exmaple: The standard configuration file of the data migration task (a specific task corresponds to a task.yaml). For the introduction of the configuration file, see Task Configuration File. Data migration task creation You can perform the following steps to create a data migration task based on task.yaml.example: - Copy task.yaml.exampleas your_task.yaml. - Refer to the description in the Task Configuration File and modify the configuration in your_task.yaml. - Create your data migration task using dmctl. Important concepts This section shows description of some important concepts.
https://docs.pingcap.com/tidb-data-migration/stable/config-overview/
2020-10-20T03:04:04
CC-MAIN-2020-45
1603107869785.9
[]
docs.pingcap.com
Introduction Since the first version, PubCoder approach to code was that of trying to export clean code and don’t hide it to the final user, though understanding and writing code is an option and not a requirement when using the app. All PubCoder exports are based on web technologies (one XHTML, one CSS and one Javascript file per page, plus various shared code), and you have many ways of customizing the code or adding more. You can use the Code button in the project window to customize HTML headers, CSS or JavaScript code, both at a project-level and page-level: basically project-level custom code will be included in every generated HTML page. You can also use Smart Objects to add contents directly to the page DOM: at its core, a smart object is an blank DIV that you can fill with whatever you like, even an iframe. This is the best option to add custom content or embed external content. Finally, you also have a Run Javascript action that allows you to run custom javascript code as a PubCoder action, so you can combine it with other standard actions in PubCoder’s event/actions mechanism. Code Editor The Code Editor in PubCoder is really powerful; based on the widely adopted Ace, it. On the left, it displays up to three panels with snippets, double-click a snippet to append code to the code editor: - Snippets: a list of code snippets to quckly add ready-to-use code to include scripts, get objects, trigger events and more - Assets: quick access to Assets Library, double-click one to insert a reference to one of the assets in your project - Layers: visible only when editing page-level code, it lists all layers in your page, double-click one to insert a reference to an object On top of the editor, you can find a toolbar with buttons for the various code editor functionalities. Finally, when editing XHTML, the editor automatically checks that the code entered is valid XHTML, giving a warning if it is not. PubCoder Framework In Depth In this section, we analyze more deeply the code outputted by PubCoder to create your pages and objects, so that you can undestand how to how interact with it using your own code. Page Structure For each page, PubCoder outputs a folder containing three files: Here’s an example of the HTML page file for an “Hello World” page made with PubCoder, that is a page containing only a text box saying “Hello, world!” <!DOCTYPE html> <html xmlns="" xmlns: <head> <meta name="generator" content="PubCoder 3.3.0.970 for OS X" /> <meta charset="utf-8"/> <title>1</title> <meta name="viewport" content="width=736, height=414" /> <!-- import general CSS --> <link rel="stylesheet" type="text/css" href="../../css/general.styles.css" /> <!-- import page-specific CSS styles --> <link rel="stylesheet" type="text/css" href="styles.css" /> <!-- import jquery --> <script type="text/javascript" src="../../js/jquery.js"></script> <!-- more javascripts imports here... --> <!-- import actions and page-specific javascript --> <script type="text/javascript" src="actions.js"></script> </head> <body> <div class="SCPage SCPage1" dir="ltr"> <div class="SCOverlay SCOverlay1"> <!-- objects in the page overlay will go here --> </div> <div class="SCContent"> <!-- page objects will go here, like our "Hello World" text --> <div id="obj4" class="SCPageObject SCText"> <div id="obj4_content" class="SCTextContainer SCTextVAlignMiddle"> <p>Hello, world!</p> </div> </div> </div> </div> </body> </html> As you can see the code is very clean & simple: after declaring some namespaces and meta tags, we import the required CSS and Javascript, both generic code and code specific to your page in styles.css and actions.js. The structure of the body provides a container div for the page, which always has classes SCPage and SCPage<number>, which encapsulates the two layers of your page: one with the contents of the overlay page (in a div with class SCOverlay, which here is empty) and another one, below the overlay, with the actual contents of your page (in a div with class SCContent). As you can see, inside the .SCContent container we finally have our text object with class SCText. XHTML vs HTML Though we often use the terms HTML and XHTML indifferently throughout the documentation, PubCoder pages are actually XHTML files, since this format ensure compatibility with EPUB format and better tooling for code parsing and handling. XHTML is a family of XML languages which extend or mirror versions of HTML. It does not allow omission of any tags or use of attribute minimization. Basically, XHTML requires that there be an end tag to every start tag and all nested tags must be closed in the right order. For example, while <br> is valid HTML, it would be required to write <br /> in XHTML. The code editor will warn you if you are writing non-XHTML code, but fixing the issue is up to you, but this should be pretty easy since you often simply need to: - Use empty-element syntax on elements specified as empty in HTML, e.g. use <br />instead of <br>and <img src="path/to/img.jpg" />instead of <img src="path/to/img.jpg"> - Include close tags for elements that can have content but are empty, e.g. <div></div> Container Objects The HTML code that PubCoder writes for an object is not always “obvious”, in the sense that some objects are directly mapped to their HTML counterparts, like a button object, which is mapped to an html button element: <button id="obj4" type="button" role="button" class="SCPageObject SCButton">Click Me</button> …while others are exported using also a “container” object, like the text in the example above, in which the div containing the text paragraphs is encapsulated in a container. This is done for different reasons for each object, for example in the text object we use the div.SCTextContainer node to support vertical alignment of text inside the text box. Here’s how an Image Object is exported instead: <div id="obj11" class="SCPageObject SCImage"> <img id="obj11_img" src="../images/obj11_image.jpeg" /> </div> You may expect to see a simple img tag, while we are actually encapsulating it inside a div.SCImage. This is very important to understand, since some properties like border and shadow settings or CSS classes and styles are always applied to the container labelled with the SCPageObject class, which sometimes incapsulates the real content (e.g. to the div instead of the img). Please be sure to check manual pages for the various layout objects, widgets and controllers to see an example of how they are exported. Take this into account also when writing custom css code: if you want to select Image Objects with a myClass CSS class on the page, simply use .SCImage.myClass, to select the img tag of Image objects with that CSS class, use .SCImage.myClass img instead. Custom HTML Headers PubCoder allows to add custom HTML headers, both on a page-level and project-level: just click the Code button in the project window and select Page ▹ HTML HEAD or Project ▹ HTML HEAD to open the Code Editor and add headers that will be appended to the head element of the current page or of every page, respectively. HTML Headers are the right place to import external scripts or CSS files. The Code Editor also gives you snippets to easily add import tags and scripts definitions in the right way. Custom CSS Code PubCoder allows to add custom CSS styles definitions, both on a page-level and project-level: just click the Code button in the project window and select Page ▹ CSS or Project ▹ CSS to open the Code Editor and add styles definitions that will be appended to the page style.css file of the current page or of every page, respectively. Custom CSS allow to define styles that you use throughout you page or project. Two snippets in particular are useful to add Character and Paragraph styles that, once defined, will appear in the Text Editor’s Styles menu so that you can apply them to your text. Paragraph styles are applied to entrire paragraphs, namely p tags; here’s an example of a paragraph style that increments paragraph spacing and sets a small-caps style to the entire paragraph: .SCText p.MyParagraphStyle { margin-bottom: 10px; font-variant: small-caps; } Character styles, instead, can be applied to whatever portion of text, namely span tags; here’s an example of a character style that makes text red and underlined with a dotted line: .SCText span.MyCharacterStyle { color: red; border-bottom: 1px dotted red; } Another example of a useful character style, is one that will make an inline image in the text float to one side so that text can flow around it, you can define a style like this and assign it to an inline image in the text editor like you would assign a style to any other portion of the text: .SCText span.Float-Left { float: left; } Another important thing that you can set using a custom CSS snippet is a special style for the Read Aloud functionality, here’s an example of one that will animate and enlarge word highlighting: .-epub-media-overlay-active, span.-epub-media-overlay-active, span.-epub-media-overlay-active > span { border-radius: 6px; transition: .1s ease-in-out; -webkit-transition: .1s ease-in-out; -moz-transition: .1s ease-in-out; -o-transition: .1s ease-in-out; } Last but not least, you can define custom CSS classes and apply them to the various objects in your page using their CSS Classes property. Custom Javascript Code and pubcoder.js PubCoder allows to add custom javascript, both on a page-level and project-level: just click the Code button in the project window and select Page ▹ JavaScript or Project ▹ JavaScript to open the Code Editor and add JavaScript code that will be executed when the page loads. As you may already know, this is not the only way to add JavaScript code, since you can also add it in Smart Objects or execute custom code via actions using a Run Javascript action, but this is probably the best place to insert initialization code or definitions that you use throughout your page or project. PubCoder bundles JQuery with your publications, so you can use it to get objects and interact with them. Just double-click a row on the Layers panel at the right of the Code Editor to append a snippet to get a reference to the object, for example in the following animation you can see how to build a script to trigger the Tap event on a button without directly typing any code: Another way of referencing objects via code is defining an Alias for the object in the object inspector. An alias is a mnemonic identifier specified by the user, to be used as an alternative to the numeric object ID. If an object has alias “MyObject”, you can access its DOM element using pubcoder.objects.MyObject or via JQuery using $(pubcoder.objects.MyObject). Aliases can contain only letters, numbers and the _ (underscore) characters, and must begin with a letter. Aliases can also be defined for pages, this is very useful to obtain the URLs of the various pages (you can find them in pubcoder.pages.MyPageAlias) and when you add code to the overlay page that must discriminate between the various page ( pubcoder.page.alias always contains the alias of the current page) The pubcoder object also offers various library functions that you can use to trigger PubCoder events and actions from code and a lot more. We call this library pubcoder.js, and we’re continuously improving this adding new features. Here’s a list of functions that can be accessed using pubcoder.functionName where functionName is one of the following: Using Action Lists to invoke PubCoder actions from code Action lists are controller objects that allow to define a list of actions. You can use a single line of JavaScript code to trigger the Run event of an action list, thus executing its actions from your custom script. Let’s say you have an action list on your page with ID #obj33, you can run it in JavaScript code using this command (look for the snippet Trigger Event Run Action List in the code editor): $("#obj33").trigger(PubCoder.Events.Run); This will execute actions in the Run event of the action list, immediately returning control to your script. You can see a more complex example in this article on Inside PubCoder, where this technique is used to achieve the following effect:
https://docs.pubcoder.com/pubcoder_code.html
2020-10-20T02:31:52
CC-MAIN-2020-45
1603107869785.9
[array(['images/pubcoder_code_editor.png', 'Code Editor'], dtype=object) array(['images/pubcoder_code_editor_snippets_js.gif', 'JavaScript Snippets'], dtype=object) array(['images/pubcoder_code_scrollingexample.gif', 'scrolling'], dtype=object) ]
docs.pubcoder.com
Centralizing logs from various technologies and applications tends to generate tens or hundreds of different attributes in a Log Management environment—especially when many teams’ users, each one with their own personal usage patterns, are working within the same environment. This can generate confusion. For instance, a client IP might have the following attributes within your logs: clientIP, client_ip_address, remote_address, client.ip, etc. In this context, the number of created or provided attributes can lead to confusion and difficulty to configure or understand the environment. It is also cumbersome to know which attributes correspond to the the logs of interest and—for instance—correlating web proxy with web application logs would be difficult. Even if technologies define their respective logs attributes differently, a URL, client IP, or duration have universally consistent meanings. Standard Attributes have been designed to help your organization to define its own naming convention and to enforce it as much as possible across users and functional teams. The goal is to define a subset of attributes that would be the recipient of shared semantics that everyone agrees to use by convention. Log Integrations are natively relying on the default provided set, but your organization can decide to extend or modify this list. The standard attribute table is available in Log Configuration pages, along with pipelines and other logs intake capabilities (metrics generation, archives, exclusion filters, etc.). To enforce standard attributes, administrators have the right to re-copy an existing set of non-standard attributes into a set of standard ones. This enables noncompliant logs sources to become compliant without losing any previous information. Typically, during a transitional period, standard attributes may coexist in your organization along with their non-standard versions. To help your users cherry-pick the standard attributes in this context, they are identified as such in the explorer (e.g. in the facet list, or in measure or group selectors in Analytics). If you are an administrator or prescriptor of the naming convention in your organization, you can take this opportunity to educate other users with standard attributes, and nudge them to align. The standard attribute table comes with a set of predefined standard attributes. You can append that list with your own attributes, and edit or delete existing standard attributes: A standard attribute is defined by its: Path: The path of the standard attributes as you would find it in your JSON (e.g network.client.ip) Type( string, integer, double, boolean): The type of the attribute which is used to cast element of the remapping list Description: Human readable description of the attribute Remapping list: Comma separated list of non-compliant attributes that should be remapped to standard ones The standard attribute panel pops when you add a new standard attribute or edit an existing one: Any element of the standard attributes can then be filled or updated. Note: Any updates or additions to standard attributes are only applied to newly ingested logs. After being processed in the pipelines, each log goes through the full list of standard attributes. For each entry of the standard attribute table, if the current log has an attribute matching the remapping list, the following is done: Important Note: By default, the type of an existing standard attribute is unchanged if the remapping list is empty. Add the standard attribute to its own remapping list to enforce its type. To add or update a standard attribute, follow these rules: userand user.namecannot both be standard attributes). The default standard attribute list is split into 7 functional domains: The following attributes are related to the data used in network communication. All fields and metrics are prefixed by network. Typical integrations relying on these attributes include Apache, Varnish, AWS ELB, Nginx, HAProxy, etc. The following attributes are related to the geolocation of IP addresses used in network communication. All fields are prefixed by network.client.geoip or network.destination.geoip. These attributes are related to the data commonly used in HTTP requests and accesses. All attributes are prefixed by http. Typical integrations relying on these attributes include Apache, Rails, AWS CloudFront, web applications servers, etc. These attributes provide details about the parsed parts of the HTTP URL. They are generally generated thanks to the URL parser. All attributes are prefixed by http.url_details. These attributes provide details about the meanings of user-agents’ attributes. They are generally generated thanks to the User-Agent parser. All attributes are prefixed by http.useragent_details. These attributes are related to the data used when a log or an error is generated via a logger in a custom application. All attributes are prefixed either by logger or error. Typical integrations relying on these attributes are: Java, NodeJs, .NET, Golang, Python, etc. Database related attributes are prefixed by db. Typical integrations relying on these attributes are: Cassandra, MySQL, RDS, Elasticsearch, etc. Performance metrics attributes. Datadog advises you to rely or at least remap on this attribute since Datadog displays and uses it as a default measure for trace search. All attributes and measures are prefixed by usr. These attributes are related to the data added by a syslog or a log-shipper agent. All fields and metrics are prefixed by syslog. Some integrations that rely on these are: Rsyslog, NxLog, Syslog-ng, Fluentd, Logstash, etc. All attributes and measures are prefixed by dns.
https://docs.datadoghq.com/ja/logs/processing/attributes_naming_convention/
2020-01-17T18:52:57
CC-MAIN-2020-05
1579250590107.3
[]
docs.datadoghq.com
If you want restrict access to your ezVIS, add an access key containing login and plain or sha1 subkeys. Using plain will bypass sha1 value. login is a username. plain is plain password. sha1 is the SHA-1 hash of the password (so that it will not be stored in the settings). Example for a pwd value of the password: "access": { "login": "user", "sha1" : "37fa265330ad83eaa879efb1e2db6380896cf639" } Warning: when you access the ezVIS report from the same machine as the one running the server, you will not be asked for your identity. This is to allow local:///protocol to work, even when not knowing the password (see corpusFields). Tip: to generate a SHA1, either use a Linux commande like sha1sumor shasum(be careful: don't integrate any carriage return, use ^Dat the end of plain password), or online services like SHA-1 online
https://ezvis.readthedocs.io/en/stable/Access/
2020-01-17T19:25:01
CC-MAIN-2020-05
1579250590107.3
[]
ezvis.readthedocs.io
Deferred Data Ingestion If the Source system does not contain any new Events to be ingested, Hevo defers the data ingestion for a pre-determined time. Hevo re-attempts to fetch the data only after the deferment period elapses. Hevo uses the ingestion results as feedback to decide the deferment period: If no Events are fetched from the Source during the first attempt, Hevo defers the data ingestion for a short duration of time. For example, five minutes. This time is referred to as the deferment period. Note: Hevo automatically assigns a value for the deferment period. Once the initial deferment period has elapsed, Hevo re-attempts to fetch the Events from the Source. If no Events are fetched in the subsequent attempt, Hevo increases the deferment period. For example, 10 minutes. Each time data ingestion gets deferred, the deferment period increases. Note: The maximum deferment period allowed for data ingestion is six hours. Therefore, the deferment period is not applicable if the data ingestion frequency is more than six hours. If Hevo fetches any Event(s) in any subsequent attempt, the deferment period is reset to zero. The following Sources support deferment of data ingestion: - Google Drive - Google Ads - Google Cloud - Google Play Console - DynamoDB - S3 - Salesforce - Zendesk - Databases (Except log-based fetching tasks) Overriding the Deferring of Data Ingestion You can request Hevo to never defer ingesting data from a Source. Contact Hevo Support for more details.
https://docs.hevodata.com/pipelines/pipeline-concepts/data-ingestion/deferred-data-ingestion/
2021-05-06T08:51:22
CC-MAIN-2021-21
1620243988753.91
[]
docs.hevodata.com
Unity provides some facilities to ease the debugging on Windows for forensic or live debugging of game or editor processes. First, clarity regarding debugging. There are two types of debugging that need addressing within Unity. There is the native C++ debugging as well as the C# managed debugging. For platforms supporting IL2CPP, there will be only native debugging, but managed debugging will stay for the editor for fast iteration purposes. Native Debugging is facilitated by having symbols (pdb files) for the associated binary files (exe or dll). On Windows, the standard .NET managed symbols are stored in pdb files as well, however when using mono, there are mdb files. Unity provides a symbol store at . This server URL can be utilized in windbg or VS2012 and later for automatic symbol resolution and downloading (much like Microsoft’s symbol store). The easy way to add a symbol store on windbg is the .sympath command. .sympath+ SRV*c:\symbols-cache* Let’s break that down: .sympath+ The + addition, leaves the existing symbol path alone, and appends this symbol store lookup SRV*c:\symbols-cache The SRV indicates a remote server to fetch from, while the c:\symbols is a local path to cache the downloaded symbols and to look there first before downloading again. * The path to the symbol store to fetch from Note: VS2010 and earlier do not function with http server symbol stores. 1. Go to Tools -> Options 2. Expand the Debugging Section, select Symbols 3. Specify a cache directory (if not already specified) 4. Add a “Symbol file (.pdb) location” of Live Debugging is the scenario of attaching a debugger to a process that is running normally, or to a process where an exception that has been caught. In order for the debugger to know what’s going on, the symbols need to be included in the build. That’s what the steps above should address. The one additional thing to know is that the game executable is named according to your game name, so the debugger may have issues finding the correct pdb if it doesn’t have access to your renamed executable. On Windows, Microsoft sets up automatically on application crashes to go to Dr Watson/Error Reporting to Microsoft. However, if you have Visual Studio or windbg installed, Microsoft provides a facility to savvy developers to instead opt to debug the crashes. For ease of installing, here’s a registry file contents to install: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug] "Auto"="1" [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\AeDebug] "Auto"="1" A little extra for editor debugging: Unity.exe -dbgbreak Will launch Unity and immediately offer a debugger to connect if the automatic crash handling is set up Windows provides facilities to investigate crash dump files (.dmp or .mdmp). Depending on the type of crash dump, there may simply be stack information or perhaps the entire process memory. Depending on the contents, various possibilities exist in seeing what may have happened to cause a crash. In the usual case, you often at least have a stack to investigate (if it’s a valid stack…) To investigate a dump file, your options are to load up via Visual Studio or windbg. While Visual Studio is a more friendly tool to use, its power is a bit more limited than windbg. A NullReferenceException will often look like this: This is not a crash in malloc, nor in mono - it’s a NullReferenceException that’s either: * Caught by the VS debugger * Unhandled in a user’s player, causing the player to exit With the previous example again: The lines without any information are managed frames. There is, however, a way to get the managed stack information: mono has a builtin function called mono_pmip, which accepts the address of a stack frame and returns a char* with information. You can invoke mono_pmip in the Visual Studio immediate window: ?(char*)mono.dll!mono_pmip((void*)0x1b45558c) 0x26a296c0 “ Tiles:OnPostRender () + 0x1e4 (1B4553A8 1B4555DC) [065C6BD0 - Unity Child Domain]”` Note: This only works where mono.dll symbols are properly loaded. Sometimes there are cases where the application doesn’t crash with the debugger attached, or an application crashes on a remote device where the debugger is not available. However, you can still get useful information if you can get the dump file - follow the below steps in order to do so. Note: These instructions are for Windows Standalone and Universal Windows Platform (when running on desktop). • 2017–05–16 Page amended with no editorial review
https://docs.unity3d.com/cn/2018.3/Manual/WindowsDebugging.html
2021-05-06T10:57:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.unity3d.com
挤出至光标¶ 参考 - 模式 编辑模式 - 快捷键 Ctrl-RMB Interactively places new vertices with Ctrl-RMB at the mouse cursor position. 在不选中其他顶点的情况下,使用 Ctrl-RMB 添加最基础网格元素,顶点。由于摄像头空间(电脑屏幕)是二维的,Blender无法根据确定鼠标单击产生的顶点坐标,所以新添加的顶点是放置在3D游标所确定的平面上的。 To create interconnected vertices, you can add a vertex and continuously make subsequent Ctrl-RMB operations with the last vertex selected. This will link the last selected vertex with the vertex created at the mouse position with an edge (see Fig. 逐个添加顶点。), and will continuously create and connect new vertices if you continue repeating this operation. 创建面¶ 如果选中了一条边的两个顶点, Ctrl-RMB 单击创建一个平坦的表面,也叫做四边形。Blender会按照鼠标指针位置,从视图的观察平面创建四边形。 For Ctrl-LMB, Blender will automatically rotate the last selected Edge (the source) for the subsequent operations if you have at least one face created, dividing the angles created between the newly created edge and the last two edges, creating a smooth angle between them. Blender will calculate this angle using the last positive and negative position of the last X and Y coordinates and the last connected unselected edge. If this angle exceeds a negative limit (following a quadrant rule) between the recently created edge and the last two, Blender will wrap the faces. But if you do not want Blender to rotate and smooth edges automatically when extruding from Ctrl-RMB, you can also inhibit Blender from rotating sources using the shortcut Shift-Ctrl-RMB. In this case, Blender will not rotate the source dividing the angle between those edges when creating a face. 在这两种情况下,Blender都会告知用户在创建过程中是否进行了旋转。如果看看 调整上一步操作 面板,并按下 Ctrl-RMB 后,可以发现旋转源选项已经自动勾选;如果用的是 Ctrl-Shift-RMB,则会自动取消勾选。 If you have three or more vertices selected, and Ctrl-RMB click, you will also create planar faces, but along the vertices selected, following the direction of the cursor. This operation is similar to an extrude operation. Tip When adding objects with Ctrl-RMB, the extrusions of the selected elements, being vertices, edges and faces with the Ctrl-RMB, are viewport dependent. This means, once you change your viewport, for example, from top to left, bottom or right, the extrusion direction will also follow your viewport and align the extrusions with your planar view.
https://docs.blender.org/manual/zh-hans/dev/modeling/meshes/tools/extrude_cursor.html
2021-05-06T10:24:20
CC-MAIN-2021-21
1620243988753.91
[]
docs.blender.org
Microsoft Teams analytics and reporting A new analytics and reporting experience for Microsoft Teams is available in the Microsoft Teams admin center. You can run different reports to get insights into how users in your organization are using Teams. For example, you can see how many users communicate through channel and chat messages and the kinds of devices they use to connect to Teams. Your organization can use the information from the reports to better understand usage patterns, help make business decisions, and inform training and communication efforts. How to access the reports To access the reports, you must be a global admin in Microsoft 365 or Office 365, Teams service admin, or Skype for Business admin. To learn more about Teams admin roles and which reports each admin role can access, see Use Teams administrator roles to manage Teams. Go to the Microsoft Teams admin center, in the left navigation, select Analytics & reports, and then under Report, choose the report you want to run. Note The reports in the Microsoft Teams admin center are separate from the activity reports for Teams that are part of the Microsoft 365 reports in the Microsoft 365 admin center. For more information about the activity reports in the Microsoft 365 admin center, see Teams activity reports in the Microsoft 365 admin center Teams reporting reference Here's a list of the Teams reports available in the Microsoft Teams admin center and an overview of some of the information that's available in each report. We're continually improving the Teams reporting experience and adding features and functionality. Over time, we'll be building additional capabilities into the reports and adding new reports in the Microsoft Teams admin center. and Teams device usage report anonymous, you have to be a global administrator. This will hide identifiable information such as display name, email and AAD ID in reports and their exports. In Microsoft 365 admin center, go to the Settings > Org Settings, and under Services tab, choose Reports. Select Reports, and then choose to Display anonymous identifiers. This setting gets applied both to the usage reports in Microsoft 365 admin center as well as Teams admin center. Select Save changes. Note Enabling this setting will de-identify information in Teams user activity report and Teams device usage report reports. It will not affect other usage reports available in Teams admin center.
https://docs.microsoft.com/en-us/microsoftteams/teams-analytics-and-reports/teams-reporting-reference?WT.mc_id=M365-MVP-4039827
2021-05-06T11:19:04
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
Understanding the Importance of SPF and DKIM for Domain Authentication Domain authentication is mandatory as part of email marketing, as it can affect email deliverability. Here you will learn about what SPF and DKIM are, and how they influence and play an important role in the email sending process. SPF and DKIM can save your email from Spam, spoofing, and phishing. Prior to SPF and DKIM, we will first learn what domain authentication is. Domain Authentication Definition Domain authentication is a necessary means of verifying that e-mail is sent to its owner. For example, suppose we have the mtarget.co domain. By authenticating the domain, the email we send with the mtarget.co domain will be accepted by the email client and won’t go to the spam box. Is it important to do domain authentication? Of course, if you are a business that sends out emails for the commercial reason or perhaps sends out transactional emails, then it is very important to use SPF and DKIM. This protocol will not only protect your business from sabotage who wants to commit fraud using your domain, but SPF and DKIM ultimately help protect your customer relationships and brand reputation. Then how do you do domain authentication? By setting the configuration in SPF and DKIM. Definition of Sender Policy Framework (SPF) Sender Policy Framework (SPF) is an authentication mechanism so that recipient servers can recognize email senders through their DNS servers. So, if someone sends an email by faking your data, the email will be rejected by the receiving server because it is not recognized. How SPF Works Basically, SPF specifies a method for receiving mail servers to verify that incoming mail from a domain is sent from a host authorized by the domain administrator. The following three steps explain how SPF works: - The domain administrator grants access to the e-mail server that is authorized to send e-mail from that domain. This access is called SPF, and is registered as part of the overall DNS records for the domain. - When the e-mail server receives incoming e-mail, the e-mail server then compares the e-mail sender’s IP address with the official IP address that has been registered as SPF. - Receiver email server then decides whether to accept, reject, or mark email messages. DomainKeys Identified Mail (DKIM) Definition DomainKeys Identified Mail (DKIM) is a method for verifying that message content is trustworthy and from a clear sender via the sender’s public or private key. Its function is to detect fraudulent sender identities and prevent malicious e-mails such as spam. How DKIM Works To put it simply, DKIM adds tags in email message headers. This tag is validated for encryption via public cryptographic key (Example: mt1.domainkey) by DNS records. - The domain owner will publish a cryptographic key. It is specifically formatted as a TXT record across domain DNS records. - After the message is sent by the outgoing e-mail server, the server generates and attaches a unique DKIM signature to the message header. - The DKIM key is then used by the incoming mail server to detect and decrypt the tag and compare it with the new version. If the values match then the message is proven authentic, so the email can be sent. Set SPF and DKIM In MTARGET, SPF and DKIM need to be set so that the MTARGET system has permission to send email with your domain user. So when sending an email, the receiving server can clearly identify you. For a complete tutorial on domain authentication, you can read here. If you are still having trouble setting up SPF and DKIM, our team is ready to help. Ask for problems via live chat which is active for 24 hours.
https://docs.mtarget.co/en/guide/guide-aboutauthenticationdomain/
2021-05-06T09:35:03
CC-MAIN-2021-21
1620243988753.91
[]
docs.mtarget.co
7. Topologie¶ 7.1. Übersicht. 7.2. Topologiefehler<< Abb. 7.11 ‚dangling nodes‘! 7.3. Topologie-Vorschriften).). 7.4. Topologische Werkzeuge¶ Many GIS applications provide tools for topological editing. For example in QGIS you can enable topological editing to improve editing and maintaining common boundaries in polygon layers. A GIS such as QGIS ‚detects. 7.5.). 7.6. Suchradius. 7.7. Bekannte Probleme / womit man rechnen muss. Was haben wir gelernt?¶ Lassen Sie uns zusammenfassen, was wir in diesem Arbeitsblatt behandelt haben: Topology shows the spatial relation of neighbouring vector features.. Snapping distance and search radius help us to digitise topologically correct vector data. Simple feature data is not a true topological data format but it is commonly used by GIS applications. 7.9. Versuchen Sie es selbst!¶ Hier sind einige Ideen für Sie, die Sie mit Ihren Lernenden versuchen sollten:? 7.10. Etwas zum nachdenken¶ If you don’t have a computer available, you can use a map of a bus or railway network and discuss the spatial relationships and topology with your learners. 7.11. Literaturhinweise¶ Bücher: Chang, Kang-Tsung (2006). Introduction to Geographic Information Systems. 3rd Edition. McGraw Hill. ISBN: 0070658986 DeMers, Michael N. (2005). Fundamentals of Geographic Information Systems. 3rd Edition. Wiley. ISBN: 9814126195 Webseiten: Das QGIS-Handbuch beinhaltet noch detailiertere Informationen über topologische Bearbeitung in QGIS .
https://docs.qgis.org/3.10/de/docs/gentle_gis_introduction/topology.html
2021-05-06T10:33:41
CC-MAIN-2021-21
1620243988753.91
[array(['../../_images/topology_errors.png', '../../_images/topology_errors.png'], dtype=object)]
docs.qgis.org
Remedyforce Discovery The following topics are provided: Features and options The following are the frequently asked questions (FAQs) about discovery features and options in BMC Remedyforce: Agentless discovery is included in the BMC Remedyforce license at no additional cost. You get the ability to natively and intuitively configure and populate the Remedyforce CMDB in few, easy steps. With the agentless discovery, you install agents (or scanners) on at least one device in your network. These devices (based on the configurations that you set) scan your network for devices and send the scanned devices information to the Remedyforce CMDB. Agentless discovery does not require an agent on each device. Both discovery options provide device details. The following table lists the additional capabilities provided by agent discovery: BMC Remedyforce Agentless Discovery empowers you to scan, identify, and manage devices on your network. With a simple and intuitive interface, setup is quick and easy. Once enabled and configured, your Remedyforce CMDB is populated with a wealth of device information including hardware configurations and software installations. The benefits of direct access to this information include: - Proactive management by automating discovery to know what is available in your network - Empower the support team - Increase first call resolution rates - Reduce support call times Yes, to perform the agentless discovery, at least one scanner must be installed. You can continue using your existing discovery tool. Note: To avoid possible record duplication, ensure that there is no overlap across the multiple discovery tools, discovering the same IP addresses. Remedyforce Client Management is an extension of the agentless discovery capabilities provided with BMC Remedyforce starting with the Summer 16 release. Remedyforce Client Management provides a range of advanced capabilities empowering you to more efficiently and proactively manage and support your devices. The capabilities include agent discovery, remote management, hardware and software compliance, software normalization, patch management and deployment management. In addition to these capabilities, the solution delivers “advanced actions” while empowering you to define rules and actions to ultimately become more proactive and reduce the number of support calls. For example, you could define an advanced action to monitor drive space and either automatically create an incident when a device hits a certain threshold or perform an action (for example, disk cleanup) to free up additional drive space. The BMC Remedyforce Client Management application server, also known as the Master server, is hosted as a unique instance in a server pool and has a single associated database instance that is used to store various data constructs. The Java based BMC Client Management administration console and devices under BMC Client Management connect to the application server through its public DNS name. This configuration allows administration of any child devices that have an active Internet connection. In addition, an on premise site relay can be optionally implemented as a local parent for up to 2000 site clients to reduce the amount of Internet traffic generated between the site and the hosted application server. Multiple licenses are available to enable discovery in BMC Remedyforce. The agentless discovery is free. For more information, see Supported discovery licenses and features. An on premise device behind an unmanaged firewall that can connect to a public Internet IP address and port will create a managed network tunnel used by only the client executable. The client maintained network tunnel allows for bi-directional traffic between the on premise client and the public facing hosted application server. BMC Remedyforce Client Management can manage several hundred thousand client workstations. The hosted application server (or the Master Server) can manage up to 5,000 simultaneous client connections. Each client can be either a standalone workstation or a dedicated on premise site relay. Each site relay can manage up to 2,000 client workstations using a parent–child hierarchy. Consider the following points while deciding an option suitable for you: - The new integration for the discovery feature replaces Pentaho with web services. As a result, no on premise component is required for the integration. - Currently, you cannot modify the mappings between discovery and Remedyforce, which is an option with Pentaho. BMC Remedyforce discovery and BMC Discovery provide agentless discovery, however, BMC Remedyforce Client Management provides a number of capabilities, such as software identification, metering, and compliance. BMC Discovery focuses on the datacenter, providing more in-depth discovery for specific datacenter environments. The following table highlights the primary differences: Yes. Once devices are saved as configuration items (CIs) or assets in BMC Remedyforce and shown in the Remedyforce CMDB tab, you can apply normalization rules and models on these devices. If an import process updates an existing device, you must reapply the normalization rules and perform the model synchronization. No. If you have enabled integration with BMC Client Management OnPremise 11.0 (its patches or earlier), you will not get the option to enable BMC Remedyforce Discovery. - Request the Remedyforce Discovery Server on the Remedyforce Administration > Configure CMDB 2.0 > Discovery Setup & Configuration page. - Contact your BMC representative. Passwords are encrypted. The data is not encrypted, but the data is isolated. BMC creates one user per customer and only this user can access their file system to ensure data security. Configuration The following are the FAQs about configuring discovery: You must install at least one scanner at each physical location behind your corporate firewall. The scanner scans 10 devices simultaneously. Each scan of 10 devices takes approximately 20 seconds. There are two primary scanning phases. The first phase is an initial scan to identify a device (for example, device name, IP address, device type). No credentials are required to perform this scan. The second phase is the inventory scan. This scan fetches more details from the device including hardware configuration and software installed. At least read only credentials are required to perform the inventory scan. The amount of concurrent threads is configurable for each scanner. The default is 10 at a time. If you are doing a lightweight discovery, it is very fast (approximately 5 seconds per device). If you are performing a full discovery including both Hardware and Software Inventory information, it could take anywhere from 30 seconds to 2 minutes per device. The impact to the network is very little because the inventory details packets are small (approximately 300,000). The discovery and inventory data is stored in the BMC Amsterdam datacenter, where once aggregated, it is passed to your Remedyforce CMDB. Industry standard PKI encryption technology is implemented to transmit data both from your on premise scanner to your BMC Remedyforce Discovery Server, and again from your hosted Discovery Server into your Remedyforce CMDB. This encryption technology is based on SSL/TLS encryption standards as detailed in the following IETF documents: - RFC 5280 implementation handles the certificate and trust process - RFC 5246 implementation handles the encryption and cypher negotiation The hosted application server uses a range of 10 TCP allocated during the server provisioning process. The hosted application server port ranges are static and cannot be altered. For premium licenses, ports 1610 and 1611 ports are required. The Assembly ID field determines uniqueness. This is a system-generated value from the discovery source. As a result, there is a risk of duplicate values when are also importing records from other sources (such as other discovery tools, manual entry, procurement feeds, advanced ship notices) because these sources do not share the same unique value. To avoid the risk of duplicate records with Remedyforce discovery, ensure the following with your discovered records: - The records do not already exist in your Remedyforce CMDB. - They will not be discovered and imported from other discovery sources. - These same records will not be manually created. If any of these conditions exist and have occurred, you will need to analyze and clean up your data to remove duplicate records. All discovered devices are stored in the Computer System class. If you select other classes (Operating System, LAN Endpoint, and Processor), additional records are created in these classes and relationships to the corresponding devices in the Computer System class are also created. No, the Remedyforce Agentless Discovery and Client Management capabilities leverage web services to replace the need to install and configure Pentaho or any other third-party tool. Note: If you need to modify the field mappings, consider the older integration with BMC Client Management. You can view the operating system, hardware configurations, and software inventory of the discovered devices. Staff members can run all CMDB actions, operational rules, and other enhanced capabilities that are entitled to you by your license. Also, you must assign correct access permissions and capabilities to the staff members. No. You can enable and configure discovery on a sandbox. However, you will have to enable and configure discovery again in your production organization. It takes some time to show a scanner in the Scanner Details list. Also, the number of scanners allowed depend upon the Remedyforce discovery license that you choose. If a device is not displayed in the Scanner Details list, verify the number of scanners allowed with your license and match it with the number of scanners displayed in the Scanner Details list. For more information, see the Scanner Roll-out tab on the Discovery Setup & Configuration page and Supported discovery licenses and features. Raise a case with BMC Support. Ensure that you are entering correct Remedyforce Discovery Server credentials. Use your Remedyforce Discovery Server credentials. Enter the administrator credentials to access the devices that you want to discover. No, you must enter only those credentials that are necessary for the protocols and devices that you want to scan. If you do not enter credentials for all the selected protocols, only the devices that do not require access credentials are discovered. Devices protected with password are not discovered.
https://docs.bmc.com/docs/remforce201601/en/remedyforce-discovery-620306599.html
2021-05-06T09:01:23
CC-MAIN-2021-21
1620243988753.91
[]
docs.bmc.com
Deleting Attributes When an attribute is deleted, it is removed from any related products and attribute sets. System attributes are part of the core functionality of your store and cannot be deleted. Before deleting an attribute, make sure that it is not currently used by any product in your catalog. edit mode. Click Delete Attribute. When prompted to confirm, click OK. Delete Attribute
https://docs.magento.com/user-guide/v2.3/stores/attribute-delete.html
2021-05-06T10:50:17
CC-MAIN-2021-21
1620243988753.91
[]
docs.magento.com
Currently, our main support channel is on the Embarcadero Newsgroups. As you may know, there is a specific forum for IntraWeb on the Embarcadero newsgroups server. For web access, use. Note that Embarcadero uses a self singed certificate and your browser will warn you about this. Accept it to enter the web forums. For NTTP access, use the instructions in. For NTTP access, use your prefered NTTP client, as for example Mozilla Thunderbird. We also run a IRC chat and you ocasionally can chat with the support team and other IntraWeb users. Free IRC Clients Bersirc - Small, lightweight and very nice! My personal favourite. Unfortunately Bersirc is DEAD DEAD DEAD. The client runs fine and its easy to use. But forget about docs (Site offline), source (site offline), or even its IRC channel (server offline)! Freenode Webchat - Join instantly without installing any software. IceChat - Open source, written in C#. HydraIRC - Website has been offline for several weeks, but supposedly very nice. Mibbit - Web based chat. Miranda Sea Monkey VisualIRC - Scriptable and with a lot of nice customization features. Although we try to support all users, paid users receive priority. As we also run our priority support on the Embarcadero forums, if you are a priority user you just need to identify yourself as such and our support team will give special attention to your posts. Before asking for support, we invite you to check our FAQ database. You can also search the Embarcadero newsgroups to check if your question has been answered before. Although the Newsgroups Search seems to be broken, you can use a personalized search using Google. On the Google search box type: "Your Intraweb query" site:forums.codegear.com "Your Intraweb query" site:forums.codegear.com If you already read the Documentation, the FAQ, searched the newsgroups and still did not find an answer for your question, you are almost ready for your post. Please read the following advices before posting:
http://docs.atozed.com/docs.dll/technical%20information/How%20to%20get%20support.html
2021-05-06T09:08:01
CC-MAIN-2021-21
1620243988753.91
[]
docs.atozed.com
For SNMP, the Server Manager uses the management framework provided by the following system vendors to discover and monitor additional key hardware performance data: Dell OpenManager Sun Management Center IBM Director HP Systems Insight Manager Each vendor provides proprietary MIBs that provide detailed performance metrics, such as fan speed, temperature and voltage, and power supply. For example, the Server Manager can identify when a redundant power supply has failed. The Server Manager accesses the target host systems by using specific administrative credentials for these management systems. The Server Manager analyzes this data against configured parameter values.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/esm-user-configuration-guide-10.1.0/GUID-984ABD4F-D5FF-4ADC-9C8E-1832E5903100.html
2021-05-06T10:57:37
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
: - Size: 50x50 - Opacity: 5 - Position: Tile Tiled Watermark Add watermarks to product images On the Admin sidebar, go to Content > Design > Configuration. Find the store view that you want to configure and click Edit in the Action column. Under Other Settings, expand the Product Image Watermarks section. Complete the Base, Thumbnail, Small, and Swatch Image image settings as follows. The fields in each section are the same. Enter the Image Opacity as a percentage. For example: 40 Enter the Image Size in pixels. For example: 200 x 200 Click Upload and choose the image file that you want to use. Set Image Position to determine where the watermark appears. Product Image Watermarks - Base When complete, click Save Config. When prompted to refresh the cache, click Cache Management in the system message and refresh the invalid cache. Refresh Cache You can click Use Default Value ( ) to restore the default value. Delete a watermark In the lower-left corner of the image, click the Delete ( ) icon. Delete Watermark Click Save Config. When prompted to refresh the cache, click Cache Management in the system message and refresh the invalid cache. If the watermark image persists in the storefront, return to Cache Management and click Flush Magento Cache.
https://docs.magento.com/user-guide/v2.3/catalog/product-image-watermarks.html
2021-05-06T10:52:09
CC-MAIN-2021-21
1620243988753.91
[]
docs.magento.com
In this document Terminology In this document the term NT80E3-2-PTP does not include NT80E3-2-PTP-8×10/2×40 SmartNICs, except in the title of the Hardware Installation Guide (DN-0980), the term NT80E3-2-PTP-NEBS does not include NT80E3-2-PTP-NEBS-8×10/2×40 SmartNICs, except in the title of the Hardware Installation Guide (DN-0981), the term NT20E2 does not include NT20E2-PTP SmartNICs, and the term NT4E does not include NT4E2-4-PTP, NT4E2-4T-BP and NT4E-STD SmartNICs. However, the term NT200C01-2 includes NT200C01-2-NEBS SmartNICs, the term NT100E3-1-PTP includes NT100E3-1-PTP-NEBS SmartNICs, the term NT80E3-2-PTP includes NT80E3-2-PTP-NEBS SmartNICs, the term NT80E3-2-PTP-8×10/2×40 includes NT80E3-2-PTP-NEBS-8×10/2×40 SmartNICs, the term NT40E3-4-PTP includes NT40E3-4-PTP-NEBS SmartNICs, the term NT20E3-2-PTP includes NT20E3-2-PTP-NEBS SmartNICs, and the term NT4E includes NT4E-4-NEBS SmartNICs, except in contexts where the NEBS SmartNICs are also mentioned specifically.
https://docs.napatech.com/r/n6hMMwNNPP2QfyjKhkVMNw/6_ns0BPbuy0LlCmRst2Lcw
2021-05-06T10:38:00
CC-MAIN-2021-21
1620243988753.91
[]
docs.napatech.com
To define a SubShader in ShaderLab, you use a SubShader block. This page contains information on using SubShader blocks. For information on how a Shader object works, and the relationship between Shader objects, SubShaders and Passes, see Shader objects introduction. A Shader object contains one or more SubShaders. SubShaders let you define different GPU settings and shader programs for different hardware, render pipelines, and runtime settings. Some Shader objects contain only a single SubShader; others contain multiple SubShaders to support a range of different configurations. In ShaderLab, you define a SubShader by placing a SubShader block inside a Shader block. Inside the SubShader block, you can: LODblock. See assigning a LOD value to a SubShader. Tagsblock. See ShaderLab: assigning tags to a SubShader. Passblock. See ShaderLab: defining a Pass. This example code demonstrates the syntax for creating a Shader object that contains a single SubShader, which in turn contains a single Pass. Shader "Examples/SinglePass" { SubShader { Tags { "ExampleSubShaderTagKey" = "ExampleSubShaderTagValue" } LOD 100 // ShaderLab commands that apply to the whole SubShader go here. Pass { Name "ExamplePassName" Tags { "ExamplePassTagKey" = "ExamplePassTagValue" } // ShaderLab commands that apply to this Pass go here. // HLSL code goes here. } } }
https://docs.unity3d.com/cn/2021.1/Manual/SL-SubShader.html
2021-05-06T10:58:36
CC-MAIN-2021-21
1620243988753.91
[]
docs.unity3d.com
Welcome to gpars - Groovy Parallel Systems gpars provides a set of Groovy DSLs for concurrent processing to offer Groovy developers intuitive ways to handle tasks concurrently This project has been formerly known as GParallelizer. Main Areas - Concurrent collection processing - Actor programming model - Distributed (remote) actors - Dataflow concurrency constructs Project's main values - clever and clean design - elegant APIs - flexibility through metaprogramming - application-level solutions that scale with number of cores - distribution through "scripting" - shipping groovy scripts over the wire
http://docs.codehaus.org/pages/viewpage.action?pageId=131432496
2014-08-20T06:58:14
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
public interface RepeatOperations RepeatCallback, where a single item or record is processed. The batch behaviour, boundary conditions, transactions etc, are dealt with by the RepeatOperationsin such as way that the client does not need to know about them. The client may have access to framework abstractions, like template data sources, but these should work the same whether they are in a batch or not. RepeatStatus iterate(RepeatCallback callback) throws RepeatException CompletionPolicy. callback- the batch callback. RepeatOperationscan continue processing if this method is called again. RepeatException
http://docs.spring.io/spring-batch/apidocs/org/springframework/batch/repeat/RepeatOperations.html
2014-08-20T07:15:28
CC-MAIN-2014-35
1408500800767.23
[]
docs.spring.io
@Target(value={METHOD,ANNOTATION_TYPE}) @Retention(value=RUNTIME) @Documented public @interface Scheduled Annotation that marks a method to be scheduled. Exactly one of the cron, fixedDelay, or fixedRate attributes must be provided. The annotated method must expect no arguments and have a void return type. Processing of @Scheduled annotations is performed by registering a ScheduledAnnotationBeanPostProcessor. This can be done manually or, more conveniently, through the <task:annotation-driven/> element or @ EnableScheduling annotation. EnableScheduling, ScheduledAnnotationBeanPostProcessor public abstract String cron "0 * * * * MON-FRI"means once per minute on weekdays (at the top of the minute - the 0th second). public abstract long fixedDelay public abstract long fixedRate
http://docs.spring.io/spring/docs/3.1.3.RELEASE/javadoc-api/org/springframework/scheduling/annotation/Scheduled.html
2014-08-20T06:59:07
CC-MAIN-2014-35
1408500800767.23
[]
docs.spring.io
BlackBerry Balance the BlackBerry Balance content available within the User Guides at Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/62526/mar1391184433778.jsp
2014-08-20T07:00:41
CC-MAIN-2014-35
1408500800767.23
[]
docs.blackberry.com
Message-ID: <867196879.26339.1408518321752.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_26338_1190130891.1408518321751" ------=_Part_26338_1190130891.1408518321751 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: This section (inputs welcome!) intends to provide data(symbols, = raster, ..) to simplify the development and build a community.=20 Geotools b= asic Symbol libraries=20 You can find misc font files to download :=20 You can use fonts for describing a symbol. Below you can find some font = files for this purpose... Put them in your c:/windows/fonts directory, and = you can use them directly by the ascii code.=20 Help needed to convert some of these symbols into svg, png or gif images= which could be used in SLD files too instead of fonts or ,better, to build= button images in the GUI.=20 The NRCS Symbol Palettes include line and marker symbols for Soil and Na= tural Resource mapping including the specific symbols for the SSURGO data. = There are three separate Palettes, NRCS Planning, NRCS SSURGO and NRCS Adho= c. This converts the symbols from the original NRCS Symbol Extension that w= as developed for ArcView 3.x to the ArcGIS environment. All line and marker= symbols can be customized (size and color) as the user needs.=20 The font file : NRCS Plannin= g
http://docs.codehaus.org/exportword?pageId=16748
2014-08-20T07:05:21
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
changes.mady.by.user Kalle Korhonen Saved on Mar 11, 2012 Saved on Mar 29, 2012 ... JDO has nine lives, tapestry-jdo rocks! Want BPM integration? We just introduced tapestry-activiti! Federatedaccounts now supports both Facebook and Twitter, check it out. Even more security add-on modules coming up, stay tuned. Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today.
http://docs.codehaus.org/pages/diffpages.action?pageId=230392027&originalId=228186171
2014-08-20T07:08:07
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
Submissions from 2012 Aligning Public Health, Health Care, Law and Policy: Medical-Legal Partnership as a Multilevel Response to the Social Determinants of Health, Elizabeth Tobin Tyler Submissions from 2010 Don't Do It Alone: A Community-Based, Collaborative Approach to Pro Bono, Laurie Barron, Suzanne Harrington-Steppen, Elizabeth Tobin Tyler, and Eliza Vorenberg Submissions from 2008 Allies Not Adversaries: Teaching Collaboration to the Next Generation of Doctors and Lawyers to Address Social Inequality, Elizabeth Tobin Tyler
http://docs.rwu.edu/law_feinstein_sp/
2014-08-20T06:49:45
CC-MAIN-2014-35
1408500800767.23
[]
docs.rwu.edu
Description / Features This plugin will make mark the build report as failed if at least one alert is raised during analysis. Thresholds for alerts are defined in Quality profiles, for example "coverage < 50%". - copy the JAR file in the directory /extensions/plugins - restart Sonar server - Define alert thresholds in the Quality profile of your project - Execute code analysis Changelog Usage -".
http://docs.codehaus.org/pages/diffpages.action?pageId=127107399&originalId=230394281
2014-08-20T06:50:42
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
The MMTk test harness allows you to run MMTk as a user-level application, step through it in a debugger and do any of the other standard Java debugging procedures that being the memory manager of JikesRVM prevents you from doing. The test harness incorporates a simple interpreted scripting language with just enough features to build interesting data structures, perform GCs and assert properties of them. The test harness is a recent development. Please don't expect it to be 100% bug-free just yet. Running the test harness The harness can be run standalone or via Eclipse (or other IDE). Standalone There is a collection of sample scripts in the MMTk/harness/test-scripts directory. In eclipse Define a new run configuration with main class org.mmtk.harness.Main Test harness options Options are passed to the test harness as 'keyword=value' pairs. The standard MMTk options that are available through JikesRVM are accepted (leave off the "-X:gc:"), as well as the following harness-specific options:
http://docs.codehaus.org/pages/viewpage.action?pageId=91979848
2014-08-20T06:57:35
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
Security Feature Overview Local Navigation Overview: BlackBerry Pushcast Software security Your organization can send content to audiences to distribute information, communications, and training. Authors use the BlackBerry® Pushcast™ Software to design content for BlackBerry® devices. Along with the increased demand for mobile content, organizations have security requirements that they consider when they evaluate mobile content solutions. Organizations want to know that the mobile content provider they select provides a security infrastructure that is designed to protect sensitive data so that information is not threatened by unauthorized users, theft, or misuse. Security is a consideration during the design, development, and delivery of all BlackBerry Pushcast Software components and infrastructure. The BlackBerry Pushcast Software implements several highly secure mechanisms, such as RBAC, instance level access control, roles and permissions, RSA® digital signatures, and SSL security to protect the information that it stores on its servers and sends to the BlackBerry Pushcast Player on devices. Was this information helpful? Send us your comments.
http://docs.blackberry.com/nl-nl/admin/deliverables/32591/Overview_1000641_11.jsp
2013-05-18T14:56:56
CC-MAIN-2013-20
1368696382450
[]
docs.blackberry.com
Description / Features This plugin enables the delegation of SonarQubeTM authentication to underlying PAM subsystem. The plugin works on *nix box assign the user to the desired groups in order to grant him necessary rights. If exists, the password in the SonarQubeTM account will be ignored as the external system password will override it. Works on)
http://docs.codehaus.org/pages/viewpage.action?pageId=231081904
2015-02-27T07:36:06
CC-MAIN-2015-11
1424936460577.67
[array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
User Guide Local Navigation You can purchase an optional headset to use with your BlackBerry smartphone. If you use a headset, you can use a headset button to answer or end a call, or to turn on or turn off mute during a call. Depending on your smartphone: Switch applications during a call Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/36023/About_using_a_headset_61_1596983_11.jsp
2015-02-27T07:58:01
CC-MAIN-2015-11
1424936460577.67
[]
docs.blackberry.com
A Joomla! Working Group is a group of people who are working towards one particular set of goals. For example, Joomla! currently has Working Groups for Development, Documentation, Translation and Sites & Infrastructure. Each working group has a specific set of goals, tasks, and responsibilities. Each Working Group usually has at least one Coordinator who is a member of the Joomla! Core Team. The management and administration of each Working Group is the primary responsibility of the Working Group’s Coordinator and because of that, the Coordinator is free to set up and run the Working Group in any way they see fit and that agrees with our Volunteer Code of Conduct. This allows for highly agile teams that can each decide which protocols, processes, communication methods, etc. are best for them. You can find out more information about the Joomla! Working Groups by visiting the following pages:
https://docs.joomla.org/index.php?title=What_is_a_%E2%80%9CWorking_Group%E2%80%9D_anyway%3F&diff=next&oldid=73632
2015-02-27T07:32:38
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
Help Center Local Navigation Tips Save time and maximize your efficiency with these quick tips. - Tips: Finding apps - Tips: Doing things quickly - Tips: Managing indicators - Tips: Extending battery life - Tips: Freeing and conserving storage space - Tips: Keeping your information safe - Tips: Updating your software - Visit the Setup application Next topic: Tips: Finding apps Previous topic: Feature availability Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/25326/Tips_60_1295787_11.jsp
2015-02-27T07:52:26
CC-MAIN-2015-11
1424936460577.67
[]
docs.blackberry.com
The JRuby community is pleased to announce the release of JRuby 1.1.2! Download: JRuby 1.1.2 is the second: - Startup time drastically reduced - YAML symbol parsing >100x faster - Performance, threading, and stack depth improvements for method calls - Fixed several nested backref problems - Fixed bad data race (JRUBY-2483) - Gazillions of bigdecimal issues fixed (all?) - 95 issues resolved since JRuby 1.1.1 JRUBY-672 java.lang.Class representation of Ruby class not retrievable JRUBY-1051 Rubinius bignum_spec failures JRUBY-1163 Doesn't allow 'included' to be protected JRUBY-1190 Cannot call protected constructors from an abstract base class JRUBY-1332 It should be possible to add a jar to the load path and have it act like a regular directory JRUBY-1338 Concurrent file uploads in Rails cause OOM errors with JRuby+Goldspike+Glassfish JRUBY-1386 instance_eval is a nightmarish can of worms; it needs to be completely refactored JRUBY-1387 define_method methods are pushing two frames onto the stack, among other inefficiencies JRUBY-1390 Calling super without args does not (always) pass original args JRUBY-1395 while loops and other protected constructs may require synthetic methods in the compiler JRUBY-1463 Java deserialization through java-integration is broken in JRuby JRUBY-1574 Extract into jruby.home from jar: url JRUBY-1582 Allow heap and stack to be set via environment variables JRUBY-1688 Problems with multiple arguments to Kernel#exec/system and Rake's FileUtils#sh JRUBY-1725 Gem installs a bad shebang on application scripts (like rails) JRUBY-1749 JRuby fails test/externals/bfts/test_time.rb on Japanese environment JRUBY-1753 while cases disabled with precompiled tests now runninng; known lackings in the compiler JRUBY-1767 JRuby needs a fast JSON library JRUBY-2041 Calling the attached method after 6 times returns nil JRUBY-2086 class cast exception randomly appears JRUBY-2230 Compiler emits exception-handling sections of code that can be reached through non-exceptional paths. JRUBY-2247 Object#methods is incorrect in some cases JRUBY-2265 BigDecimal outputs to_s("F") differently than MRI JRUBY-2267 in `method_missing': no id given (ArgumentError) (RubyKernel class) JRUBY-2318 $~/Regexp.last_match lost when evaluation is inside a block JRUBY-2347 Race condition in DRb: Socket not always closed in DRb.stop_service JRUBY-2348 FasterCSV's :auto option for row separator doesn't work in JRuby JRUBY-2370 JRuby startup time significantly slower than MRI JRUBY-2378 Hundreds of new rubyspec fiailures with BigDecimal JRUBY-2383 File.stat fails confusingly on large files JRUBY-2392 Problem marshalling time JRUBY-2418 protected method bug: plugin will_paginate shows symptoms JRUBY-2423 Avoid double copying data in ChannelDescriptor#read() JRUBY-2431 Rubygems under JRuby doesn't install BAT executable files on Windows JRUBY-2432 Rubygems under JRuby detects the ruby executable name incorrectly on Windows JRUBY-2434 Implement BigDecimal#sqrt JRUBY-2438 Support SQLite3 using JRuby JRUBY-2442 Each value of SCRIPT_LINES__ contains two redundant empty lines JRUBY-2444 NPE from o.j.r.scope.ManyVarsDynamicScope#getValue JRUBY-2445 Regression: jirb_swing broken, prints out to the stdin, not to the GUI JRUBY-2450 StringIO#gets should set $_ to nil when it runs out of lines JRUBY-2451 Cannot compile JRuby (regression of rev: 6565) JRUBY-2452 Predefined globals $_ and $~ handled incorrectly JRUBY-2453 Etc.getpwnam crashes JVM on Linux JRUBY-2458 Move jruby.properties to a proper package JRUBY-2459 Upgrade rubygems to version 1.1.1 JRUBY-2461 RubyGems are installing with incorrect shebang line JRUBY-2474 --debug for interpreted mode, --jdb for jdb JRUBY-2476 Rubygems fails with NameError: StringIO JRUBY-2477 ClassCastException org.jruby.RubyString cannot be cast to org.jruby.RubySymbol JRUBY-2478 InlineCachingCallSite perf degradation due to JRUBY-2477 fix JRUBY-2479 YAML Parse Error for Array of Hash of Hash JRUBY-2480 Ruby object passed to Java method impl passed back to Ruby method impl loses original ruby instance JRUBY-2482 ClassCastException in RubyThreadGroup.add JRUBY-2483 PatternCache data race in RubyRegexp#initialize JRUBY-2485 Regression: Most BAT starter scripts are broken on Windows JRUBY-2486 rails --version command still broken JRUBY-2487 Bugs in REXML::Document JRUBY-2489 Regexp.last_match broken inside Enumerable's grep block JRUBY-2490 Initializing structs including Java interfaces crashes JRuby JRUBY-2491 File.umask with no argument sets umask to 0 JRUBY-2492 Add --debug option explanation in RubyInstanceConfig JRUBY-2493 Classpath changes for workspace in eclipse JRUBY-2494 REXML unusable from multiple threads: java.lang.ClassCastException: org.jruby.RubyString JRUBY-2499 Parser bug with :do JRUBY-2502 Major regression in Array#pack JRUBY-2503 variance from MRI: Module.new expects zero block params JRUBY-2509 URI::HTTP.build behave incompatibly with MRI JRUBY-2510 JRuby crashes with -XstartOnFirstThread on carbon JRUBY-2511 Dir.pwd with non-ascii chars does not display correctly JRUBY-2512 YAML 10x slower loading Graticule data JRUBY-2514 JIT max and JIT threshold should be adjusted for improvements in JRuby over the past months JRUBY-2523 Deprecated StringScanner#getbyte is infinitely recursive JRUBY-2524 File.exists? "file:/" crashes jruby (I believe the actual cause is the file: prefix) JRUBY-2527 jruby -e chomp throws AbstractMethodError JRUBY-2530 Multiply-binding JRubyMethod's with arity (min:0, max:2) can't have block args JRUBY-2531 IO#seek= with non-fixnum vaule breaks JRuby (and rubyspec run) JRUBY-2533 NPE when using a closed Iconv object JRUBY-2536 Bignum#div should never return non-integer values, even if arg is Float JRUBY-2537 Fixnum rubyspec failures for methods with Bignum arguments JRUBY-2540 Two rubyspec failures for Complex JRUBY-2547 JRuby 1.1.1 can't install native gems like Mongrel, Hpricot, etc JRUBY-2549 Calling java.lang.Intger#method raises Exception JRUBY-2551 JavaProxyClassFactory and JavaClass should use getDeclaredConstructors to get all public/protected constructors JRUBY-2558 Rational#divmod follows MRI bug behavior JRUBY-2563 java.lang.NoSuchMethodError: org.jruby.Ruby.newFixnum(I)Lorg/jruby/RubyFixnum; happens when trying to access rails application JRUBY-2568 Float divided by BigDecimal incorrectly coerced to Fixnum JRUBY-2569 Specs to test method reflection and invocation JRUBY-2570 BigDecimal#to_f incorrectly handles negative zero JRUBY-2571 some IO constants not defined JRUBY-2572 File::FNM_SYSCASE defined incorrectly on non-Windows systems JRUBY-2573 Revision 6754 randomly dispatches the wrong method under multithreaded loads. JRUBY-2575 Regression on Windows: Can't execute jruby, with path constructed out of rbconfig's CONFIG entries JRUBY-2579 Yaml ParserException JRUBY-2580 Regression: yaml tests break JRuby hard
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=85295154
2015-02-27T07:48:13
CC-MAIN-2015-11
1424936460577.67
[]
docs.codehaus.org
Pages that link to "Klocwork Refactoring in Vim" From Insight-9.5 The following pages link to Klocwork Refactoring in Vim:View (previous 50 | next 50) (20 | 50 | 100 | 250 | 500) - Klocwork Refactoring (← links) - Refactoring for C/C++ in Vim (redirect page) (← links) - Klocwork Refactoring/ja (← links)
http://docs.klocwork.com/daiquiri/index.php?title=Special:WhatLinksHere&target=Klocwork+Refactoring+in+Vim&title=Special:WhatLinksHere&target=Klocwork+Refactoring+in+Vim
2015-02-27T07:28:47
CC-MAIN-2015-11
1424936460577.67
[]
docs.klocwork.com
Note: This article applies only to Joomla! version 1.0.x. For Administration FAQs for Joomla! version 1.5, see Category:Version 1.5 FAQ.. After logging into the Administration backend, go the module manager. A list of all the modules installed on your site will appear. Edit the one that says "Login Form" under Module Name ("mod_login" under the Type column). Under Parameters, enter the URL of page where you want to redirect successful logins to where it says "Login Redirection URL."! This will give a list of existing menu items. Note: For some component links you need to edit the new link in order to apply parameters. 1. Go to your site >> global configuration and change "allow user registration". You have two choices, either add a new super administrator or change the password stored in the data base. To do this you need to go to phpMyAdmin (or use a similar tool).
https://docs.joomla.org/index.php?title=Administration_FAQs_Version_1.0&diff=prev&oldid=73481
2015-02-27T07:55:26
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. 08:15(Page translation log) MATsxm (Talk | contribs) marked Help34:Extensions Template Manager Styles for translation m 08:15Help34:Extensions Template Manager Styles (diff; hist; +4) MATsxm
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130307121145&target=Help30%3ASite_Control_Panel
2015-02-27T08:44:17
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
Message-ID: <1492522717.1613.1425023360668.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_1612_505431686.1425023360668" ------=_Part_1612_505431686.1425023360668 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Hello, <application server> enthusiasts=20 This is <name> from the Codehaus CARGO team. I'm sending this shor= t e-mail to tell you about the immediate availability of CARGO 1.0.4.= =20.=20 Of course, <application server> is part of the servers supported b= y CARGO... In detail, supported versions include <supported versions>= .=20 Second question, what would you need CARGO for? Well, typical use cases = for CARGO are:=20 For more information, please visit p>=20 Sent to:=20
http://docs.codehaus.org/exportword?pageId=164626731
2015-02-27T07:49:20
CC-MAIN-2015-11
1424936460577.67
[]
docs.codehaus.org
Jetty is a project at the Eclipse Foundation. private support for your internal/customer projects ... custom extensions and distributions ... versioned snapshots for indefinite support ... scalability guidance for your apps and Ajax/Comet projects ... development services from 1 day to full product delivery Jetty @ Codehaus Wiki. Starting points - Jetty <= 6 Documentation Index @ codehaus - Jetty >= 7 Documentation Index @ eclipse - Jetty Hightide 6 Documentation Index - Jetty blogs - Commercial Support. - #jetty @irc.codehaus.org Webtide sponsor Jetty by employing the lead developers, and offer full professional Jetty support and web 2.0 consulting services. Jetty Powered Product Index Get listed! Email us your product details at [email protected] to join
http://docs.codehaus.org/pages/viewpage.action?pageId=217514026
2015-02-27T07:46:19
CC-MAIN-2015-11
1424936460577.67
[]
docs.codehaus.org
. Do I really need Java to analyze my C# projects? Actually you do not! The analysis can be triggered either by the SonarQube Runner or by Maven. Both are Java programs so what is the trick? The SonarQube Runner works fine with IKVM. IKVM is a very nice open source project which implements java in .NET. If you want to try it with the SonarQubeQube Runner downloads plugins jar files at the beginning of the analysis from the target SonarQube server instance. Hence we cannot pre-compiled all the jars files needed to run an analysis without changing the way the runner works. Right now it is not very useful to use IKVM with the SonarQube Runner. However, this looks very promising for a future Visual Studio integration
http://docs.codehaus.org/pages/viewpage.action?pageId=231735457
2015-02-27T07:31:08
CC-MAIN-2015-11
1424936460577.67
[array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/emoticons/wink.png', '(wink)'], dtype=object) ]
docs.codehaus.org
The Iterator Pattern allows sequential access to the elements of an aggregate object without exposing its underlying representation. Groovy has the iterator pattern built right in to many of its closure operators, e.g. each and eachWithIndex as well as the for .. in loop. For example: Results in the output: 1 2 3 4 May=31 Mar=31 Apr=30 java.awt.Color[r=0,g=0,b=0] java.awt.Color[r=255,g=255,b=255] Another example:.
http://docs.codehaus.org/display/GROOVY/Iterator+Pattern
2015-02-27T07:34:43
CC-MAIN-2015-11
1424936460577.67
[]
docs.codehaus.org
Use the calculator If you are using your BlackBerry smartphone in landscape view, there are additional functions available for use. On the home screen or in the Applications folder, click the Calculator icon. - If you are using your smartphone in portrait view, to use the alternate function on a key, press the Arrow key. Press a key on the calculator. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/38346/1578810.jsp
2015-02-27T07:50:28
CC-MAIN-2015-11
1424936460577.67
[]
docs.blackberry.com
influx export stack The influx export stack command exports all resources associated with a stack as a template. All metadata.name fields remain the same. To export resources as a template, you must use the Operator token created for the initial InfluxDB user or an All-Access token. For information about creating an All-Access API token, see Create an API token. Usage influx export stack <stack_id> [flags] Flags Examples Authentication credentials The examples below assume your InfluxDB host, organization, and token are provided by the active influx CLI configuration. If you do not have a CLI configuration set up, use the appropriate flags to provide these required credentials. Export a stack as a template influx export stack $STACK_ID.
https://docs.influxdata.com/influxdb/cloud/reference/cli/influx/export/stack/
2022-05-16T18:43:42
CC-MAIN-2022-21
1652662512229.26
[]
docs.influxdata.com
Internet Explorer 11 desktop application ending support for certain operating systems Originally published: May 20, 2021 Please go here to search for your product's lifecycle. Internet Explorer (IE) 11 is the last major version of Internet Explorer. Starting June 15, 2022, the Internet Explorer 11 desktop application will no longer be supported on certain versions of Windows 10*. Customers are encouraged to move to Microsoft Edge, which provides support for legacy and modern websites and apps. For organizations with a dependency on legacy Internet Explorer-based sites and apps, sites will need to be configured to open in Microsoft Edge using Internet Explorer (IE) mode. Internet Explorer mode in Microsoft Edge enables backward compatibility and will be supported through at least 2029. Additionally, Microsoft will provide notice one year prior to the retirement of IE mode. Go here for a list of Microsoft Edge supported operating systems. See the following resources for more information: - Internet Explorer 11 end of support and IE mode announcement - IE mode website - IE announcement technical FAQ - Microsoft Edge and IE Lifecycle FAQ - Windows OS support dates - Internet Explorer supported operating systems - Microsoft Edge supported operating systems * Go here for a list of supported operating systems for Internet Explorer, including versions impacted by this announcement.
https://docs.microsoft.com/en-us/lifecycle/announcements/internet-explorer-11-end-of-support
2022-05-16T19:45:25
CC-MAIN-2022-21
1652662512229.26
[]
docs.microsoft.com
API Microversions¶ Background¶ Zun uses a framework we call has as its value a string containing the name of the service, container, and a monotonically increasing semantic version number starting from 1.1. The full form of the header takes the form: OpenStack-API-Version: container 1.1 If a user makes a request without specifying a version, they will get the BASE_VER as defined in zun/api/controllers/versions.py. This value is currently 1.1 and is expected to remain so for quite a long time. When do I need a new Microversion?¶ A microversion is needed when the contract to the user is changed. The user contract covers many kinds of information such as: the Request the list of resource urls which exist on the server Example: adding a new container/{ID}/foo which didn’t exist in a previous version of the code the list of query parameters that are valid on urls Example: adding a new parameter is_yellowcontainer/{ID}?is_yellow=True the list of query parameter values for non free form fields Example: parameter filter_by takes a small set of constants/enums “A”, “B”, “C”. Adding support for new enum “D”. new headers accepted on a request the list of attributes and data structures accepted. Example: adding a new attribute ‘locked’: True/False to the request body the Response the list of attributes and data structures returned Example: adding a new attribute ‘locked’: True/False to the output of container/{ID} the allowed values of non free form fields Example: adding a new allowed statusto container/{ID} the list of status codes allowed for a particular request Example: an API previously could return 200, 400, 403, 404 and the change would make the API now also be allowed to return 409. See 2 for the 400, 403, 404 and 415 cases. changing a status code on a particular response Example: changing the return code of an API from 501 to 400. Note Fixing a bug so that a 400+ code is returned rather than a 500 or 503 does not require a microversion change. It’s assumed that clients are not expected to handle a 500 or 503 response and therefore should not need to opt-in to microversion changes that fixes a 500 or 503 response from happening. According to the OpenStack API Working Group, a 500 Internal Server Error should not be returned to the user for failures due to user error that can be fixed by changing the request on the client side. See 1. (except in 2). The reason why we are so strict on contract is that we’d like application writers to be able to know, for sure, what the contract is at every microversion in Zun. If they do not, they will need to write conditional code in their application to handle ambiguities. When in doubt, consider application authors. If it would work with no client side changes on both Zun versions, you probably don’t need a microversion. If, on the other hand, there is any ambiguity, a microversion is probably needed. - 2(1,2) The exception to not needing a microversion when returning a previously unspecified error code is the 400, 403, 404 and 415 cases. This is considered OK to return even if previously unspecified in the code since it’s implied given keystone authentication can fail with a 403 and API validation can fail with a 400 for invalid JSON request body. Request to url/resource that does not exist always fails with 404. Invalid content types are handled before API methods are called which results in a 415. When a microversion is not needed¶ A microversion is not needed in the following situation:. In Code¶ In zun/api/controllers/base.py we define an @api_version decorator which is intended to be used on top-level Controller methods. It is not appropriate for lower-level methods. Some examples: Adding a new API method¶ In the controller class: @base.Controller.api_version("1.2") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an OpenStack-API-Version of >= 1.2. If they had specified a lower version (or not specified it and received the default of 1.1) the server would respond with HTTP/406. Removing an API method¶ In the controller class: @base.Controller.api_version("1.2", "1.3") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an OpenStack-API-Version of >= 1.2 and OpenStack-API-Version of <= 1.3. If 1.4 or later is specified the server will respond with HTTP/406. Changing a method’s behavior¶ In the controller class: @base.Controller.api_version("1.2", "1.3") def my_api_method(self, req, id): .... method_1 ... @base.Controller.api_version("1.4") # noqa def my_api_method(self, req, id): .... method_2 ... If a caller specified 1.2, 1.3 (or received the default of 1.1) they would see the result from method_1, and for 1.4 or later they would see the result from (commonly accessed with pecan.request). Every API method has an versions object attached to the request object and that can be used to modify behavior based on its value: def index(self): <common code> req_version = pecan.request.version req1_min = versions.Version('', '', '', "1.1") req1_max = versions.Version('', '', '', "1.5") req2_min = versions.Version('', '', '', "1.6") req2_max = versions.Version('', '', '', "1.10") if req_version.matches(req1_min, req1_max): ....stuff.... elif req_version.matches(req2min, req2_max): ....other stuff.... elif req_version > versions.Version("1.10"): ....more stuff..... <common code> The first argument to the matches method is the minimum acceptable version and the second is maximum acceptable version. If the specified minimum version and maximum version are null then ValueError is returned. Other necessary changes¶ If you are adding a patch which adds a new microversion, it is necessary to add changes to other places which describe your change: Update REST_API_VERSION_HISTORYin zun/api/controllers/versions.py Update CURRENT_MAX_VERin zun/api/controllers/versions.py Add a verbose description to zun/api/rest_api_version_history.rst. There should be enough information that it could be used by the docs team for release notes. Update min_microversionin .zuul.yaml. Update the expected versions in affected tests, for example in zun/tests/unit/api/controllers/test_root.py. Update CURRENT_VERSIONin zun/tests/unit/api/base.py. Make a new commit to python-zunclient and update corresponding files to enable the newly added microversion API. If the microversion changes the response schema, a new schema and test for the microversion must be added to Tempest. Allocating a microversion¶ If you are adding a patch which adds a new microversion, it is necessary to allocate the next microversion number. Except under extremely unusual circumstances and this would have been mentioned in the zun spec for the change, the minor number of CURRENT_MAX_VER CURRENT_MAX_VER.
https://docs.openstack.org/zun/latest/contributor/api-microversion.html
2022-05-16T17:34:38
CC-MAIN-2022-21
1652662512229.26
[]
docs.openstack.org
Installation This installation guide describes the installation of VPMBench for Linux. We currently do not support Windows. Getting the Source You can download the current version of the VPMBench source from GitHub: $ git clone [email protected]:IDEA-PRIO/VPMBench.git Dependencies Python Version We recommend using the latest version of Python 3. VPMBench supports Python 3.6 or newer. You can check your Python version by running the following command in your terminal: $ python --version Python 3.9.1 These Python libraries will be installed automatically when installing VPMBench: Pandas implements a tabular data-structure. Pandera provides schema-based validations for Pandas PyYaml implements a YAML-Parser Docker-SDK lets you do anything dockercommand does Scikit-learn implements a bunch of machine-learning algorithms Numpy provides a large collection of mathematical functions PyVCF implement a parser for VCF-files Docker VPMBench requires Docker to run the variant prioritization methods. Therefore, you have to ensure that you have the proper rights to run Docker commands as the current user. You can easily check this by running: $ an error occurs check the Docker Documentation and try again. Automatic Installation In the repository, we provide an installation script install.sh. Using the installation script, you have to answer questions, e.g., for the plugin directory. Currently, we support the automatic installation of CADD and fathmm-MKL. Please make sure that Docker is installed and that enough disk space is available to install the plugin. If you want to install fathmm-MKL, tabix needs to be installed on your machine: $ sudo apt install tabix $ tabix --version tabix (htslib) 1.10.2-3 Copyright (C) 2019 Genome Research Ltd. After answering the questions, you will see an overview of the selected files before the download and installation starts. $ chmod +x install.sh $ ./install.sh ##################################### Guided Installation for VPMBench-v0.1 ##################################### The following questions will guide you through selecting the files and dependencies needed for VPMBench. After this, you will see an overview before the download and installation starts. Where do you want to store the plugins? [~/VPMBench-Plugins] Do you want to test if Docker works? (y)/n > Assuming YES. Do you want to copy the provided plugin to /home/andreas/VPMBench-Plugins? (y)/n > Assuming YES. Do you want to install the provided plugins (Warning: Might take a while)? (y)/n > Assuming YES. Do you want to install CADD (~ 200GB, Warning: Sometimes the installation seems to fail for no obvious reasons)? (y)/n > Assuming YES. Do you want to fathmm-MKL (~80GB)? (y)/n > Assuming YES. Do you want to do a test run after the installation? (y)/n > Assuming YES. Summary ======== * Plugin Path: /home/andreas/VPMBench-Plugins * Test Docker: true * Copy provided plugins: true * Install provided plugins: true - Install CADD: true - Install fathmm-MKL: true - Test run: true Please make sure you have enough disk space available to install the plugins. Please make sure you have the rights to run docker and install python packages! Ready to continue? (y)/n The complete installation with the two provided plugins for CADD and fathmm-MKL might take 2-3h and uses about 300GB of your disk space. So it’s enough time to drink a coffee or two. Manual Installation While we recommend using the automatic installation procedure, you can also install VPMBench following the following steps. Step 1 - Create a plugin directory By default VPMBench expects the plugins to be installed in the VPMBench directory in the home directory of the current user. You can create the directory via: $ mkdir ~/VPMBench-Plugins Step 2 - Install VPMBench To install VPMBench, simply run this simple command in your terminal after entering VPMBench directory: $ cd VPMBench $ pip install . We recommend installing VPMBench in its own virtual environment to prevent any conflicts with already installed Python libraries. After the installation you should be able to run the following command without errors: $ python -c "import vpmbench" Congratulations, you now can use VPMBench in your projects. You now might have a look at the Quickstart Guide or the API Documentation. Step 3 - Copy and install plugins (optional) The currently provided plugins can be found in the /plugin directory for the repository. To use these plugins you have to copy them to your plugin directory from Step 1. The following copies all plugins to the default plugin directory: $ cp plugins/* ~/VPMBench-Plugins After this, the plugins have to be installed. Therefore, each plugin directory contains its installation script install.sh which builds the Docker image and downloads the required files. After this, the plugins are ready to be used in VPMBench. Step 4 - Test the installed plugins (optional) To test the installed plugins you can run the script bin/after_install.py. Therefore, you have to provide a VCF-file as input and specify your plugin-path. $ python /bin/after_install.py tests/resources/test_grch37.vcf ~/VPMBench-Plugins During the execution of the script logging information is written to your terminal. The output should look like this: $ python /bin/after_install.py tests/resources/test_grch37.vcf ~/VPMBench-Plugins #### Run pipeline - Starting time: 10/03/2021 13:15:49 #### Extract data from tests/resources/test_grch37.vcf - Used extractor: <class 'vpmbench.extractor.ClinVarVCFExtractor'>! - Extracted Data: - UID CHROM POS REF ALT RG TYPE 0 0 1 865568 G A hg19 snp 1 1 1 949738 C T hg19 snp 2 2 1 949739 G A hg19 snp 3 3 1 955597 G T hg19 snp 4 4 1 955601 C T hg19 snp 5 5 1 20416314 G T hg19 snp 6 6 1 20978410 T C hg19 snp 7 7 1 20978956 G A hg19 snp 8 8 1 20978971 C T hg19 snp #### Load plugins from ../VPMBench-Plugins - Absolute plugin path: /home/arusch/extern/VPMBench-Plugins - Found 3 plugins: ['fathmm-MKL (coding)', 'fathmm-MKL (non-coding)', 'CADD'] - Returning 3 filtered plugins: ['fathmm-MKL (coding)', 'fathmm-MKL (non-coding)', 'CADD'] #### Invoke methods - #CPUs: 11 - Invoke method: CADD - Invoke method: fathmm-MKL (coding) - Invoke method: fathmm-MKL (non-coding) - Finish method: fathmm-MKL (non-coding) - Finish method: fathmm-MKL (coding) - Finish method: CADD #### Calculate reports - Calculate Specificity - Calculate Sensitivity #### Stop pipeline - Finishing time: 10/03/2021 13:16:07 Sensitivity - fathmm-MKL (coding): 0.75 - fathmm-MKL (non-coding): 0.5 - CADD: 0.75 Specificity - fathmm-MKL (coding): 0.0 - fathmm-MKL (non-coding): 0.6 - CADD: 0.0
https://vpmbench.readthedocs.io/en/latest/user/install.html
2022-05-16T19:13:59
CC-MAIN-2022-21
1652662512229.26
[]
vpmbench.readthedocs.io
The detailed options of the user role, including user level. The user level determines the level of services that a user can enjoy within the permissions of the user's role. For example, an audience can choose to receive remote streams with low latency or ultra low latency. Levels affect prices. The codec that the Web browser uses for encoding. "vp8": Use VP8 for encoding. "h264": Use H.264 for encoding. Safari 12.1 or earlier does not support the VP8 codec. The channel profile. The SDK differentiates channel profiles and applies different optimization algorithms accordingly. For example, it prioritizes smoothness and low latency for a video call, and prioritizes video quality for a video streaming. The SDK supports the following channel profiles: "live": Sets the channel profile as live streaming. You need to go on to call setClientRole to set the client as either a host or an audience. A host can send and receive audio or video, while an audience can only receive audio or video. "rtc": Sets the channel profile as communication. It is used for a one-on-one call or a group call where all users in the channel can converse freely. The user role determines the permissions that the SDK grants to a user, such as permission to publish local streams, subscribe to remote streams, and push streams to a CDN address. You can set the user role as "host" or "audience". A host can publish and subscribe to tracks, while an audience member can only subscribe to tracks. The default role in a live streaming is "audience". Before publishing tracks, you must set the user role as "host". After creating a client, you can call setClientRole to switch the user role. Interface for defining the behavior of a web client. You need to configure it when calling the createClient method to create a web client.
https://docs.agora.io/en/All/API%20Reference/web_ng/interfaces/clientconfig.html
2022-05-16T18:12:42
CC-MAIN-2022-21
1652662512229.26
[]
docs.agora.io
Bitbucket Setup NoteWorks with Gemini version 6.4.1+ and upwards. Open your Bitbucket repository and navigate to 'Settings'. Click 'Hooks' and add a new POST hook. Ensure you specify the URL like below. GEMINI URL/api/saucery/bitbucket/codecommit?auth=AUTHCREDENTIALS Replace the AUTHCREDENTIALS with a base64 encoded apikey:username combination, where apikey is taken from the web.config. Usage When committing files into Bitbucket simply provide a Gemini item number like so. Notethe 'GEM:' prefix is mandatory and you can specify multiple Gemini items by comma-separating them as part of the commit comment message. All committed files and comments appear under Code Review as follows. The first time you click on a file you will need to enter your Bitbucket credentials.
https://docs.countersoft.com/bitbucket/
2022-05-16T19:05:45
CC-MAIN-2022-21
1652662512229.26
[]
docs.countersoft.com
Cellular Data Usage for Apps Large app downloads can use a significant amount of cellular data. You can control how cellular data is used by turning off cellular data for automatic app updates or for the App Store completely. This must be performed on the iPad or iPhone, as it cannot be done remotely. If high cellular data usage is a concern, disablingcan help limit app updates as a cause of cellular data use.Best practice workflows cover common scenarios; however, the following recommendations may not apply in your environment. If you have unlimited data plans for your devices and want to see large apps install and update over cellular, you can manually change device settings to automatically install updates to installed apps larger than 200 MB with cellular. In Ask If Over 200 MB to Always Allow. This cannot be remotely configured by Jamf Now, it needs to be done on each device manually.change A Jamf Fundamentals account is able to deploy a Network App Usage payload created in iMazing Profile Editor or Apple Configurator 2 using a custom profile to force managed apps to only use WiFi, and disable the managed apps from using cellular for internet. For more information, see Custom Profiles in the Jamf Now Documentation .
https://docs.jamf.com/jamf-now/documentation/Cellular_Data_Usage_for_Apps.html
2022-05-16T18:50:31
CC-MAIN-2022-21
1652662512229.26
[]
docs.jamf.com
Terminology An alarm is a fault or a problem that is triggered by one or more events. A situation is used to represent a synthetic "master" alarm which is created by the correlation engine. A situation is the root of the alarm causality tree, whereby the situation is caused by one or more "child" alarms which may in turn be caused by other child alarms and so on. Situations should only be created if there are two or more alarms in the tree, otherwise there is a single alarm and there is no point in creating the situation. An inventory object (IO) is some abstract element to which alarms are related. Alarms can be related to zero or one inventory object. Inventory objects have relations to other inventory objects. Welcome Requirements
https://docs.opennms.com/alec/1.0.2/about/terminology.html
2022-05-16T17:46:55
CC-MAIN-2022-21
1652662512229.26
[]
docs.opennms.com