Dataset schema: content (string, 0-557k chars), url (string, 16-1.78k chars), timestamp (timestamp[ms]), dump (string, 9-15 chars), segment (string, 13-17 chars), image_urls (string, 2-55.5k chars), netloc (string, 7-77 chars)
Operators and functions for working with an array of bytes (buf type) are described here. The bool function returns false if the buffer is empty, otherwise, it returns true. The buf function converts a string to a buf value and returns it. The str function converts a buf value to a string and returns it. The Base64 function converts a value of the buf type into a string in base64 encoding and returns it. The DecodeInt function gets an integer from a parameter of buf type. offset is the offset in the buffer at which to read the number. The function reads 8 bytes and returns them as an integer. The Del function removes part of the data from the byte array. off is the offset of the data to be deleted, length is the number of bytes to be deleted. If length is less than zero, then the data will be deleted to the left of the specified offset. The function returns the b variable in which the deletion occurred. The EncodeInt function appends an integer number to a specified variable of buf type. Since the int value occupies 8 bytes, 8 bytes are appended to the buffer regardless of the i parameter value. The function returns the b parameter. The Hex function encodes a buf value to a hexadecimal string and returns it. The Insert function inserts an array of bytes src into the array b. off is the offset where the specified byte array will be inserted. The function returns the variable b. The SetLen function sets the size of the buffer. If size is less than the size of the buffer, then it will be truncated. Otherwise, the buffer will be padded with zeros to the specified size. The Subbuf function returns a new buffer that contains the chunk of the b buffer with the specified offset and length. The UnBase64 function converts a string in base64 encoding into a value of the buf type and returns it. The UnHex function returns the buf value represented by the hexadecimal string s. The input string must contain only hexadecimal characters. The Write function writes the byte array of the src variable into the b variable starting from the specified offset. The data is written over existing values. The function returns variable b.
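The functions above are part of the Gentee standard library. As a rough illustration of the Del, EncodeInt and DecodeInt semantics only (this is not Gentee code, and the 8-byte little-endian layout is an assumption made for the sketch), here is a small Python analogue:

    import struct

    def encode_int(b: bytearray, i: int) -> bytearray:
        # Append the integer as 8 bytes (little-endian assumed for illustration).
        b += struct.pack("<q", i)
        return b

    def decode_int(b: bytes, offset: int) -> int:
        # Read 8 bytes starting at offset and return them as an integer.
        return struct.unpack_from("<q", b, offset)[0]

    def delete(b: bytearray, off: int, length: int) -> bytearray:
        # A negative length deletes the data to the left of the offset.
        if length < 0:
            del b[off + length:off]
        else:
            del b[off:off + length]
        return b

    buf = bytearray()
    encode_int(buf, 2021)
    assert decode_int(buf, 0) == 2021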
https://docs.gentee.org/stdlib/buffer
2021-07-24T07:09:47
CC-MAIN-2021-31
1627046150134.86
[]
docs.gentee.org
To add a content slider, you can use the “Content Slider” module in Elementor or Visual Composer, or use the shortcode. You can find the details of the shortcode inside the Shortcode Helper window. 1) Adding with Elementor - Go to the element list and find the [RT] Slider element - Add slides using the “+ ADD ITEM” button and configure them. 2) Adding with Visual Composer - Click “Add New Element”, then select “Content Slider” - The first
https://docs.rtthemes.com/document/content-slider-4/
2021-07-24T07:27:55
CC-MAIN-2021-31
1627046150134.86
[]
docs.rtthemes.com
AppsFlyer RTDS Integration With the bi-directional integration between Airship and AppsFlyer, you can import attribution data from AppsFlyer to segment and personalize messages, as well as send events back to AppsFlyer via Airship. Airship ingests the media source, campaign, campaign ID, attributed Ad ID, or attributed adgroup AppsFlyer parameters as Airship attributes and then sends Airship Real-Time Data Streaming (a service that delivers engagement events in real time via the Data Streaming API or an Airship partner integration) Send and Open events back to AppsFlyer. This enables AppsFlyer to analyze Airship-driven message response. See our AppsFlyer partner page for more information and setup steps.
https://docs.airship.com/whats-new/2021-04-15-appsflyer-rtds-integration/
2021-07-24T08:25:14
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
The theme comes with a huge number of shortcodes that allow you to add pre-designed content blocks into the content area of any page. Some shortcodes can also be added to a Text or HTML Widget in a sidebar or into any text area. A shortcode gets processed once the page is viewed on the front end of your website. For example, a contact form shortcode will display a contact form. The already pre-generated shortcode in the top right of that window can be altered by following the explanation on the left. It is wise to adjust the variables while that window is open, before hitting the shortcode insert button. Once you copy the shortcode and close the shortcode screen, paste it into a text area at the location where you want the shortcode to be executed.
https://docs.rtthemes.com/document/shortcodes/
2021-07-24T07:35:38
CC-MAIN-2021-31
1627046150134.86
[]
docs.rtthemes.com
Feedback: Thank you for your feedback. Give feedback on this topic.
https://docs.us.sios.com/sps/8.6.4/ja/topic/deleting-a-sql-hiearchy
2021-07-24T08:09:28
CC-MAIN-2021-31
1627046150134.86
[]
docs.us.sios.com
#include <wx/withimages.h> A mixin class to be used with other classes that use a wxImageList. Sets the image list for the page control and takes ownership of the list. Return the image with the given index from the image list. If there is no image list or if index == NO_IMAGE, silently returns wxNullIcon. Returns the associated image list, may be NULL. Return true if we have a valid image list. Sets the image list to use. It does not take ownership of the image list, you must delete it yourself.
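As an illustration of how this mixin is typically used through a derived book control, here is a small wxPython sketch (the Python binding; the C++ calls are analogous). It uses AssignImageList, which hands ownership of the list to the control, whereas SetImageList would leave ownership with the caller:

    import wx

    app = wx.App()
    frame = wx.Frame(None, title="wxWithImages example")
    notebook = wx.Notebook(frame)

    # Build an image list and hand ownership to the control.
    images = wx.ImageList(16, 16)
    info_idx = images.Add(wx.ArtProvider.GetBitmap(wx.ART_INFORMATION, wx.ART_OTHER, (16, 16)))
    notebook.AssignImageList(images)

    # Pages can then reference images in the list by index.
    notebook.AddPage(wx.Panel(notebook), "First page", select=True, imageId=info_idx)

    frame.Show()
    app.MainLoop()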
https://docs.wxwidgets.org/3.1.5/classwx_with_images.html
2021-07-24T06:49:01
CC-MAIN-2021-31
1627046150134.86
[]
docs.wxwidgets.org
Recognition: Fixed Assets Section Use this section to specify fixed assets for recognition. Fixed assets that have to be recognized are added as line items. A line item is a line structure that contains the fixed asset and its details. It is composed of a number of fields, such as inventory number, fixed asset, initial value, salvage value and so on. There are three ways to add fixed assets for recognition; they are described in Recognition creation and Recognition: General area. Recognition: Adding fixed assets Recognition: General Area
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427396124
2021-07-24T07:58:15
CC-MAIN-2021-31
1627046150134.86
[]
docs.codejig.com
Deterministic Corda Modules A Corda contract’s verify function should always produce the same results for the same input data. To that end, Corda provides the following modules: core-deterministic serialization-deterministic jdk8u-deterministic These are reduced versions of Corda and kotlin-stdlib. Generating the Deterministic Modules The jdk8u-deterministic module must be generated and then built. (This currently requires a C++ compiler, GNU Make and a UNIX-like development environment.) core-deterministic and serialization-deterministic are generated from Corda’s core and serialization modules respectively using both ProGuard and Corda’s JarFilter Gradle plugin. Corda developers configure these tools by applying Corda’s @KeepForDJVM and @DeleteForDJVM annotations to elements of core and serialization as described here. The build generates each of Corda’s deterministic JARs in six steps: - A very few classes in the original JAR must be replaced completely. This is typically because the original class uses something like ThreadLocal, which is not available in the deterministic Java APIs, while the class is still required by the deterministic JAR. We must keep such classes to a minimum! - The patched JAR is analysed by ProGuard for the first time using the following rule: keep '@interface net.corda.core.KeepForDJVM { *; }' ProGuard works by calculating how much code is reachable from given “entry points”, and in our case these entry points are the @KeepForDJVM classes. The unreachable classes are then discarded by ProGuard’s shrink option. - The remaining classes may still contain non-deterministic code. However, there is no way of writing a ProGuard rule explicitly to discard anything. Consider the following class: @CordaSerializable @KeepForDJVM data class UniqueIdentifier @JvmOverloads @DeleteForDJVM constructor( val externalId: String? = null, val id: UUID = UUID.randomUUID() ) : Comparable<UniqueIdentifier> { ... } While CorDapps will definitely need to handle UniqueIdentifier objects, all of the secondary constructors generate a new random UUID and so are non-deterministic. Hence the next “determinising” step is to pass the classes to the JarFilter tool, which strips out all of the elements which have been annotated as @DeleteForDJVM and stubs out any functions annotated with @StubOutForDJVM. (Stub functions that return a value will throw UnsupportedOperationException, whereas void or Unit stubs will do nothing.) - After the @DeleteForDJVM elements have been filtered out, the classes are rescanned using ProGuard to remove any more code that has now become unreachable. - The remaining classes define our deterministic subset. However, the @kotlin.Metadata annotations of the compiled Kotlin classes still contain references to all of the functions and properties that ProGuard has deleted. Therefore we now use JarFilter to delete these references, as otherwise the Kotlin compiler will pretend that the deleted functions and properties are still present. - Finally, we use ProGuard again to validate our JAR against the deterministic rt.jar: This step will fail if ProGuard spots any Java API references that still cannot be satisfied by the deterministic rt.jar, and hence it will break the build. Configuring IntelliJ with a Deterministic SDK We would like to configure IntelliJ so that it will highlight uses of non-deterministic Java APIs as not found. Or, more specifically, we would like IntelliJ to use the deterministic-rt.jar as a “Module SDK” for deterministic modules rather than the rt.jar from the default project SDK, to make IntelliJ consistent with Gradle.
This is possible, but slightly tricky to configure because IntelliJ will not recognise an SDK containing only the deterministic-rt.jar as being valid. It also requires that IntelliJ delegate all build tasks to Gradle, and that Gradle be configured to use the Project’s SDK. Gradle creates a suitable JDK image in the project’s jdk8u-deterministic/jdk directory, and you can configure IntelliJ to use this location for this SDK. However, you should also be aware that IntelliJ SDKs are available for all projects to use. To create this JDK image, execute the following: $ gradlew jdk8u-deterministic:copyJdk Now select File/Project Structure/Platform Settings/SDKs and add a new JDK SDK with the jdk8u-deterministic/jdk directory as its home. Rename this SDK to something like “1.8 (Deterministic)”. This should be sufficient for IntelliJ. However, if IntelliJ realises that this SDK does not contain a full JDK then you will need to configure the new SDK by hand: - Create a JDK Home directory with the following contents: jre/lib/rt.jar where rt.jar here is this renamed artifact: <dependency> <groupId>net.corda</groupId> <artifactId>deterministic-rt</artifactId> <classifier>api</classifier> </dependency> - While IntelliJ is not running, locate the config/options/jdk.table.xml file in IntelliJ’s configuration directory. Add an empty <jdk> section to this file: <jdk version="2"> <name value="1.8 (Deterministic)"/> <type value="JavaSDK"/> <version value="java version "1.8.0""/> <homePath value=".. path to the deterministic JDK directory .."/> <roots> </roots> </jdk> Open IntelliJ and select File/Project Structure/Platform Settings/SDKs. The “1.8 (Deterministic)” SDK should now be present. Select it and then click on the Classpath tab. Press the “Add” / “Plus” button to add rt.jar to the SDK’s classpath. Then select the Annotations tab and include the same JAR(s) as the other SDKs. Open the root build.gradle file and define this property: buildscript { ext { ... deterministic_idea_sdk = '1.8 (Deterministic)' ... } } Go to File/Settings/Build, Execution, Deployment/Build Tools/Gradle, and configure Gradle’s JVM to be the project’s JVM. Go to File/Settings/Build, Execution, Deployment/Build Tools/Gradle/Runner, and select these options: - Delegate IDE build/run action to Gradle - Run tests using the Gradle Test Runner Delete all of the out directories that IntelliJ has previously generated for each module. Go to View/Tool Windows/Gradle and click the Refresh all Gradle projects button. These steps will enable IntelliJ’s presentation compiler to use the deterministic rt.jar with the following modules: core-deterministic serialization-deterministic core-deterministic:testing:common but still build everything using Gradle with the full JDK. Testing the Deterministic Modules The core-deterministic:testing module executes some basic JUnit tests for the core-deterministic and serialization-deterministic JARs. These tests are compiled against the deterministic rt.jar, although they are still executed using the full JDK. The testing module also has two sub-modules: core-deterministic:testing:data - This module generates test data such as serialised transactions and elliptic curve key pairs using the full non-deterministic core library and JDK. This data is all written into a single JAR which the testing module adds to its classpath. core-deterministic:testing:common - This module provides the test classes which the testing and data modules need to share. It is therefore compiled against the deterministic API subset.
Applying @KeepForDJVM and @DeleteForDJVM annotations Corda developers need to understand how to annotate classes in the core and serialization modules correctly in order to maintain the deterministic JARs. Every Kotlin class still has its own .class file, even when all of those classes share the same source file. Also, annotating the file: @file:KeepForDJVM package net.corda.core.internal does not automatically annotate any class declared within this file. It merely annotates any accompanying Kotlin xxxKt class. For more information about how JarFilter is processing the byte-code inside core and serialization, use Gradle’s --info or --debug command-line options. Classes that must be included in the deterministic JAR should be annotated as @KeepForDJVM. To preserve any Kotlin functions, properties or type aliases that have been declared outside of a class, you should annotate the source file’s package declaration instead: @file:JvmName("InternalUtils") @file:KeepForDJVM package net.corda.core.internal infix fun Temporal.until(endExclusive: Temporal): Duration = Duration.between(this, endExclusive) Elements that must be deleted from classes in the deterministic JAR should be annotated as @DeleteForDJVM. You must also ensure that a deterministic class’s primary constructor does not reference any classes that are not available in the deterministic rt.jar. The biggest risk here would be that JarFilter would delete the primary constructor and that the class could no longer be instantiated, although JarFilter will print a warning in this case. However, it is also likely that the “determinised” class would have a different serialisation signature than its non-deterministic version and so become unserialisable on the deterministic JVM. Primary constructors that have non-deterministic default parameter values must still be annotated as @DeleteForDJVM because they cannot be refactored without breaking Corda’s binary interface. The Kotlin compiler will automatically apply this @DeleteForDJVM annotation - along with any others - to all of the class’s secondary constructors too. The JarFilter plugin can then remove the @DeleteForDJVM annotation from the primary constructor so that it can subsequently delete only the secondary constructors. The annotations that JarFilter will “sanitise” from primary constructors in this way are listed in the plugin’s configuration block, e.g. task jarFilter(type: JarFilterTask) { ... annotations { ... forSanitise = [ "net.corda.core.DeleteForDJVM" ] } } Be aware that package-scoped Kotlin properties are all initialised within a common <clinit> block inside their host .class file. This means that when JarFilter deletes these properties, it cannot also remove their initialisation code. For example: package net.corda.core @DeleteForDJVM val map: MutableMap<String, String> = ConcurrentHashMap() In this case, JarFilter would delete the map property but the <clinit> block would still create an instance of ConcurrentHashMap. The solution here is to refactor the property into its own file and then annotate the file itself as @DeleteForDJVM instead. Sometimes it is impossible to delete a function entirely. Or a function may have some non-deterministic code embedded inside it that cannot be removed. For these rare cases, there is the @StubOutForDJVM annotation: This annotation instructs JarFilter to replace the function’s body with either an empty body (for functions that return void or Unit) or one that throws UnsupportedOperationException.
For example: fun necessaryCode() { nonDeterministicOperations() otherOperations() } @StubOutForDJVM private fun nonDeterministicOperations() { // etc }
https://docs.corda.net/docs/corda-enterprise/4.7/deterministic-modules.html
2021-07-24T08:48:52
CC-MAIN-2021-31
1627046150134.86
[]
docs.corda.net
Installing Using Anaconda. Introduction to Anaconda: Most installers follow a fixed path: you must choose your language first, then configure the network, then the installation type, then partitioning, and so on, completing one step before you can move to the next. The Fedora installer, Anaconda, is different; it has a unique parallel nature. In Anaconda you only need to select your language and locale, after which you are taken to a central hub screen where you can complete the pre-installation settings in any order you like. This does not apply to everything, of course; for example, during a network installation the network must be configured before you can select the packages to install. Some settings are configured automatically based on your hardware and installation media, and you can still change them manually at any time. Settings that cannot be configured automatically are given a special marker to remind you to pay extra attention to them, and the installation cannot begin until they are completed. There are other differences as well; in particular, manual partitioning differs considerably from other Linux distributions, and it is described later. Console and Logging: The following describes how to access the command line and log files during the installation. This is useful when troubleshooting problems, although in most cases you will not need them. Accessing the command line: The Fedora installer uses the tmux terminal multiplexer to display and control several windows. Each window serves a different purpose, such as displaying different log files for troubleshooting during the installation or providing an interactive shell with root privileges (specific features can be disabled using boot options or Kickstart commands). The terminal multiplexer runs in virtual console 1; you can switch to it from the regular installation environment with Ctrl+Alt+F1, and return to the main installation environment in virtual console 6 with Ctrl+Alt+F6. The terminal running tmux has five available windows; the shortcut and purpose of each window are listed below. Note that the shortcut has two parts: first press Ctrl+b, release both keys, and then press the number key of the window you want to go to. You can also use Ctrl+b n and Ctrl+b p to switch to the next or previous tmux window. Saving screenshots: If you have configured more than one keyboard layout, you can switch between them by clicking the layout indicator. Welcome Screen and Language Selection: The first screen displayed after the graphical installer starts is the welcome screen. First select your preferred language in the left column, then select your locale in the right column. If you do not want to spend time searching through nearly seventy languages, you can use the input box in the bottom left corner to search. The language you select is used as the display language of the graphical installer throughout the installation and also becomes the default language of the installed system. Although you can change the system language later, once you click Continue here you can no longer go back and change the installer language. One language is pre-selected by default; if a network connection is already available at this point (for example, if you booted the installation media from a network server), your location is detected via the GeoIP module and the corresponding language is set as the default. You can also specify the default language by adding inst.lang= to the boot options or PXE server configuration. The default language appears at the top of the language list, but you can still choose any other language from the menu for the rest of the installation and for later use. After selecting your language and locale, click Continue to confirm your selection and proceed to Installing_Using_Anaconda.adoc#sect-installation-gui-installation-summary. Installation Summary: The Installation Summary screen is the central hub of the installer, and most installation options are accessible from here. It contains several links to other screens, organized into categories. Each link can be in one of the following states: A warning symbol next to the icon (a yellow triangle with an exclamation mark) means the screen requires your attention before the installation can begin. This is typically the initial state of the Installation Destination link: even though automatic partitioning is available, you are still required to enter this screen and confirm the settings, even if you change nothing. Grayed-out link text means the installer is currently configuring the options on that screen, and you must wait for it to finish before you can access the screen. This typically happens after you change the installation source, because the installer needs some time to probe the new source and fetch the list of available packages. Black link text with no warning symbol means the screen does not require your attention; you can still enter it and make changes, but doing so is not necessary to complete the installation. This is common for the localization screens, since their options are mostly detected automatically or already configured on the welcome screen. A warning message is displayed at the bottom of the screen and the Begin Installation button is disabled as long as any options remain unconfigured. Below each screen’s title is explanatory text showing what has already been configured on that screen; sometimes this text is abbreviated, and you can hover the mouse over it to see the full text. Date & Time: The Date & Time screen allows you to set the date and time. Default values for this screen are chosen based on your selections in Installing_Using_Anaconda.adoc#sect-installation-gui-welcome, but you can adjust them at any time before starting the installation. First select your region from the drop-down menu in the top left corner of the screen, then select your city, or the city closest to your location in the same time zone. Specifying an accurate location helps Fedora take daylight saving time and similar factors into account so that your time is set correctly. You can also specify a time zone directly without choosing a region; just set Etc as your region. The toggle in the top right corner enables or disables automatic network time synchronization via NTP. Enabling this option keeps your system time accurate as long as the system is connected to the internet. The NTP pools are configured by default, but you can adjust the NTP servers at any time. Language Support: The Language Support screen allows you to configure the languages of the system. The default language is determined by Installing_Using_Anaconda.adoc#sect-installation-gui-welcome; you can only add additional languages, not remove the default one. The configured languages will be available on the installed system, not in the installer. If you want to change the default language, or the language used during installation, you must reboot your system, enter the installer again, and select a different language in Installing_Using_Anaconda.adoc#sect-installation-gui-welcome. The left column lists available language groups, such as English or Chinese. If at least one language from a group is selected, a check mark appears to the left of the group and the group is highlighted, so you can easily see which languages have been configured. To add one or more languages, select a group in the left column and then a specific language variant in the right column; repeat this until all of the languages you need are configured. Once you have made your selections, click Done in the top left corner to return to Installing_Using_Anaconda.adoc#sect-installation-gui-installation-summary.
Installation Source: The Installation Source screen allows you to specify where (locally or on the network) the packages used for installation are downloaded from. In most cases the settings on this screen are configured automatically, but you can still change any of the options. The available options are listed below; note that options that do not apply to your setup may be hidden. - Auto-detected installation media: If the installer was started from media that contains an installation source, such as a live DVD, this option is the default. No further action is required, and you can optionally verify the media here. - ISO file: This option is available if a disk device with existing partitions was mounted at boot time. After selecting it, you must specify an ISO file; again, you can verify the file here. - On the network: Instead of fetching packages from local media, select this option to download them over the network. This is the default when you use network installation media. In most cases it is recommended to select the closest mirror entry from the drop-down list, so that all packages are downloaded from the most suitable mirror. If you need to specify a proxy for an HTTP or HTTPS installation source, click the proxy setup button, check the option to enable an HTTP proxy, and enter the proxy URL. If the proxy requires authentication, check the authentication option and enter your user name and password, then click Done to finish the configuration. If your HTTP or HTTPS URL points to a mirror list, mark the check box below the address field. Once you have selected your installation source, click Done in the top left corner to return to Installing_Using_Anaconda.adoc#sect-installation-gui-installation-summary. Software Selection: The Software Selection screen allows you to choose a base environment and a number of add-ons; the selected packages are installed onto your system during installation. This screen is only available once Installing_Using_Anaconda.adoc#sect-installation-gui-installation-source has been configured correctly and the installer has successfully downloaded package metadata from the source. The available environments and add-ons depend on your installation source; by default this is determined by the installation media you used. The environments and add-ons offered by Fedora Server differ from those offered by Fedora Cloud, and of course you are free to change the installation source. To configure your software selection, choose an environment on the left side of the screen; further customization is only possible after an environment has been selected. Then, on the right side, select one or more add-ons by checking items in the list. Add-ons fall into two categories: add-ons above the separator line depend on the selected environment, so different environments offer different add-ons, while add-ons below the line are independent of the environment. The available environments and add-ons are defined by the comps.xml file of the installation source (on the Fedora Server installation DVD, for example, this file is in the repodata/ directory); check this file to see exactly which packages each environment and add-on contains. For more information, read this page. After you finish configuring your software selection, click Done in the top left corner to return to Installing_Using_Anaconda.adoc#sect-installation-gui-installation-summary. Installation Destination. For information about the theory and concepts behind disk partitioning in Linux, see Installing_Using_Anaconda.adoc#sect-installation-gui-manual-partitioning-recommended. At the top of the screen, all locally available storage devices are displayed. When selecting the automatic partitioning scheme, you can choose whether to make additional space available. You can also choose to encrypt your data; this encrypts all partitions with LUKS (except the partitions needed to boot the system, such as /boot). Encrypting your devices is strongly recommended; read the Fedora Security Guide for more information. To specify which storage device the boot loader is installed on: in some cases (for example, when chain loading is required) you still need to set the boot device manually. After selecting the storage devices, choosing between automatic and manual partitioning, and configuring disk encryption and the boot loader location, click Done in the top left corner. Depending on your choices, the following may then happen: If you chose to encrypt your disks, a dialog appears asking you to set a passphrase; as you type, the strength of the passphrase is checked and suggestions are offered. To learn how to create a strong passphrase, read the Fedora Security Guide. In the space reclamation dialog, the Delete button removes a partition (or all partitions on a disk), and you can also shrink partitions or delete everything; once you have freed enough space for Fedora, click the Reclaim Space button to finish. If you selected the I will configure partitioning option, pressing Done will open the Manual Partitioning screen. See Installing_Using_Anaconda.adoc#sect-installation-gui-manual-partitioning for further instructions. Boot Loader Installation: Fedora uses GRUB2 (GRand Unified Bootloader version 2) as its default boot loader. The boot loader is the first program that runs when the computer starts and is responsible for handing control of the computer over to the operating system. GRUB2 can boot most operating systems (including Microsoft Windows) and can also use chain loading to hand control over to the boot loaders of unsupported operating systems. If other operating systems are installed, the Fedora installer attempts to detect them automatically and adds corresponding entries to the boot menu; if they are not detected correctly, you can configure them after the installation finishes. For information about configuring GRUB2, see the Fedora System Administrator’s Guide. If you are installing Fedora across multiple drives, you may want to specify which drive the boot loader is installed on. On the Installation Destination screen, clicking the disk summary and boot loader button opens the disk selection dialog. The boot loader is installed on the device of your choice, and if you use UEFI mode the EFI system partition is created during guided partitioning. In the Boot column, the checked device is the one intended for booting. To change the boot device, and therefore where the boot loader is installed, select a device from the list and click the Set as Boot Device button; the boot loader can be installed on only one device. If you do not want to install a new boot loader, select the device currently marked for boot and click the Do not install bootloader button, so GRUB2 is not installed on any device. The boot loader may require a special partition, depending on whether you use BIOS or UEFI mode and on whether the drive uses a GPT or MBR partition table; if you use automatic partitioning, the installer creates these partitions for you when necessary. For details, see the partitioning guidance in Installation Destination. If none of the selected drives contain any existing partitions, then a message informing you that no mount points currently exist will appear. Here, you can choose a partitioning scheme such as LVM or BTRFS and click the link to create a basic partitioning layout automatically; this layout follows the guidelines described in Installing_Using_Anaconda.adoc#sect-installation-gui-manual-partitioning-filesystems.
Management: Btrfs (see Installing_Using_Anaconda.adoc#sect-installation-gui-manual-partitioning-filesystems) maintains snapshots of the file system that can be used for backup or repair. Creating a Btrfs layout is somewhat similar to LVM (described in Installing_Using_Anaconda.adoc#sect-installation-gui-manual-partitioning-lvm), and it can be adjusted later if you need to. Network & Host Name: The Network & Host Name screen allows you to configure the network. The options configured here are available both during the installation (when packages need to be downloaded from a remote location) and on the installed system. [Image: the Network & Host Name screen. In the left column one physical interface and one custom VLAN interface are shown; the right side shows details of the currently selected interface. The system host name is configured at the bottom.] The installer automatically detects the locally available interfaces; you cannot manually add or remove interfaces. All detected interfaces are listed on the left side of the screen. Click an interface in the list to display its current configuration (such as IP and DNS addresses); the details are shown on the right side of the screen. Configuration and Installation Progress: The Configuration screen is displayed after you finish configuring all required items in Installing_Using_Anaconda.adoc#sect-installation-gui. Root Password: The Root Password screen allows you to set a password for the root account. This password is used to log in to the superuser account for tasks that require elevated privileges, such as installing and updating packages and changing system-wide settings for networking, storage, user management, and file permissions. Once you have chosen a strong password, enter it in the form; for security, every character you type is displayed as a solid dot. Then enter the password again in the confirmation field, and note that both entries must match. When you have finished setting the root password, click Done in the top left corner to return to Installing_Using_Anaconda.adoc#sect-installation-gui-installation-progress. If you chose a weak password, you need to click the Done button twice. Create User: During the installation you can use the Create User screen to create and configure one user account in addition to root. If you need more accounts, create them after the installation finishes and the system reboots, for example with the useradd command or in GNOME Settings. To configure a user account, enter the full name (for example, John Smith) and the user name (for example, jsmith). The user name is used to log in to your system from the console; if you use a graphical environment, the login manager displays the full name. Be sure to enable the account password option and enter a password in the password field; for security, whatever characters you type are displayed only as solid dots. Then enter the password again in the confirmation field, and note that both entries must match. You can optionally add the user to the administrators group (the wheel group), which lets the user run privileged tasks with the sudo command using only their own password. This makes many tasks easier, but can also be a security risk. If you want to configure the user further, click the Advanced button below the form; a dialog opens, which is described below. Advanced User Options: The advanced user options dialog allows you to configure the following settings for the new user: the user’s home directory (/home/username by default); the user’s ID (UID), which defaults to 1000 (IDs 0-999 are reserved by the system and cannot be assigned to users); the ID of the user’s default group (GID), where the default group name is the same as the user name and the default GID is 1000 (again, IDs 0-999 are reserved by the system and cannot be assigned to user groups); and the user’s group membership. Every user account has a default group (configured with its own option); in the group membership field you can list additional groups separated by commas. Groups that do not exist yet are created automatically, and you can specify a GID for a new group in parentheses; if you do not specify a GID, one is assigned automatically.
https://docs.fedoraproject.org/zh_Hans/fedora/f31/install-guide/install/Installing_Using_Anaconda/
2021-07-24T07:34:40
CC-MAIN-2021-31
1627046150134.86
[array(['../../_images/anaconda/SummaryHub_TextMode.png', 'The main menu in during a text-based installation.'], dtype=object) array(['../../_images/anaconda/WelcomeSpoke.png', '显示有语言选择项的欢迎界面屏幕截图。'], dtype=object) array(['../../_images/anaconda/SummaryHub.png', '安装摘要页面'], dtype=object) array(['../../_images/anaconda/SummaryHub_States.png', '安装摘要页面图标示例截屏'], dtype=object) array(['../../_images/anaconda/SummaryHub_Mouseover.png', '安装摘要的每个栏目都包含着精简过的描述信息和能显示完整信息的提示框。'], dtype=object) array(['../../_images/anaconda/DateTimeSpoke.png', '日期与时间页面屏幕截图'], dtype=object) array(['../../_images/anaconda/DateTimeSpoke_AddNTP.png', 'A dialog window allowing you to add or remove NTP pools from your system configuration'], dtype=object) array(['../../_images/anaconda/KeyboardSpoke.png', 'The keyboard layout configuration screen'], dtype=object) array(['../../_images/anaconda/LangSupportSpoke.png', '语言支持页面。可以看到左栏中英语和法语都被选中了,而在右栏可以看到在法语栏目中,法语(法国)和法语(加拿大)都被选中了。'], dtype=object) array(['../../_images/anaconda/SourceSpoke.png', '安装源页面'], dtype=object) array(['../../_images/anaconda/SoftwareSpoke.png', '软件包选择页面。位于左侧'], dtype=object) array(['../../_images/anaconda/StorageSpoke.png', '安装目标页面。共有两个本地磁盘可用'], dtype=object) array(['../../_images/anaconda/StorageSpoke_Selected.png', '安装目标页的磁盘选择栏目。这里显示了两块硬盘,右边的硬盘将会被使用'], dtype=object) array(['../../_images/anaconda/StorageSpoke_BootLoader.png', '磁盘选择对话框'], dtype=object) array(['../../_images/anaconda/FilterSpoke.png', 'A list of currently configured network storage devices'], dtype=object) array(['../../_images/anaconda/CustomSpoke.png', 'The Manual Partitioning screen. At this point'], dtype=object) array(['../../_images/anaconda/CustomSpoke_RescanDisks.png', 'The Rescan Disks dialog'], dtype=object) array(['../../_images/anaconda/CustomSpoke_AddPhysical.png', 'The Manual Partitioning screen'], dtype=object) array(['../../_images/anaconda/CustomSpoke_SoftwareRAID.png', 'The Manual Partitioning screen'], dtype=object) array(['../../_images/anaconda/CustomSpoke_AddLVM.png', 'The Manual Partitioning screen'], dtype=object) array(['../../_images/anaconda/CustomSpoke_AddBtrfs.png', 'The Manual Partitioning screen'], dtype=object) array(['../../_images/anaconda/KdumpSpoke.png', 'The Kdump configuration screen'], dtype=object) array(['../../_images/anaconda/ProgressHub.png', 'The Configuration screen. Two more screens at the top require configuration. Installation progress is displayed at the bottom.'], dtype=object) array(['../../_images/anaconda/PasswordSpoke.png', 'Root 用户密码页面。填写表单以设置你的 root 用户密码。'], dtype=object) array(['../../_images/anaconda/UserSpoke.png', '用户创建页面。填写表单创建和配置用户。'], dtype=object) array(['../../_images/anaconda/UserSpoke_Advanced.png', '新用户高级设定。'], dtype=object) ]
docs.fedoraproject.org
Does Rich Returns provide an API to connect external systems? Yes. Documentation can be found over here in our API Reference and Developer Docs. API Keys can be created from within the Rich Returns' Dashboard: navigate to Account / API Keys. Make sure that your plan includes an API license. API access is an enterprise feature for high-volume merchants and currently available on our Plus Plans.
https://docs.richcommerce.co/api/api-integration-crm-erp-3pl-analytics
2021-07-24T07:52:02
CC-MAIN-2021-31
1627046150134.86
[]
docs.richcommerce.co
Manage Messaging Project Access Share or revoke access to messaging projects. Only the company account owner or a team member with administrator permission can control access to projects. Share a Messaging Project Sharing access to a messaging project is the same thing as sending a team invitation. Revoke Access to a Messaging Project Revoking access to a messaging project is handled by changing the user’s access level: select Remove Access as the access level. - Go to Settings » Project Configuration and click Manage for Team Access. - Click edit next to the user’s current access level. - Select a new access level. - Click Save.
https://docs.airship.com/guides/messaging/user-guide/admin/team/project-access/
2021-07-24T07:08:12
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
Assigning a Parameter Context to a Process Group For a component to reference a Parameter, its Process Group must first be assigned a Parameter Context. Once assigned, processors and controller services within that Process Group may only reference Parameters within that Parameter Context. A Process Group can only be assigned one Parameter Context, while a given Parameter Context can be assigned to multiple Process Groups. To assign a Parameter Context to a Process Group, click Configure, either from the Operate Palette or from the Process Group context menu. In the Flow Configuration window, select the "General" tab. From the Process Group Parameter Context drop-down menu, select an existing Parameter Context or create a new one. Select "Apply" to save the configuration changes. The Process Group context menu now includes a "Parameters" option which allows quick access to the Update Parameter Context window for the assigned Parameter Context. If the Parameter Context for a Process Group is changed, all components that reference any Parameters in that Process Group will be stopped, validated, and restarted assuming the components were previously running and are still valid.
https://docs.cloudera.com/cfm/2.1.1/nifi-user-guide/topics/nifi-assigning_parameter_context_to_pg.html
2021-07-24T08:29:52
CC-MAIN-2021-31
1627046150134.86
[array(['../images/nifi-process-group-configuration-parameters.png', None], dtype=object) array(['../images/nifi-process-group-parameter-context-menu.png', None], dtype=object) array(['../images/nifi-context-menu-parameters-option.png', None], dtype=object) ]
docs.cloudera.com
The conversion of time from one time zone to another occurs when sending control commands (with a timestamp), synchronizing time, and receiving information objects with the timestamp CP56Time2a. The time formation process for control and time synchronization commands is as follows: the current local time is converted to UTC (based on the operating system’s time zone settings), then the UTC time is converted to the time in the time zone specified in the Device Time Zone parameter. For example, the operating system’s time zone is Europe/Moscow (UTC+03:00), the client’s current time is 2019-12-15 19:31:00, and the time zone specified for the device is America/Costa_Rica (UTC-06:00). The time that the command will contain is generated as follows: 2019-12-15 19:31:00 (Local) → 2019-12-15 16:31:00 (UTC) → 2019-12-15 10:31:00 (Device Time Zone). The conversion of the CP56Time2a timestamp of an information object to the timestamp of the tag works as follows: the time of the information object is converted to UTC (based on the device’s time zone settings), then the received UTC time is converted to the time in the time zone of the operating system. For example, the device’s time zone is America/Costa_Rica (UTC-06:00), the device’s current time is 2019-12-14 19:15:00, and the time zone of the server is Europe/Moscow (UTC+03:00). The time of the tag will be formed as follows: 2019-12-14 19:15:00 (Device Time Zone) → 2019-12-15 01:15:00 (UTC) → 2019-12-15 04:15:00 (Local). If an invalid or ambiguous time occurs during the conversion of CP56Time2a time from one time zone to another, the ASDU will be assigned the timestamp of the tag. You can read more about invalid and ambiguous times here. To ensure cross-platform compatibility, time zone identifiers from the IANA database are used. Therefore, if a time zone identifier is not found when uploading the configuration, for example, from the test server to the production server, then the local time zone will be used to convert the time, and an error message will appear in the server’s event log. In order to avoid problems associated with time conversion from one time zone to another, it is recommended to generate timestamps in the UTC time zone at the controlled station.
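The same conversion chains can be reproduced with a short Python sketch using the standard zoneinfo module (an illustration of the arithmetic above, not code from the server itself):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Command timestamp: local (OS) time zone -> UTC -> device time zone.
    local = datetime(2019, 12, 15, 19, 31, tzinfo=ZoneInfo("Europe/Moscow"))
    utc = local.astimezone(ZoneInfo("UTC"))
    device = utc.astimezone(ZoneInfo("America/Costa_Rica"))
    print(utc)     # 2019-12-15 16:31:00+00:00
    print(device)  # 2019-12-15 10:31:00-06:00

    # Received CP56Time2a timestamp: device time zone -> UTC -> local time zone.
    dev_ts = datetime(2019, 12, 14, 19, 15, tzinfo=ZoneInfo("America/Costa_Rica"))
    tag_ts = dev_ts.astimezone(ZoneInfo("Europe/Moscow"))
    print(tag_ts)  # 2019-12-15 04:15:00+03:00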
https://docs.monokot.io/hc/en-us/articles/360038014271-Time-Conversion
2021-07-24T08:06:11
CC-MAIN-2021-31
1627046150134.86
[]
docs.monokot.io
Launcher Citrix Launcher lets you customize the user experience for Android Enterprise devices and legacy Android devices deployed by Endpoint Management. With Citrix Launcher, you can prevent users from accessing certain device settings and restrict devices to one app or a small set of apps. The minimum Android version supported for Secure Hub management of Citrix Launcher is Android 6.0. Use a Launcher Configuration Policy to control these Citrix Launcher features: - Manage Android Enterprise devices and legacy Android devices so that users can access only the apps that you specify. - Optionally specify a custom logo image for the Citrix Launcher icon and a custom background image for Citrix Launcher. - Specify a password that users must type to exit the launcher. Citrix Launcher isn’t intended to be an extra layer of security over what the device platform already provides. Set up Citrix Launcher for Android Enterprise devices Add the Citrix Launcher app (com.citrix.launcher.droid) to Endpoint Management as a public store app. In Configure > Apps, click Add, and then click Public App Store. For more information, see Add a public app store app. In the Kiosk device policy, specify which apps must be available on company-owned devices for dedicated use (also known as Android corporate owned single use (COSU) devices). Go to Configure > Device Policies, click Add, and select Kiosk. Then select the Citrix Launcher app and any additional apps in the allow list. If you previously added apps to the list, you don’t need to upload the apps again. For more information, see Android Enterprise settings. Add the Launcher Configuration device policy. Go to Configure > Device Policies, click Add, and select Launcher Configuration. In the Launcher Configuration policy, add any of the apps that you specified in the Kiosk policy. You don’t need to add all of the apps. Set up Citrix Launcher for legacy Android devices Note: In August 2020, Citrix deprecated support for the CitrixLauncher.apk for legacy Android devices. You can continue using the legacy Citrix Launcher app (com.citrix.launcher) for Android devices without receiving the new feature updates. To locate the Citrix Launcher app, go to the Citrix Endpoint Management download page and search for Citrix Launcher. Download the latest file. The file is ready for upload into Endpoint Management and doesn’t require wrapping. Add the Launcher Configuration device policy. Go to Configure > Device Policies, click Add, and select Launcher Configuration. For more information, see Launcher Configuration Policy. Add the Citrix Launcher app to Endpoint Management as an enterprise app. In Configure > Apps, click Add and then click Enterprise. For more information, see Add an enterprise app. Create a delivery group and deploy resources. For more information, see the Add a delivery group and deploy resources section in this article. Add a delivery group and deploy resources Create a delivery group for Citrix Launcher with the following configuration in Configure > Delivery groups. - On the Policies page, add a Launcher Configuration Policy. - On the Apps page, drag Citrix Launcher to Required Apps. - On the Summary page, click Deployment Order and ensure that the Citrix Launcher app precedes the Launcher Configuration policy. Deploy resources to a delivery group by sending a push notification to all users in the delivery group. For more information about adding resources to a delivery group, see Deploy resources. 
Manage devices without Citrix Launcher Instead of using Citrix Launcher, you can use features that are already available. To provision dedicated devices: Create an enrollment profile by setting the Device owner mode to Dedicated device. See Provisioning dedicated Android Enterprise devices and Enrollment profiles. Create a Kiosk device policy to add apps to the allow list and set lock task mode. If you previously added apps to the list, you don’t need to upload the apps again. For more information, see Android Enterprise settings. Enroll each device in the enrollment profile you created.
https://docs.citrix.com/en-us/citrix-endpoint-management/apps/citrix-launcher.html
2021-07-24T07:14:41
CC-MAIN-2021-31
1627046150134.86
[]
docs.citrix.com
Flipping compare indicator This setting is specific to KPI visuals. To change the direction of the arrow (and the semantic meaning of the differences between the indicators), navigate to the Marks menu and select the Flip comparison option. This image illustrates the visual with the default comparison setting and with the flipped comparison. Note that the arrows are flipped.
https://docs.cloudera.com/data-visualization/cdsw/howto-customize-visuals/topics/viz-flip-compare.html
2021-07-24T07:49:14
CC-MAIN-2021-31
1627046150134.86
[array(['../images/viz-marks-flip-comparison.png', None], dtype=object) array(['../images/viz-marks-kpi-flipped-compare-vis.png', None], dtype=object) ]
docs.cloudera.com
Gets the device's current IPv4 address. Returns `0.0.0.0` if the IP address could not be retrieved. On web, this method uses the third-party ipify service to get the public IP address of the current device. Promise<string> A Promise that fulfils with a string of the current IP address of the device's main network interface. It can only be an IPv4 address. await Network.getIpAddressAsync(); // "92.168.32.44" This method is deprecated and will be removed in a future SDK version. (string | null) - A string representing the interface name (eth0, wlan0) or null (default), meaning the method should fetch the MAC address of the first available interface. Gets the specified network interface's MAC address. Beginning with iOS 7 and Android 11, non-system applications can no longer access the device's MAC address. In SDK 41 and above, this method will always resolve to a predefined value that isn't useful. If you need to identify the device, use the getIosIdForVendorAsync() method / androidId property of the expo-application unimodule instead. Gets the device's current network connection state. On web, navigator.connection.type is not available on browsers. So if there is an active network connection, the field type returns NetworkStateType.UNKNOWN. Otherwise, it returns NetworkStateType.NONE. Promise<NetworkState> A Promise that fulfils with a NetworkState object. await Network.getNetworkStateAsync(); // { // type: NetworkStateType.CELLULAR, // isConnected: true, // isInternetReachable: true, // } Android only. Tells if the device is in airplane mode. An enum of the different network state types supported by Expo. NetworkStateType.BLUETOOTH - Active network connection over Bluetooth. NetworkStateType.CELLULAR - Active network connection over mobile data or a DUN-specific mobile connection when setting an upstream connection for tethering. NetworkStateType.ETHERNET - Active network connection over Ethernet. NetworkStateType.NONE - No active network connection detected. NetworkStateType.OTHER - Active network connection over other network connection types. NetworkStateType.UNKNOWN - The connection type could not be determined. NetworkStateType.VPN - Active network connection over VPN. NetworkStateType.WIFI - Active network connection over WiFi. NetworkStateType.WIMAX - Active network connection over WiMAX.
https://docs.expo.io/versions/v42.0.0/sdk/network/
2021-07-24T07:16:54
CC-MAIN-2021-31
1627046150134.86
[]
docs.expo.io
Things you have to worry about if you have customizations for your current setup: - Mega menus, top bar, side panel, and header widgets are only available for Layout 3 and Layout 4. - If you are planning to switch your design layout from one of the v1 layouts (Layout 1, Layout 2) to one of the v2 layouts (Layout 3, Layout 4), please read this section for more details. - If you have been using the String Translation plugin of WPML to translate your logo image (the URL) or the footer copyright text, you may need to re-translate it. Version 2.0 of the theme only supports the latest version of the String Translation plugin. - If your logo looks bigger after the update, you need to adjust the logo box settings; please read this section for more details. - Remember that it is always a good practice to have a backup before updating a theme or a plugin.
https://docs.rtthemes.com/document/updating-to-version-2-0/
2021-07-24T07:04:19
CC-MAIN-2021-31
1627046150134.86
[]
docs.rtthemes.com
Dalton CPP-LR parallelization This project is the result of a request for application expert support. See here for the original request (PDF). Requestors and collaborators: - Prof. Patrick Norrman, Computational Physics, IFM - Linköping University - Dr. Joanna Kauczor, Computational Physics, IFM - Linköping University Project is established in two phases. Phase 1 (~10 work days): - Meet with requestors to discuss the proposal ✓ - Profile and analyze the code to figure out the time consuming parts ✓ - Suggest a parallelization strategy ✓ Report of work in phase 1 (PDF). Phase 2: - Work on the parallelization with the requestors
https://docs.snic.se/wiki/Dalton_CPP-LR_parallelization
2021-07-24T07:23:56
CC-MAIN-2021-31
1627046150134.86
[]
docs.snic.se
Step 4. Add Sign-in with a SAML Identity Provider to a User Pool (Optional) You can enable your app users to sign in through a SAML identity provider (IdP). Whether your users sign in directly or through a third party, all users have a profile in the user pool. Skip this step if you don't want to add sign-in through a SAML identity provider. You need to update your SAML identity provider and configure your user pool. See the documentation for your SAML identity provider for information about how to add your user pool as a relying party or application for your SAML 2.0 identity provider. You also need to provide an assertion consumer endpoint to your SAML identity provider. Configure this endpoint for SAML 2.0 POST binding in your SAML identity provider: https://<yourDomainPrefix>.auth.<region>.amazoncognito.com/saml2/idpresponse You can find your domain prefix and the region value for your user pool on the Domain name tab of the Amazon Cognito console. For some SAML identity providers, you also need to provide the SP urn / Audience URI / SP Entity ID, in the form: urn:amazon:cognito:sp:<yourUserPoolID> You can find your user pool ID on the App client settings tab in the Amazon Cognito console. You should also configure your SAML identity provider to provide attribute values for any attributes that are required in your user pool. To configure a SAML 2.0 identity provider in your user pool Go to the Amazon Cognito console. You might be prompted for your AWS credentials. Choose Manage your User Pools. For more information, see Adding SAML Identity Providers to a User Pool. Next Step Step 5. Install an Amazon Cognito User Pools SDK
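A SAML provider can also be attached to a user pool programmatically. The sketch below uses boto3's cognito-idp client; the pool ID, provider name, and metadata URL are placeholders, and this is an illustration rather than part of the console walkthrough above:

    import boto3

    cognito = boto3.client("cognito-idp", region_name="us-east-1")

    # Register a SAML IdP in an existing user pool (placeholder IDs and URLs).
    cognito.create_identity_provider(
        UserPoolId="us-east-1_EXAMPLE",
        ProviderName="MySAMLProvider",
        ProviderType="SAML",
        ProviderDetails={"MetadataURL": "https://idp.example.com/saml/metadata"},
        AttributeMapping={"email": "email"},
    )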
https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-federation-with-saml-2-0-idp.html
2019-02-15T23:45:48
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
# At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OKD. For RHEL 7 systems, install Docker 1.12: # yum install docker-1.12.6 After the package installation is complete, verify that version 1.12 was installed: # rpm -V docker-1.12.6 # docker version Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications. OKD is capable of cryptographically verifying that images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works. You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater. If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OKD components.
https://docs.okd.io/3.6/install_config/install/host_preparation.html
2019-02-15T23:51:29
CC-MAIN-2019-09
1550247479627.17
[]
docs.okd.io
You can configure a virtual machine that runs on an ESXi host 6.5 and later to have up to 128 CPUs. You can change the number of virtual CPUs while your virtual machine is powered off. If virtual CPU hotplug is enabled, you can increase the number of virtual CPUs while the virtual machine is running. About this task Virtual CPU hot add is supported for virtual machines with multicore CPU support and ESXi 5.0 and later compatibility. When the virtual machine is turned on and CPU hot add is enabled, you can hot add virtual CPUs to running virtual machines. The number of CPUs that you add must be a multiple of the number of cores that exist on each socket. When you configure your virtual machine for multicore virtual CPU settings, you must ensure that your configuration complies with the requirements of the guest operating system EULA. Prerequisites If virtual CPU hot add is not enabled, turn off the virtual machine before adding virtual CPUs. To hot add multicore CPUs, verify that the virtual machine is compatible with ESXi 5.0 and later. Verify that you have the required privilege. Procedure - Click Virtual Machines in the VMware Host Client inventory. - Right-click a virtual machine in the list and select Edit settings from the pop-up menu. - On the Virtual Hardware tab, expand CPU, and select the number of cores from the CPU drop-down menu. - Select the number of cores per socket from the Cores Per Socket drop-down menu. - Click Save.
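If you prefer to script this instead of using the VMware Host Client, a rough pyvmomi sketch is shown below. The host name, credentials, and VM name are placeholders, and it assumes the VM is powered off or has CPU hot add enabled:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to the ESXi host (placeholder credentials).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.example.com", user="root", pwd="password", sslContext=ctx)

    # Find the VM by name and reconfigure its virtual CPU layout.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-vm")

    spec = vim.vm.ConfigSpec(numCPUs=4, numCoresPerSocket=2)  # 2 sockets x 2 cores each
    task = vm.ReconfigVM_Task(spec)

    Disconnect(si)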
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.html.hostclient.doc/GUID-76FC7E9F-8037-4C8E-BEB9-91C266C1EA9A.html
2019-02-15T23:09:49
CC-MAIN-2019-09
1550247479627.17
[]
docs.vmware.com
Sales: Intelligent Forecasting, Improved Campaign Management, and Lightning Dialer Enhancements Get smarter with Einstein Forecasting, now generally available. Control access to campaign members, and add them by account. Improve your reps’ performance with Lightning Dialer features and other tools. - Sales Cloud Einstein: Forecasting General Availability, More Reporting, and Improvements to Insights Improve your forecasting accuracy with Einstein Forecasting. Create reports based on opportunity scores, and get more control over how leads are scored. Plus, sales reps no longer have to connect their email account through Einstein Activity Capture to get Opportunity Insights. And there are more reasons to use Einstein Activity Capture now that we’ve added Email Insights. - Core Sales Features: Improved Campaign Management, More Forecast Types, and Enhancements for Lead Conversion and Product Schedules Control access to campaign members, and add them by account. Forecast sales by product and schedule dates. Get improved support for Contacts to Multiple Accounts, and let sales reps add opportunities faster during lead conversion. And welcome product schedules to Lightning Experience. - Productivity Features: Lightning Dialer Improvements, Email Insights, and More List Email Features We’re introducing a ton of new features for Lightning Dialer, including call monitoring and multiple voicemail messages. Your reps can also now see important sales context alongside relevant emails, and send list emails to campaigns. - Data Quality: Duplicate Management, Lightning Data Identify duplicate records by running duplicate jobs on custom objects. And with Lightning Data, benefit from usability improvements, higher match rates, more accurate status, and currency conversion. - Lightning for Gmail: Focused Design and Email Logging Improvements Get your reps working faster with Salesforce records in Gmail™ and Google Calendar™. The application features and Salesforce data are organized to maximize space and give easier access. The improved log email experience makes it easier to log emails to Salesforce records. - Microsoft® Integration: Focused Design, Email Logging Improvements, and Salesforce for Outlook Announcements Get your reps working their Salesforce deals directly from Microsoft® Outlook®. Lightning for Outlook provides reps with an updated product design and improvements when logging emails. If you’re still using Salesforce for Outlook, check out the latest bug fixes. If you’re not using Salesforce for Outlook regularly, starting next release, the product won’t be available to download or use. But that’s okay: Lightning for Outlook and Lightning Sync are available to you and offer improved Microsoft integration features. - Salesforce CPQ and Billing Deliver quotes, proposals, and contracts quickly and accurately. Automate billing and payment processes with flexible tools and terms. Salesforce CPQ and Billing offer an end-to-end solution for creating quotes, closing deals, settling invoices, and reporting revenue. - Pardot: Permanent Deletion of Prospects, Responsive Layouts, Preview As, Repeating Engagement Programs, and More Goodies Permanently delete prospects from the recycle bin, insert Salesforce files in Engage emails, use new prebuilt layouts to create responsive landing pages, and get better match rates when you use the Matched Leads component. 
Plus, preview emails as specific prospects and let prospects flow through engagement programs more than once. - Other Changes in the Sales Cloud Filter account list views by territory in Lightning Experience. Event subjects display below event times in Calendar. And organizer, attendee, and attendee status fields are no longer exported with events.
http://docs.releasenotes.salesforce.com/en-us/summer18/release-notes/rn_sales.htm
2019-02-16T00:00:54
CC-MAIN-2019-09
1550247479627.17
[]
docs.releasenotes.salesforce.com
Administrators Inviting New Clinicians, Managing Reporting, Assigning Content to Users - Resetting Your Password - Using the Dashboard Screen - Using the Reporting Screen - Managing Networks (Grouping Users by Hospital, Facility, or Other) - Setting Up Your Administrator Account - Enabling Patient Surveys - Creating New Clinician Users - Featuring Content for Clinicians - Customizing Content for Clinicians - Creating New Administrator Logins
https://docs.app.acpdecisions.org/category/399-administrators
2019-02-16T00:18:49
CC-MAIN-2019-09
1550247479627.17
[]
docs.app.acpdecisions.org
Linear Learner Algorithm Linear models are supervised learning algorithms used for solving either classification or regression problems. For input, you give the model labeled examples (x, y). x is a high-dimensional vector and y is a numeric label. For binary classification problems, the label must be either 0 or 1. For multiclass classification problems, the labels must be from 0 to num_classes - 1. For regression problems, y is a real number. The algorithm learns a linear function, or, for classification problems, a linear threshold function, and maps a vector x to an approximation of the label y. The Amazon SageMaker linear learner algorithm provides a solution for both classification and regression problems. With the Amazon SageMaker algorithm, you can simultaneously explore different training objectives and choose the best solution from a validation set. You can also explore a large number of models and choose the best. The best model optimizes either of the following: Continuous objective, such as mean square error, cross entropy loss, absolute error, and so on Discrete objectives suited for classification, such as F1 measure, precision@recall, or accuracy Compared with methods that provide a solution for only continuous objectives, the Amazon SageMaker linear learner algorithm provides a significant increase in speed over naive hyperparameter optimization techniques. It is also more convenient. The linear learner algorithm requires a data matrix, with rows representing the observations, and columns representing the dimensions of the features. It also requires an additional column that contains the labels that match the data points. At a minimum, Amazon SageMaker linear learner requires you to specify input and output data locations, and objective type (classification or regression) as arguments. The feature dimension is also required. For more information, see CreateTrainingJob. You can specify additional parameters in the HyperParameters string map of the request body. These parameters control the optimization procedure, or specifics of the objective function that you train on. For example, the number of epochs, regularization, and loss type. Topics Input/Output Interface for the Linear Learner Algorithm The Amazon SageMaker linear learner algorithm supports three data channels: train, validation (optional), and test (optional). If you provide validation data, it should be FullyReplicated. The algorithm logs validation loss at every epoch, and uses a sample of the validation data to calibrate and select the best model. If you don't provide validation data, the algorithm uses a sample of the training data to calibrate and select the model. If you provide test data, the algorithm logs include the test score for the final model. For training, the linear learner algorithm supports both recordIO-wrapped protobuf and CSV formats. For the application/x-recordio-protobuf input type, only Float32 tensors are supported. For the text/csv input type, the first column is assumed to be the label, which is the target variable for prediction. You can use either File mode or Pipe mode to train linear learner models on data that is formatted as recordIO-wrapped-protobuf or as CSV. For inference, the linear learner algorithm supports the application/json, application/x-recordio-protobuf, and text/csv formats. For binary classification models, it returns both the score and the predicted label. For regression, it returns only the score. 
For more information on input and output file formats, see Linear Learner Response Formats for inference, and the Linear Learner Sample Notebooks. EC2 Instance Recommendation for the Linear Learner Algorithm You can train the linear learner algorithm on single- or multi-machine CPU and GPU instances. During testing, we have not found substantial evidence that multi-GPU computers are faster than single-GPU computers. Results can vary, depending on your specific use case. Linear Learner Sample Notebooks For a sample notebook that uses the Amazon SageMaker linear learner algorithm to analyze the images of handwritten digits from zero to nine in the MNIST dataset, see An Introduction to Linear Learner with MNIST. For instructions on how to create and access Jupyter notebook instances that you can use to run the example in Amazon SageMaker, see Use Notebook Instances. After you have created a notebook instance and opened it, choose the SageMaker Examples tab to see a list of all of the Amazon SageMaker samples. The topic modeling example notebooks using the NTM algorithms are located in the Introduction to Amazon algorithms section. To open a notebook, choose its Use tab and choose Create copy.
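As a concrete illustration of the training setup described above, here is a minimal sketch using the SageMaker Python SDK's LinearLearner estimator. The IAM role, instance type, and random data are placeholders; the record_set helper is one of several ways to supply the recordIO-wrapped protobuf train channel:

    import numpy as np
    import sagemaker
    from sagemaker import LinearLearner

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder execution role

    # Binary classification: features as a float32 matrix, labels as a 0/1 vector.
    features = np.random.rand(1000, 10).astype("float32")
    labels = np.random.randint(0, 2, size=1000).astype("float32")

    estimator = LinearLearner(
        role=role,
        instance_count=1,
        instance_type="ml.c5.xlarge",
        predictor_type="binary_classifier",
        sagemaker_session=session,
    )

    # record_set uploads the data to S3 as recordIO-wrapped protobuf and
    # returns a RecordSet pointing at the train channel.
    train_records = estimator.record_set(features, labels=labels, channel="train")
    estimator.fit(train_records)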
https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html
2019-02-15T23:48:07
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
In this step you associate the business objects with the application. 1. Select the Employees and the Statistical Reporting business objects. 2. Drag them to the iiiHR application: Note that instead of manually defining your application as you did in VLF001 - Defining Your HR Application, you could have added it on this screen to the list of existing applications. It is important to realize that you can prototype an entire system of many applications using the Instant Prototyping Assistant. 3. Click Next. A summary of your application prototype is shown. 4. Click Finish to create the prototype. You can now see your application and business objects in the Framework:
https://docs.lansa.com/14/en/lansa048/content/lansa/lansa048_4460.htm
2019-02-15T23:38:41
CC-MAIN-2019-09
1550247479627.17
[]
docs.lansa.com
Client libraries
Waves Full Node provides access to the REST API. There are also community-driven open source libraries for different programming languages:
- Python: PyWaves.
- Java: WavesJ.
- C#: WavesCS.
- TypeScript/JavaScript: Waves Signature Adapter and Waves Transactions.
- C: WavesC.
- Community Libraries.
All libraries are open for contribution and testing. Note: the libraries above may be updated later than the REST API, so keep that in mind. If you want to use the latest features, use the REST API directly. To use all features of the REST API you have to set up a Full Node. If you only need basic features, you can use public nodes. The full list of public nodes is available here.
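For orientation, here is a minimal sketch of calling the node REST API directly from Python, which is what the libraries above wrap. The node URL is a placeholder and the /blocks/height route is assumed from the standard node API; check your node's API documentation (or its Swagger UI) for the exact endpoints it exposes.

```python
import requests

NODE_URL = "http://localhost:6869"  # placeholder; use your own node or a public node

# Query the current blockchain height (assumed standard node endpoint).
response = requests.get(f"{NODE_URL}/blocks/height", timeout=10)
response.raise_for_status()
print("Current height:", response.json()["height"])
```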
https://docs.wavesplatform.com/en/development-and-api/client-libraries.html
2019-02-16T00:15:14
CC-MAIN-2019-09
1550247479627.17
[]
docs.wavesplatform.com
How-to articles
Following are some configuration instructions or functional use cases available as "How to" articles to help you manage your SWG deployment.
URL filtering
How to create a URL categorization policy
How to create a URL list policy
How to whitelist an exceptional URL
How to block adult category web sites
https://docs.citrix.com/en-us/netscaler-secure-web-gateway/12-1/how-to-articles.html
2019-02-16T00:16:19
CC-MAIN-2019-09
1550247479627.17
[]
docs.citrix.com
A. access_key, an access credential, for client identities granted share access. access_key to the JSON response of the access_list API. cephfs_native driver's update_access() to, access_keys of ceph auth IDs that are allowed access. driver_handles_share_servers = True mode applies the Maximum Transmission Unit (MTU) from the network provider where available when creating Logical Interfaces (LIFs) for newly created share servers. driver_handles_share_servers = True. tox -e genconfig), [cinder], [nova] and [neutron] options are now generated in the right groups instead of [default]. thin_provisioning extra-spec in the share type of the share being created. dedupe capability and it can be used in extra-specs to choose a HNAS file system that has dedupe enabled when creating a manila share on HNAS. manage_error status works as expected.
https://docs.openstack.org/releasenotes/manila/newton.html
2019-02-15T23:55:49
CC-MAIN-2019-09
1550247479627.17
[]
docs.openstack.org
Cloned layers and columns let you modify the column timing independently from each other, but the drawings remain linked. You can copy the column's timing, so drawings and timings remain linked. You can clone selected nodes from the Node view in the same way. This is useful when you want to reuse a hand-drawn animation but have different timings. How to clone an element In the Timeline or Xsheet view, select the layer or column to clone. Do one of the following: From the top menu, select Edit > Clone: Drawings Only to clone only the layer or column drawings. The new cloned layer or column appears.
https://docs.toonboom.com/help/harmony-15/essentials/timing/clone-layer-column.html
2019-02-15T22:58:02
CC-MAIN-2019-09
1550247479627.17
[]
docs.toonboom.com
Installing tracking code in Single Page Applications is slightly more complicated in that you cannot simply copy the tracking code and paste it into your page <head>. This is because the idea of an SPA is that the page head is not reloaded all the time, but rather just parts of the page are updated based on data-only (not full page html) requests made in the background. If you have developers implementing custom event tracking, you shouldn't have a problem with this, as an SPA developer should be intimately familiar with this issue as it's core to the concept of SPA's. Suggestions The Snippet and the Config: in the <head> You'll probably still want to load the The Javascript Snippet in your page <head>, along with your tracker configuration (e.g.: woopra.config()). You'll be unable to track all of your visitor's behavior if you simply put your tracking code (e.g.: woopra.track()) in the <head> area though because it will only run once at page load, and subsequent actions taken by the user in your web-app will not be automatically re-tracked. Where to Track This means you will have to implement some level of custom tracking, even if you aren't creating custom events, you will still need to put your calls to woopra.track() into the event handler functions that are relevant. For instance, if you are a media content site, and you have a handler function that runs when a new article is loaded, THAT is where you want to put your call to woopra.track(), maybe even while you're at it, adding a custom event to the call: woopra.track('article view', { author: 'Ralph Samuel', topic: 'Tracking Implementations', title: document.title, url: window.location.href })
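For a fuller sketch of where such calls live in an SPA, the snippet below assumes a hypothetical router object that emits a routeChanged event; the event name and the article fields are placeholders for whatever your framework and data model actually provide, while woopra.track() is the same call shown above ('pv' is Woopra's conventional pageview event name).

```javascript
// Hypothetical router hook: send a pageview-style event on every client-side navigation.
router.on('routeChanged', function (route) {
  woopra.track('pv', {
    url: window.location.pathname,
    title: document.title
  });
});

// Domain-specific events go in the relevant handlers as well,
// e.g. when an article finishes loading in a content SPA:
function onArticleLoaded(article) {
  woopra.track('article view', {
    author: article.author,
    topic: article.topic,
    title: article.title,
    url: window.location.href
  });
}
```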
https://docs.woopra.com/docs/spas
2019-02-15T23:40:11
CC-MAIN-2019-09
1550247479627.17
[]
docs.woopra.com
Disk encryption on GCP
Cloudbreak supports encryption options available on Google Cloud's Compute Engine. Refer to this section if you would like to encrypt key encryption keys used for cluster storage on Google Cloud.
As described in "Protecting resources with Cloud KMS Keys" in the Google Cloud documentation, Compute Engine encrypts customer data at rest by default; what you can additionally control is how the key encryption keys that protect that data are managed. Google Cloud's Compute Engine offers two options for these key encryption keys:
- Using the Cloud Key Management Service to create and manage encryption keys, known as "customer-managed encryption keys" (CMEK).
- Creating and managing your own encryption keys, known as "customer-supplied encryption keys" (CSEK).
When Cloudbreak provisions resources in Compute Engine on your behalf, Compute Engine applies data encryption as usual and you have an option to configure one of these two methods to encrypt the encryption keys that are used for data encryption. Since an encryption option must be specified for each host group, it is possible to either have one encryption key for multiple host groups or to have a separate encryption key for each host group. Once the encryption is configured for a given host group, it is automatically applied to any new devices added as a result of cluster scaling.
Overview of configuring key encryption
In order to configure encryption of the key encryption keys by using a KMS key (CMEK) or a custom key (CSEK):
- You must enable all required APIs and permissions as described in Google Cloud documentation.
- Your encryption key must be in the same project and location where you would like to create clusters.
- The service account used for the Cloudbreak credential must have the minimum permissions.
- When creating a cluster, you must explicitly select an existing encryption option for each host group on which you would like to configure disk encryption.
These requirements are described in detail in the following sections.
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/advanced-cluster-options/content/cb_gcp-ce-encryption.html
2019-02-16T00:21:36
CC-MAIN-2019-09
1550247479627.17
[]
docs.hortonworks.com
If you select this option and the Keep XML File Versions option is checked, old versions of the XML will be stored in a subfolder of the usual storage location, called ....\VF_Versions_\. Usually this is the <<partition execute directory>>\VF_Versions_\ folder. This can help to keep your partition execute directory tidy. Note that if you change this option: This property is in the Framework Details tab.
https://docs.lansa.com/14/en/lansa048/content/lansa/lansa048_4220.htm
2019-02-15T23:51:22
CC-MAIN-2019-09
1550247479627.17
[]
docs.lansa.com
No matches found Try choosing different filters or resetting your filter selections. Salesforce Spring ’17 Release Notes : Favorites, Console Apps, and More Actions Spring ’17 gives you more reasons to love Lightning Experience. Customize your navigation experience with favorites, see multiple records on one screen with console apps, and access more global actions from anywhere, marketing, and communities—and enables anyone to use clicks or code to build AI-powered apps that get smarter with every interaction. Now, everyone in every role and industry can use AI to be their best. - Sales: Artificial Intelligence, Expanded Sales Path, and Smarter Email: A New Name and New Features: Lightning Hits the Console and Knowledge; Field Service Comes to iOS: Lightning Enhancements, One-Stop Data Manager, Personal Wave Home, New Charts, and More: Community Workspaces, Criteria-Based Audiences, Mobile Actions, and More: Company Highlights Feed, Create Custom Feeds, Customize Groups: Folders in Libraries (Beta), Attach Salesforce Files to Records, and Rename Files from the Related List: Do More on the Go: Client and Household Relationship Mapping, Alerts, Client Service Enhancements Advisors can now create, maintain, and visualize clients and households through new relationship groups. Get new client service enhancements, including alerts on a client’s profile page and financial accounts to help advisors keep up with changes to client’s financial accounts. - Health Cloud: Wave for Health Cloud: Risk Stratification, Lead-to-Patient Conversion, and More: More Control Over Record Page Assignments and Flow Screens, Connect to External Services:. - Other Salesforce Products
https://docs.releasenotes.salesforce.com/en-us/spring17/release-notes/salesforce_release_notes.htm
2019-02-16T00:04:39
CC-MAIN-2019-09
1550247479627.17
[]
docs.releasenotes.salesforce.com
This section consists of a few examples for the supported NAT flows in Network Insight.
Example 1
In the above topology, E2, E3, the LDRs, and the VMs (VM1, VM2, VM3, VM4) are part of NAT domain E1. Anything above E1, such as the uplink of E1, is part of the default NAT domain. The above topology consists of the following: The flow from VM1 to VM2 and vice versa is reported in Network Insight. Similarly, the flow from VM3 to VM4 and vice versa is reported.
Example 2
The above topology consists of the following: VM1 and VM2 are part of the E2 domain. VM3 and VM4 are part of the E2 domain. E2 and E3 NAT domains are child domains of the E1 NAT domain. E1 is the single child of the default NAT domain. VM5 and VM6 are part of the E1 NAT domain. In the above topology, the following flows are reported in Network Insight:
- Flow from VM5 to VM6
- Flow from (VM1, VM2) to (VM3, VM4)
https://docs.vmware.com/en/VMware-Network-Insight/services/Using-VMware-Network-Insight/GUID-F9F29FB1-88A8-40E9-9A70-8C45FB913D12.html
2019-02-15T23:43:40
CC-MAIN-2019-09
1550247479627.17
[array(['images/GUID-3F532060-E3E5-426D-A7AC-B3DBF9BE82D5-low.png', None], dtype=object) array(['images/GUID-FB934A28-BA7D-4B22-B3C9-000358323DAE-low.png', None], dtype=object) ]
docs.vmware.com
Welcome to ACHE’s Documentation!
ACHE is a focused web crawler. It collects web pages that satisfy some specific criteria, e.g., pages that belong to a given domain or that contain a user-specified pattern. ACHE differs from generic crawlers in the sense that it uses page classifiers to distinguish between relevant and irrelevant pages in a given domain. A page classifier can be a simple regular expression or a machine-learning based classification model. ACHE supports, among other features:
- Discovery of new relevant web sites through automatic link prioritization
- Configuration of different types of page classifiers (machine-learning, regex, etc.)
- Continuous re-crawling of sitemaps to discover new pages
- Indexing of crawled pages using Elasticsearch
- Web interface for searching crawled pages in real-time
- REST API and web-based user interface for crawler monitoring
- Crawling of hidden services using TOR proxies
Contents:
- Installation
- Running a Focused Crawl
- Running an In-Depth Website Crawl
- Running an In-Depth Website Crawl with Cookies
- Crawling Dark Web Sites on the TOR network
- Target Page Classifiers
- Crawling Strategies
- Data Formats
- Link Filters
- Web Server & REST API
https://ache.readthedocs.io/en/latest/
2019-02-15T23:41:55
CC-MAIN-2019-09
1550247479627.17
[]
ache.readthedocs.io
DescribeRegions Describes one or more regions that are currently available to you. For a list of the regions supported by Amazon EC2, see Regions and Endpoints.. endpoint- The endpoint of the region (for example, ec2.us-east-1.amazonaws.com). region-name- The name of the region (for example, us-east-1). Type: Array of Filter objects Required: No - RegionName.N The names of one or more regions. Type: Array of strings Required: No Response Elements The following elements are returned by the service. Errors For information about the errors that are common to all actions, see Common Client Errors. Examples Example 1 This example displays information about all regions. Sample Request &AUTHPARAMS Example 2 This example displays information about the specified regions only. Sample Request &RegionName.1=us-east-1 &RegionName.2=eu-west-1 &AUTHPARAMS Sample Response <DescribeRegionsResponse xmlns=""> <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId> <regionInfo> <item> <regionName>us-east-1</regionName> <regionEndpoint>ec2.us-east-1.amazonaws.com</regionEndpoint> </item> <item> <regionName>eu-west-1</regionName> <regionEndpoint>ec2.eu-west-1.amazonaws.com</regionEndpoint> </item> </regionInfo> </DescribeRegionsResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
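For instance, with the AWS SDK for Python (boto3), the same operation looks like the sketch below; the filter value is purely illustrative, and the response fields correspond to the regionName/regionEndpoint elements shown above.

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent of Example 2: describe only the specified regions.
response = ec2.describe_regions(RegionNames=["us-east-1", "eu-west-1"])
for region in response["Regions"]:
    print(region["RegionName"], region["Endpoint"])

# The endpoint filter from the Filters table can be used the same way.
filtered = ec2.describe_regions(
    Filters=[{"Name": "endpoint", "Values": ["*us-east-1*"]}]
)
```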
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html
2019-02-15T23:30:28
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
Deleting Paper Textures You can delete unnecessary paper textures from your preset list. - In the Paper Texture library, select a texture. - Do one of the following: NOTE: You can delete any texture in the texture library as long as there is no brush preset using it.
https://docs.toonboom.com/help/harmony-14/paint/drawing/delete-paper-texture.html
2019-02-15T22:58:37
CC-MAIN-2019-09
1550247479627.17
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Character_Design/Brush_Tool/HAR12/HAR12_DeleteTexture.png', None], dtype=object) ]
docs.toonboom.com
Using Trello to manage the launch process¶ To manage the overall launch process, including testing and validation, Microsoft uses a service called Trello. Once you have started the launch process, Microsoft will create a dedicated Trello board to track issues and provide a common communication channel between your team and Microsoft. The board will be pre-populated with cards tracking various questions about your WOPI implementation as well as discussion cards to determine launch dates, etc. If you are new to Trello, you can learn more about it at. Important You should use the Trello board to communicate with Microsoft throughout the launch process. This will ensure that all Office Online team members are aware of the communications, and it provides a straightforward way to isolate conversations about specific issues. Adding people to the board¶ You can invite other relevant people to the board as needed; only people from the Office Online team and people you explicitly invite will have access to see or edit the content of the board. We recommend that you add relevant engineers from your team to the board as well, since many of the discussions will be engineering-focused. You might add designers or business people to the board as well; simply add whomever makes sense for your team. See also Learn how to add more people to your board at. Board structure¶ Figure 3 Example partner launch board in Trello The board structure is fairly basic. There are seven lists, and you can move cards between the lists as needed. The lists serve two main purposes. First, they keep issues organized at a high level, so it is easy to see what issues are being investigated and what has been resolved, etc. In addition, the lists provide a simple way to configure Trello’s notifications such that both you and Microsoft are aware of what requires attention. - Reference: This list contains cards that have reference information, such as current test accounts for the business user flow or known issues that may affect your testing. - New: Microsoft: This list contains new cards that Microsoft needs to be aware of. You should add cards to this list to ensure it is brought to Microsoft’s attention. Any card on this list represents something that Microsoft has not yet acknowledged or taken action on. Once Microsoft is aware of the card, it will be moved to another list like Under Discussion/Investigation for action. - New: Partner: This list contains new cards that you, the Office Online partner, need to take action on. Initially, this list will contain a number of cards tracking various questions about your WOPI implementation or launch plans. As testing is done, Microsoft will create new cards to track implementation issues or additional questions that arise during testing. Like the New: Microsoft list, cards should be moved from this list once they are acknowledged. - Under Discussion/Investigation: This list contains cards that are being discussed or investigated, either by you or Microsoft. Once a resolution is reached on the particular card, it should be moved to the Fix In Progress or Re-verify list. - Fix In Progress: This list contains cards that are in the process of being addressed. These cards may represent a bug fix by you or Microsoft, or a settings change that is in progress, etc. Once the issue is addressed, the card should be moved to the Re-verify list. - Re-verify: This list contains cards that are ready to be verified. 
For example, you may have answered a question about your WOPI implementation, at which point you can move the card to the Re-verify list. Once it has been verified, it can be moved to the Resolved list. If there are follow-up questions or further discussion is needed, the card might be moved back to the Under Discussion/Investigation list. - Resolved: This list contains cards that are resolved, either because the issue has been fixed and verified, or a question has been answered and verified. Card flow¶ With the exception of the left-most Reference list, the lists represent a process flow that issues will go through as they are discussed and addressed. Cards will typically move from left to right, starting at either the New: Microsoft or New: Partner lists, then moving right through the other relevant lists. In some cases, a card might be moved back to a previous list. For example, if a card in the Re-verify list is found to not be resolved, it may be moved back to the Under Discussion/Investigation or Fix In Progress lists. Tip You should always create new cards in either the New: Microsoft or New: Partner lists. That ensures that people are notified about the new cards. See Notifications for more information. Labels¶ Figure 4 Default labels configured for partner boards Labels are used to help flag particular cards for easy filtering. You can filter the board based on the label colors, so it’s easy to focus on items that need to resolved before you can be enabled in the production environment, for example, by filtering to just the red “Production Blocker” cards. Four labels are defined initially: - Production Blocker - Implementation Question - Launch Planning - Resources You should feel empowered to add new labels to your board if you wish. Notifications¶ Trello supports a wide variety of notification options. You can be notified of activity on your board by subscribing to individual cards, lists, or even the whole board. You’ll receive notifications when things that you’re subscribed to are changed. You can configure how these notifications behave in your Trello settings. Tip You can subscribe to an individual card yourself, but you can also be ‘added’ to a card by someone else. When you are added to a card you are automatically subscribed to it. See for more information. See also Learn how to subscribe to items in Trello at. Recommended configuration and best practices¶ By default, Office Online team members will subscribe to the New: Microsoft list. This ensures that they will be notified any time a card is added or moved to that list. We recommend that your team members similarly subscribe to the New: Partner list for the same reason. In addition, we recommend the following: - When you create a new card, subscribe to it so you are notified when it is updated. - The board is pre-populated with cards. Consider subscribing to the cards that you’d like to explicitly track. - You might also choose to subscribe to the entire board, though this can result in ‘notification overload,’ especially early on in the validation process. However, it can be useful after the board activity has lessened to ensure you don’t miss any changes.
https://wopi.readthedocs.io/en/latest/build_test_ship/trello.html
2019-02-15T23:15:36
CC-MAIN-2019-09
1550247479627.17
[array(['../_images/trello_initial.png', 'An example partner launch board in Trello.'], dtype=object) array(['../_images/trello_labels.png', 'The default labels configured for partner boards'], dtype=object)]
wopi.readthedocs.io
Welcome to Octavia! Octavia is an open source, operator-scale load balancing solution designed to work with OpenStack. Octavia was borne out of the Neutron LBaaS project.
https://docs.openstack.org/octavia/queens/reference/introduction.html
2019-02-15T23:59:44
CC-MAIN-2019-09
1550247479627.17
[]
docs.openstack.org
Configure HR Service Management
You can set the configuration of HR Service Management to determine how to handle day-to-day operations.
Before you begin: Role required: admin or hr_admin
About this task: You must be in the global domain to set HR configuration options. Administrators in domains lower than the global domain can view the Configurations page, but cannot modify the settings.
Procedure: Navigate to HR - An option in Human Resources Configuration provides a list of HR profile fields that can be enabled for edit. Understand the difference between how the personal and the employment information fields are updated in the HR profile based on this configuration.
Related Reference: Business rules installed with HR Service Management
Related Topics: Create a request from an inbound email, Agent auto assignment using time zones, Agent auto-assignment using location, Agent auto-assignment using skills
https://docs.servicenow.com/bundle/helsinki-hr-service-delivery/page/product/human-resources/task/t_ConfigureHRServiceManagement.html
2019-02-15T23:51:38
CC-MAIN-2019-09
1550247479627.17
[]
docs.servicenow.com
Creating a Colour Palette T-ANIMPA-003-002. If you are using Harmony Server, see.
https://docs.toonboom.com/help/harmony-14/paint/colour/create-colour-palette.html
2019-02-15T23:31:08
CC-MAIN-2019-09
1550247479627.17
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Trad_Anim/004_Colour/HAR_palette_browser_adv.png', None], dtype=object) array(['../Resources/Images/HAR/Trad_Anim/004_Colour/HAR11_create_palette.png', None], dtype=object) ]
docs.toonboom.com
Knowledge Base - How to access DeepFactor Portal in different AWS subnet types - Sensitive Information and Secrets in Process Environment Remediation - Managing DeepFactor API Tokens - Hybrid libc environments - Privilege Separation and Privilege Dropping - How the DeepFactor Management Portal Communicates With The Outside World - DeepFactor Pre-install Checklist - Creating Multiple Environments - Running HAProxy with DeepFactor
https://docs.deepfactor.io/hc/en-us/sections/360009479054-Knowledge-Base
2021-07-24T00:16:47
CC-MAIN-2021-31
1627046150067.87
[]
docs.deepfactor.io
EFR32xG22-Power-Consumption-Optimization Introduction One of the new features highlighted in EFR32xG22 is the Low System Energy Consumption which can reach 1.40 µA EM2 DeepSleep current with 32 kB RAM retention and RTC running from LFRCO. This document discusses how to measure the minimum current consumption in EFR32xG22 EM2, as well as how to reduce current consumption. Discussion According to the EFR32MG22 data sheet, the typical test conditions should be: VREGVDD = 3.0 V. AVDD = DVDD = IOVDD = RFVDD = PAVDD = 1.8 V from DCDC. Voltage scaling level VSCALE0 in EM2 with TA = 25 °C. When creating an "SoC - Empty" project, the initial sleep current can be measured at around 2 µA in EM2, which is higher than the 1.4 µA mentioned in the data sheet. This is mainly because certain peripherals, such as VCOM and Debug mode, are enabled in the SoC Empty project for development convenience. Users can disable these functions to reduce consumption. The figure below shows the results measured with Energy Profiler in Simplicity Studio, with two current readings. The left one represents the total average current, which includes the significant current increase during reset. The right one is calculated from the user-selected range. According to the "AEM Accuracy and Performance" section in UG172, Energy Profiler is still not accurate enough to measure low-power consumption, especially in Deep Sleep mode. As seen in the figure above, the radio board voltage is at around 3.3 V, which does not correspond to the 3.0 V mentioned in the data sheet because, when using AEM mode, a low noise 3.3 V LDO on the main board is used to power the radio board. To get more accurate results, the following discussion and tests strictly follow the test conditions in the data sheet and use a high-accuracy DC analyzer instead of Energy Profiler. The DC Power Analyzer used in this article is N6705B from Agilent, whose ammeter accuracy is up to 0.025% + 8 nA. It also provides a data logger function with a measurement interval from 20 µs to 60 s so that the average value of the current consumption can be easily calculated. The following section discusses how different testing conditions and peripherals affect current consumption. Supply Voltage First, current consumption is compared with supply voltages of 3.0 V and 3.3 V. The figure below shows testing results of the "SoC Empty" project with different supply voltages. The upper line is the current consumption with a 3.0 V supply voltage while the lower line is with 3.3 V. You can see from the table below that the device consumes more current with a 3.0 V supply voltage than with 3.3 V. This is because the device will maintain constant power in EM2. According to the formula P = U x I, voltage is inversely proportional to current under constant power. Debugger Debug connectivity can be enabled by setting the EM2DBGEN field on the EMU_CTRL register and will consume about 0.5 µA extra supply current. To reduce current consumption, comment out the line below. //Force PD0B to stay on EM2 entry. This allows the debugger to remain connected in EM2 //EMU->CTRL |= EMU_CTRL_EM2DBGEN; DCDC A DC-DC buck converter is a type of switching regulator that efficiently converts a high input voltage to a lower output voltage. It covers a wide range of load currents and provides high efficiency in energy modes EM0, EM1, EM2 and EM3. For more information about DCDC, see AN0948.
// Enable DC-DC converter EMU_DCDCInit_TypeDef dcdcInit = EMU_DCDCINIT_DEFAULT; EMU_DCDCInit(&dcdcInit); DCDC is enabled by default in the SoC Empty project. The figure below shows the current curve comparison with and without DCDC usage after disabling debug mode in the SoC Empty project. You can see from the average current that using DCDC can result in current savings. External Flash The external flash "MX25R8035F" equipped on the BRD4182A radio board is in standby mode by default. Typical current draw in standby mode for the MX25R8035F device used on EFR32 radio boards is 5 µA, which makes observing the difference between VS2 and VS0 voltage scaling levels difficult. Fortunately, JEDEC standard SPI flash memories have a lower-current deep power-down mode, in which the current draw can be up to 0.35 µA but is typically 0.007 µA. The commands below will put the MX25 into deep power-down mode. /* Disable external flash memory*/ MX25_init(); MX25_DP(); MX25_init initializes the SPI Flash and calling MX25_DP sends the byte necessary to put the Flash into DP mode. Voltage Scaling Voltage scaling helps to optimize the energy efficiency of the system by operating at lower voltages when possible. Three supply voltage operating points are available, as shown below: The voltage scale level for EM2 and EM3 is set using the EMU_CTRL_EMU23VSCALE field. The lowest sleep current will be obtained by setting EMU23VSCALE to VSCALE0. EMU_EM23Init_TypeDef em23_init = EMU_EM23INIT_DEFAULT; em23_init.vScaleEM23Voltage = emuVScaleEM23_LowPower; EMU_EM23Init(&em23_init); emuVScaleEM23_LowPower mode (vscale0) and emuVScaleEM23_FastWakeup (vscale2) are two voltage scaling modes in EM2 and EM3. Current reduction between different scaling modes will be shown in the subsequent sections. Radio RAM Retention The EFR32xG22 device contains several blocks of SRAM for various purposes, including general data memory (RAM) and various RF subsystem RAMs (SEQRAM, FRCRAM). Frame Rate Controller SRAM (FRCRAM) and all or part of the Sequencer SRAM (SEQRAM) may be powered down in EM2/EM3 if not required. To control retention of these areas, set FRCRAMRETNCTRL or SEQRAMRETNCTRL in SYSCFG_RADIORAMRETNCTRL to the desired value. /* Disable Radio RAM memories (FRC and SEQ) */ CMU_ClockEnable(cmuClock_SYSCFG, true); SYSCFG->RADIORAMRETNCTRL = 0x103UL; Note: The commands above can only be used in an MCU project. The wireless stacks won't work as expected if the FRCRAM and SEQRAM are disabled. Disabling different radio RAMs will result in different reductions. The following table lists the current draw measured in the MCU project with different RADIO RAM retention settings as well as 32 KB RAM. GPIO All unconnected pins on the EFR32 should be configured to Disabled mode (high impedance, no pull resistor), where the reset state of the I/O pins is disabled as well, which is done by setting the GPIO mode to gpioModeDisabled. See the GPIO setting in MX25_deinit(), which is used to disable SPI communication. MX25_deinit(); Also, if you are reproducing the EM2 current consumption test using one of the examples that come with our SDK (either MCU or Wireless), also check the status of VCOM. Enabling VCOM will increase the current consumption. De-assert the VCOM Enable pin and the TX and RX pins when not needed. //initVcomEnable(); Power Domain The EFR32xG22 implements several independent power domains which are powered down to minimize supply current when not in use. Power domains are managed automatically by the EMU.
It includes lowest-energy power domain (PDHV), low power domain (PD0), low power domain A (PD0A) and auxiliary PD0 power domains (PD0B, PD0C, and so on). When entering EM2 or EM3, if any peripheral on an auxiliary low power domain (PD0B, PD0C, etc.) is enabled, that auxiliary low power domain will be powered causing higher current draw. Otherwise, the auxiliary power domain will be powered down. The entire PD0B will be kept on in EM2/EM3 if any module in PD0B is enabled on EM2/EM3 entry. Therefore, ensure that the High Power peripherals are disabled when entering EM2. Heating Impact Note that the temperature has a major impact on the consumption. The recommended ambient temperature for this test is 25°C, as documented in the data sheet. Note that you don't have to follow the condition to reserve full RAM and use LFRCO. You can either disable RAM retention or use a different oscillator for even lower power consumption. SRAM Retention RAM is divided into two 24 KB and 8 KB banks, beginning at address 0x20000000 and 0x20006000 respectively. By default, both banks are retained in EM2/EM3. Sleep mode current can be significantly reduced by powering down a bank that does not need to be retained. RAMRETENCTRL in the SYSCFG_DMEM_RETNCTRL register controls which banks are retained in EM2/EM3. /* Disable MCU RAM retention */ // EMU_RamPowerDown(SRAM_BASE, SRAM_BASE + SRAM_SIZE); /* Power down BLK0 0x20000000 - 0x20006000: 0x01; BLK1 0x20006000 - 0x20008000: */ CMU_ClockEnable(cmuClock_SYSCFG, true); SYSCFG->DMEM0RETNCTRL = 0x01UL; Disabling different RAM will result in different reductions. The following table lists the current draw measured in the MCU project with different RAM retention and no RADIO RAM retention. Note: No RAM retention does not make sense (achievable but wake up fails). Low Frequency Oscillator Setting The LFRCO is an integrated low-frequency 32.768 kHz RC oscillator for low-power operation without an external crystal. It provides precision mode on certain part numbers which enable hardware that periodically recalibrates the LFRCO against the 38.4 MHz HFXO crystal when temperature changes to provide a fully internal 32.768 kHz clock source with +/- 500 ppm accuracy. With temperature variations, PLFRCO(LFRCO in precision mode) will autonomously run frequent calibrations which results in consumption increase. The Low-Frequency Crystal Oscillator (LFXO) uses an external 32.768 kHz crystal to provide an accurate low-frequency clock. Using LFXO instead of PLFRCO will reduce the current consumption. CMU_LFXOInit_TypeDef lfxoInit = CMU_LFXOINIT_DEFAULT; CMU_LFXOInit(&lfxoInit); CMU_OscillatorEnable(cmuOsc_LFRCO, false, false); CMU_OscillatorEnable(cmuOsc_LFXO, true, true); CMU_ClockSelectSet(cmuClock_LFXO, cmuSelect_LFXO); According to the EFR32xG22 data sheet, MCU current consumption using DC-DC at 3.0 V input in EM2 mode, VSCALE0 is shown below: Note : Entering EM2 mode immediately after reset may brick the device and the debugger may no longer be attached. To fix this, set the WSTK switch next to the battery holder to USB (powers down the EFR). Execute Simplicity Commander with command line parameters "./commander.exe device recover" and then immediately move the switch to the AEM position. Reference - GitHub Peripheral Example - AN969: Measuring Power Consumption on Wireless Gecko Devices - Enabling sleep mode of the MX25 series SPI flash Setting up The example project adopted most of the strategies mentioned above to reduce energy consumption. 
Because the low-power methods implemented in the MCU project and wireless project are quite different, the experiment will be run separately in these two domains. Hardware Environment 1 WSTK Main Development Board 1 EFR32xG22 2.4GHz 6 dBm Radio Board (BRD4182A Rev B04) Software Environment Simplicity Studio SV4.x Gecko SDK v2.7.x Note : This document focuses on the strategies to reduce current consumption. The low power strategies can also be implemented in Simplicity Studio SV5.x with Gecko SDK 3.x. BLE Project Example Example Experiment 1.Create a new SoC - Empty application project with Bluetooth SDK using version 2.13.6 or newer. 2.Open app.c and comment out the code in system_boot to ban advertising to measure the sleep current in EM2. case gecko_evt_system_boot_id: // bootMessage(&(evt->data.evt_system_boot)); // printLog("boot event - starting advertising\r\n"); // // /* Set advertising parameters. 100ms advertisement interval. // * The first parameter is advertising set handle // * The next two parameters are minimum and maximum advertising interval, both in // * units of (milliseconds * 1.6). // * The last two parameters are duration and maxevents left as default. */ // gecko_cmd_le_gap_set_advertise_timing(0, 160, 160, 0, 0); // // /* Start general advertising and enable connections. */ // gecko_cmd_le_gap_start_advertising(0, le_gap_general_discoverable, le_gap_connectable_scannable); break; 3.Comment out EMU_CTRL_EM2DBGEN in init_mcu.c to disable debug in EM2. // EMU->CTRL |= EMU_CTRL_EM2DBGEN; 4.Comment out VCOM in main.c to disable VCOM. // initVcomEnable(); 5.Build the project and download to your radio board xG22. Experiment Results The experiment results show the sleep current consumption in two minutes. You can see from the overall statistics in the table at the bottom and the average current consumption is about 1.65 µA. Because the testing is done in a wireless BLE project (SoC empty project), Radio RAM (both FRC and SEQ) should be retained even in EM2, which consumes about 0.25 µA extra supply current. Therefore, the testing result will be higher than 1.4 µA. If the wireless radio functions are not required, xG22 can reach consumption lower than 1.4 µA in the MCU project. MCU Project Example Example Experiment Import a MCU project from the GitHub example. Choose "File -> import" and browse to import the project below. C:\SiliconLabs\SimplicityStudio\v4\developer\sdks\gecko_sdk_suite\v2.7\peripheral_examples\series2\emu\em23_voltage_scaling\SimplicityStudio Replace main.c with the file attached in this article. Build the project and download it to your radio board xG22. Experiment results You can see from the testing results in the MCU project that the current consumption can reach lower than 1.4 µA. Usage Enabling or disabling different peripherals has different impacts on the current consumption. To reduce current draw, use the adjusted voltage to optimize the energy efficiency of the system. Additionally, adopt different strategies depending on your requirements to reach the minimum consumption. To reproduce and check the test results, see the above example section. Note: Although Energy Profiler is not accurate enough for low-power measurements, it is able to detect changes in the current consumption as small as 100 nA. It is always recommended to use higher accuracy equipment if applicable. Source main.c
https://docs.silabs.com/bluetooth/2.13/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization
2021-07-24T00:17:12
CC-MAIN-2021-31
1627046150067.87
[array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/soc-empty-energy-profiler.png', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/agilent-n6705b.jpg', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/input-voltage-comparison.png', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/dcdc-comparison.png', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/datasheet-mcu-current-consumption.png', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/soc-empty-disable-debug.png', None], dtype=object) array(['/resources/bluetooth/code-examples/peripheral/EFR32xG22-Power-Consumption-Optimization/images/mcu-noradioram-32ram-v0.png', None], dtype=object) ]
docs.silabs.com
Anveo Mobile App Version 10 Released
The Anveo Development team released Anveo Mobile App version 10 on December 14, 2018. Please find the release for download in the Anveo Partner Portal now. The new release includes many interesting new features, such as:
- Self-compilable Dynamics objects for versions 2013 to 2018 and BC
If you upgrade from a previous version, make sure to request a new Anveo license before upgrading. Please provide a list of the company names you require in your installation. Find more information about the new Anveo license here. We are looking forward to your feedback. If you need support on the new features or on updating your existing installation, please have a look at our new documentation and knowledge base on this website. Of course, we would like to help you personally if required. Please contact our team at [email protected].
https://docs.anveogroup.com/en/anveo-mobile-app/anveo-mobile-app-version-10-released/
2021-07-24T01:47:21
CC-MAIN-2021-31
1627046150067.87
[]
docs.anveogroup.com
Page Tree Software Knowledgebases Remote Engineering and Multi-User Collaborative Development is a powerful tool that helps shorten a project's development time. By utilizing this feature, a project development team can work together at the same time on the same project without the need to do any sort of merging, importing, etc. In addition, this feature can also be used to centralize projects in a server and edit projects remotely. To use this feature, the following system requirements need to be met: Engineering users can configure the server project by using workstations, which are attached to the network, instead of having to work on the server itself. To use this feature on all computers connected to the same network, the following configurations are required: Open the welcome screen and navigate to the License tab. You should see a square box with information on engineering. Under the Engineering User field, you will find the amount of concurrent engineering users that are supported by the current license. Make sure you have the TWebServer (or IIS with services installed) running and make notice of which port it is running on. This information is available in the icon tray. A project cannot be opened local and remote simultaneously. If this occurs, anyone that remotely connects to the project will view the project as ReadOnly. For multiple engineering users to edit the same project, the project server must also be connected to itself. Open the welcome screen and navigate to the Server tab. Under Project Server, click on Remote and insert the remote server's IP address and TWebServer port number using the following syntax: http://<Server IP Address>:<Port Number>/ Once the connection is established, navigate to the Projects tab to see the projects that are on the server computer. After a project is opened and edited, it can be executed using a few different settings. In Run-Startup, the Startup Computer can be configured either as Local or Project Server. In Run-Modules, you should see two options regarding the displays: The diagnostic tools (Property Watch, Trace Window and Module Information) can be accessed through the TStartup buttons or through the command line (Server Domain). Table configurations for elements like Tags, Alarm, Dataset, etc are accessed without any semaphore or lock control. The current, valid configuration will be whatever configuration was applied by the last user. Around every 10 seconds, the system will synchronize any modified configurations with every other client. For example, if someone creates a tag, this tag will be available for every other remote user after a few seconds. This will also happen with any other sort of spreadsheet configuration. In documents like Displays, Scripts, and Reports, there is a LockState and LockOwner property that shows if the page is locked and who it is locked by. Only one user can edit these types of documents at a time. There will be an indicator at the top right whenever someone is editing a document. In the display, scripts and reports sections, the user can see the LockState and LockOwner, including the remote computer where the document is already opened. If a document is locked and you cannot edit it, you can force the document to be unlocked. In FactoryStudio's Welcome display, you can close the current editors connection with the server by clicking on the drop connection button.
https://docs.tatsoft.com/display/DOC91/Remote+Engineering+and+Multi-User+Collaborative+Development
2021-07-24T00:18:10
CC-MAIN-2021-31
1627046150067.87
[]
docs.tatsoft.com
Ring Group
To set up a Ring Group:
- Define the extension number that you want to designate as the Ring Group number at [PBX admintool]>[Users]>[New user], enter the ring group number, and click OK.
- Enter the extensions of the group in the [Call forwarding settings]>[Forwarding destinations*] field.
If extensions in the Ring Group also have their own forwarding destinations, those destination phones also ring when there is a call to the Ring Group extension.
https://docs.brekeke.com/pbx/ring-group
2021-07-24T02:30:55
CC-MAIN-2021-31
1627046150067.87
[]
docs.brekeke.com
How can I populate data from Google Spreadsheet? Step 1 Go to your Google Drive and create a new spreadsheet document. Step 2 The first line of the spreadsheet should contain the series titles. The second line of the spreadsheet must contain the type of series’ data. You may use one of six data types: string, number, boolean, date, time date, and time of day. The other lines need to contain data values according to the data type chosen. Step 3 After you enter your data, save the spreadsheet and click on "File" > "Publish to the Web…" menu item. Step 4 In the new dialog, you need to click the "Start publishing" button. A link to the published item appears in the publishing dialog. Choose "CSV (comma-separated values)" type and select the box below "Get a link to published data" sub header and copy link to the published spreadsheet at the text area below. Step 5 After you've published your spreadsheet, create a new chart or edit existing one and go to the second step, which allows you to upload your CSV file. Expand the "Import data from URL" sidebar menu and click on the "One time import" menu. Then enter the published link into the URL field and click "Import". After you have entered the link, it will be automatically uploaded and a chart preview with the data will be updated. Use a subset of the spreadsheet Let's say you have a document with many columns but you only want to use a subset of the data. You can fetch a subset of the data using the Google Query Language. Let's assume you have the following columns in the sheet and want to extract only columns A, C and E from it. Step 1 Click on "File" > "Publish to the Web…" menu item. Step 2 In the new dialog, you need to click the "Start publishing" button. A link to the published item appears in the publishing dialog. Do not choose Choose "CSV (comma-separated values)" type. Step 3 Now that your spreadsheet is published, it will be accessible to Visualizer once the link is provided. To obtain the correct link for the next steps, click the share button in the top right: Then, on the popup, click "copy link" and save this link. It should look like this: Step 4 Using Google Query Language write down the query that most accurately fetches the subset of data. In this example, the query would be select A,C,E Use the encoder (found on that same page) to get the correctly encoded query and copy it. Put the query into this string: gviz/tq?tq=<QUERY>&tqx=out:csv In our example, we would obtain this: gviz/tq?tq=SELECT%20A%2CC%2CE&tqx=out:csv Step 5 Now all we have to do is put things together to create the final link This was our link from step 3: And this was our query URL from step 4: gviz/tq?tq=SELECT%20A%2CC%2CE&tqx=out:csv Now, let's remove the last part of the url (edit?usp=sharing) and replace it with our new query string. The result should look like this: You can now provide this link within Visualizer for its data to be imported, as it is done in Step 5 of the previous section.
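If you want to sanity-check the constructed query URL outside of Visualizer, a small script can fetch and print the returned CSV. The spreadsheet ID and query below are placeholders that follow the URL pattern described above.

```python
import csv
import io
import urllib.parse
import urllib.request

# Placeholder spreadsheet ID and query, following the pattern described above.
sheet_id = "YOUR_SPREADSHEET_ID"
query = "select A,C,E"

url = (
    f"https://docs.google.com/spreadsheets/d/{sheet_id}/"
    f"gviz/tq?tq={urllib.parse.quote(query)}&tqx=out:csv"
)

with urllib.request.urlopen(url) as response:
    data = response.read().decode("utf-8")

# Print each CSV row returned by the query.
for row in csv.reader(io.StringIO(data)):
    print(row)
```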
https://docs.themeisle.com/article/607-how-can-i-populate-data-from-google-spreadsheet
2021-07-24T02:11:58
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8c7cc2c7d3a0e93677e6b/file-LtDbTWU8xM.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8c95f2c7d3a0e93677e7a/file-PlbKfTBfWE.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8c9ec0428630750929938/file-Y5HGXA8Vgt.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8cac32c7d3a0e93677e8c/file-D6DKUXx8HF.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8cae70428630750929947/file-iykdGMPLQP.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/58eb9949dd8c8e5c57314133/file-rB8piswVAi.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5c61b248042863543cccd651/file-FtptlPAyUz.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8c9ec0428630750929938/file-Y5HGXA8Vgt.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8cac32c7d3a0e93677e8c/file-D6DKUXx8HF.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5c61b3af042863543cccd667/file-fLTf8Czvbr.png', None], dtype=object) ]
docs.themeisle.com
Registries Registration is the process of taking the objects of a mod (such as items, blocks, sounds, etc.) and making them known to the game. Registering things is important, as without registration the game will simply not know about these objects, which will cause unexplainable behaviors and crashes. Most things that require registration in the game are handled by the Forge registries. A registry is an object similar to a map that assigns values to keys. Forge uses registries with ResourceLocation keys to register objects. This allows the ResourceLocation to act as the “registry name” for objects. The registry name for an object may be accessed with #getRegistryName/ #setRegistryName. The setter can only be called once; calling it twice results in an exception. Every type of registrable object has its own registry. To see all registries supported by Forge, see the ForgeRegistries class. All registry names within a registry must be unique. However, names in different registries will not collide. For example, there’s a Block registry, and an Item registry. A Block and an Item may be registered with the same name example:thing without colliding; however, if two different Blocks or Items were registered with the same exact name, the second object will override the first. Methods for Registering There are two proper ways to register objects: the DeferredRegister class, and the RegistryEvent$Register lifecycle event. DeferredRegister DeferredRegister is the newer and documented way to register objects. It allows the use and convenience of static initializers while avoiding the issues associated with it. It simply maintains a list of suppliers for entries and registers the objects from those suppliers during the proper RegistryEvent$Register event. An example of a mod registering a custom block: private static final DeferredRegister<Block> BLOCKS = DeferredRegister.create(ForgeRegistries.BLOCKS, MODID); public static final RegistryObject<Block> ROCK_BLOCK = BLOCKS.register("rock", () -> new Block(AbstractBlock.Properties.of(Material.STONE))); public ExampleMod() { BLOCKS.register(FMLJavaModLoadingContext.get().getModEventBus()); } Register events The RegistryEvents are the second and more flexible way to register objects. These events are fired after the mod constructors and before the loading of configs. The event used to register objects is RegistryEvent$Register<T>. The type parameter T should be set to the type of the object being registered. Calling #getRegistry will return the registry, upon which objects are registered with #register or #registerAll. Here is an example: (the event handler is registered on the mod event bus) @SubscribeEvent public void registerBlocks(RegistryEvent.Register<Block> event) { event.getRegistry().registerAll(new Block(...), new Block(...), ...); } Note Some classes cannot by themselves be registered. Instead, *Type classes are registered, and used in the formers’ constructors. For example, TileEntity has TileEntityType, and Entity has EntityType. These *Type classes are factories that simply create the containing type on demand. These factories are created through the use of their *Type$Builder classes. 
An example: ( REGISTER refers to a DeferredRegister<TileEntityType>) public static final RegistryObject<TileEntityType<ExampleTile>> EXAMPLE_TILE = REGISTER.register( "example_tile", () -> TileEntityType.Builder.of(ExampleTile::new, EXAMPLE_BLOCK.get()).build(null) ); Referencing Registered Objects Registered objects should not be stored in fields when they are created and registered. They are to be always newly created and registered whenever their respective RegistryEvent$Register event is fired. This is to allow dynamic loading and unloading of mods in a future version of Forge. Registered objects must always be referenced through a RegistryObject or a field with @ObjectHolder. Using RegistryObjects RegistryObjects can be used to retrieve references to registered objects once they are available. These are used by DeferredRegister to return a reference to the registered objects. Their references are updated after their corresponding registry’s RegistryEvent$Register event is fired, along with the @ObjectHolder annotations. To get a RegistryObject, call RegistryObject#of with a ResourceLocation and the IForgeRegistry of the registrable object. Custom registries can also be used by giving a supplier of the object’s class. Store the RegistryObject in a public static final field, and call #get whenever you need the registered object. An example of using RegistryObject: public static final RegistryObject<Item> BOW = RegistryObject.of(new ResourceLocation("minecraft:bow"), ForgeRegistries.ITEMS); // assume that ManaType is a valid registry, and 'neomagicae:coffeinum' is a valid object within that registry public static final RegistryObject<ManaType> COFFEINUM = RegistryObject.of(new ResourceLocation("neomagicae", "coffeinum"), () -> ManaType.class); Using @ObjectHolder Registered objects from registries can be injected into the public static fields by annotating classes or fields with @ObjectHolder and supplying enough information to construct a ResourceLocation to identify a specific object in a specific registry. The rules for @ObjectHolder are as follows: - If the class is annotated with @ObjectHolder, its value will be the default namespace for all fields within if not explicitly defined - If the class is annotated with @Mod, the modid will be the default namespace for all annotated fields within if not explicitly defined - A field is considered for injection if: - it has at least the modifiers public static; - one of the following conditions are true: - the enclosing class has an @ObjectHolderannotation, and the field is final, and: - the name value is the field’s name; and - the namespace value is the enclosing class’s namespace - An exception is thrown if the namespace value cannot be found and inherited - the field is annotated with @ObjectHolder, and: - the name value is explicitly defined; and - the namespace value is either explicitly defined or the enclosing class’s namespace - the field type or one of its supertypes corresponds to a valid registry (e.g. Itemor ArrowItemfor the Itemregistry); - An exception is thrown if a field does not have a corresponding registry. 
- An exception is thrown if the resulting ResourceLocationis incomplete or invalid (non-valid characters in path) - If no other errors or exceptions occur, the field will be injected - If all of the above rules do not apply, no action will be taken (and a message may be logged) @ObjectHolder-annotated fields are injected with their values after their corresponding registry’s RegistryEvent$Register event is fired, along with the RegistryObjects. Note If the object does not exist in the registry when it is to be injected, a debug message will be logged and no value will be injected. As these rules are rather complicated, here are some examples: @ObjectHolder("minecraft") // Inheritable resource namespace: "minecraft" class AnnotatedHolder { public static final Block diamond_block = null; // No annotation. [public static final] is required. // Block has a corresponding registry: [Block] // Name path is the name of the field: "diamond_block" // Namespace is not explicitly defined. // So, namespace is inherited from class annotation: "minecraft" // To inject: "minecraft:diamond_block" from the [Block] registry @ObjectHolder("ambient.cave") public static SoundEvent ambient_sound = null; // Annotation present. [public static] is required. // SoundEvent has a corresponding registry: [SoundEvent] // Name path is the value of the annotation: "ambient.cave" // Namespace is not explicitly defined. // So, namespace is inherited from class annotation: "minecraft" // To inject: "minecraft:ambient.cave" from the [SoundEvent] registry // Assume for the next entry that [ManaType] is a valid registry. @ObjectHolder("neomagicae:coffeinum") public static final ManaType coffeinum = null; // Annotation present. [public static] is required. [final] is optional. // ManaType has a corresponding registry: [ManaType] (custom registry) // Resource location is explicitly defined: "neomagicae:coffeinum" // To inject: "neomagicae:coffeinum" from the [ManaType] registry public static final Item ENDER_PEARL = null; // No annotation. [public static final] is required. // Item has a corresponding registry: [Item]. // Name path is the name of the field: "ENDER_PEARL" -> "ender_pearl" // !! ^ Field name is valid, because they are // converted to lowercase automatically. // Namespace is not explicitly defined. // So, namespace is inherited from class annotation: "minecraft" // To inject: "minecraft:ender_pearl" from the [Item] registry @ObjectHolder("minecraft:arrow") public static final ArrowItem arrow = null; // Annotation present. [public static] is required. [final] is optional. // ArrowItem does not have a corresponding registry. // ArrowItem's supertype of Item has a corresponding registry: [Item] // Resource location is explicitly defined: "minecraft:arrow" // To inject: "minecraft:arrow" from the [Item] registry public static Block bedrock = null; // No annotation, so [public static final] is required. // Therefore, the field is ignored. public static final ItemGroup group = null; // No annotation. [public static final] is required. // ItemGroup does not have a corresponding registry. // No supertypes of ItemGroup has a corresponding registry. // Therefore, THIS WILL PRODUCE AN EXCEPTION. } class UnannotatedHolder { // Note the lack of an @ObjectHolder annotation on this class. @ObjectHolder("minecraft:flame") public static final Enchantment flame = null; // Annotation present. [public static] is required. [final] is optional. // Enchantment has corresponding registry: [Enchantment]. 
// Resource location is explicitly defined: "minecraft:flame" // To inject: "minecraft:flame" from the [Enchantment] registry public static final Biome ice_flat = null; // No annotation on the enclosing class. // Therefore, the field is ignored. @ObjectHolder("minecraft:creeper") public static Entity creeper = null; // Annotation present. [public static] is required. // Entity does not have a corresponding registry. // No supertypes of Entity has a corresponding registry. // Therefore, THIS WILL PRODUCE AN EXCEPTION. @ObjectHolder("levitation") public static final Potion levitation = null; // Annotation present. [public static] is required. [final] is optional. // Potion has a corresponding registry: [Potion]. // Name path is the value of the annotation: "levitation" // Namespace is not explicitly defined. // No annotation in enclosing class. // Therefore, THIS WILL PRODUCE AN EXCEPTION. } Creating Custom Registries Custom registries are created by using RegistryBuilder during the RegistryEvent$NewRegistry event. The class RegistryBuilder takes certain parameters (such as the name, the Class of its values, and various callbacks for different events happening on the registry). Calling RegistryBuilder#create will result in the registry being built, registered to the RegistryManager, and returned to the caller for additional processing. The Class of the value of the registry must implement IForgeRegistryEntry, which defines that #setRegistryName and #getRegistryName can be called on the objects of that class. It is recommended to extend ForgeRegistryEntry, the default implementation instead of implementing the interface directly. When #setRegistryName(String) is called with a string, and that string does not have an explicit namespace, its namespace will be set to the current modid. The Forge registries can be accessed through the ForgeRegistries class. Any other registries can be stored and cached during their associated RegistryEvent$Register.
https://mcforge.readthedocs.io/en/1.16.x/concepts/registries/
2021-07-24T02:38:51
CC-MAIN-2021-31
1627046150067.87
[]
mcforge.readthedocs.io
The Engagement Planner to handle Sequences interactions

The Engagement Planner page is the primary window for Revenue Grid (RG) users, where you can monitor and manage the outcomes of the Sequences you initiate. Here you receive replies to your automated emails, system notifications, and To-Do tasks. Every item is logged in chronological order as a node in the Sequence flow visualization.

Important

Note that you cannot use the Engagement Planner page for continuous email communication the way you would use an email box: through the Planner you can only send emails to a prospect as email-type steps within an established Sequence, or reply to a standalone email from a prospect.

Search through and Filter by¶

Over time, you may accumulate too many items on the Engagement Planner. For faster navigation, use:
- the Search field, to find specific items by recipient's name, email, or message subject.
- the Filter by drop-down, to pick all items of a specific Sequence.
- the Filter by type and label, to narrow down items shown in the planner. Types you can pick are: Email drafts, Other email, Auto-reply, Bounce, Call, SMS, Misc. Labels you can pick are: Colleague's reply, Owner change, Lead conversion, Sending failure, Field merge issue.

To filter the planner to past and due items only, select Past and due items in the Filter drop-down.

Tabs of Engagement Planner¶

The items are categorized between the following tabs:
- Sequence replies - the prospects’ email replies to your sequence steps. Note that automatic replies and email delivery failure notifications also end up in this category.
- Notifications - here you receive standalone emails from recipients who exist on your Revenue Grid Audience page. Note that late correspondence from a prospect who had been enrolled into a sequence that has already ended also ends up on this tab.
- To-Dos (Tasks) - items which require your action.

What does the green icon mean?

When you see a green icon in the planner, it means that the item is associated with your delegator’s Sequence.
https://docs.revenuegrid.com/articles/Planner/
2021-07-24T01:55:07
CC-MAIN-2021-31
1627046150067.87
[array(['../../assets/images/Planner/Planner.png', 'Engagement Planner'], dtype=object) array(['../../assets/images/Planner/planner-folders.png', None], dtype=object) array(['../../assets/images/faq/fb.png', None], dtype=object)]
docs.revenuegrid.com
Reading and Writing values¶

Some instruments allow transferring larger datasets to and from the computer with a single query. A typical example is an oscilloscope, which you can query for the whole voltage trace. Or an arbitrary wave generator, to which you have to transfer the function you want to generate.

Basically, data like this can be transferred in two ways: in ASCII form (slow, but human readable) and binary (fast, but more difficult to debug).

PyVISA Message Based Resources have different methods for this called read_ascii_values(), query_ascii_values() and read_binary_values(), query_binary_values().

Reading ASCII values¶

If your oscilloscope (open in the variable inst) has been configured to transfer data in ASCII when the CURV? command is issued, you can just query the values like this:

>>> values = inst.query_ascii_values('CURV?')

values will be a list containing the values from the device. In many cases you do not want a list but rather a different container type such as a numpy.array. You can of course cast the data afterwards like this:

>>> values = np.array(inst.query_ascii_values('CURV?'))

but sometimes it is much more efficient to avoid the intermediate list, and in this case you can just specify the container type in the query:

>>> values = inst.query_ascii_values('CURV?', container=numpy.array)

In container, you can have any callable/type that takes an iterable.

Note

When using numpy.array or numpy.ndarray, PyVISA will use numpy routines to optimize the conversion by avoiding the use of an intermediate representation.

Some devices transfer data in ASCII but as hex or oct values rather than decimal numbers. Or you might want to receive an array of strings. In that case you can specify a converter. For example, if you expect to receive integers as hex:

>>> values = inst.query_ascii_values('CURV?', converter='x')

converter can be one of the Python string formatting codes. But you can also specify a callable that takes a single argument if needed. The default converter is 'f'.

Finally, some devices might return the values separated in an uncommon way. For example, if the returned values are separated by a '$' you can do the following call:

>>> values = inst.query_ascii_values('CURV?', separator='$')

You can also provide a function that takes a string and returns an iterable. The default value for the separator is ',' (comma).

Reading binary values¶

If your oscilloscope (open in the variable inst) has been configured to transfer data in BINARY when the CURV? command is issued, you need to know which datatype (e.g. uint8, int8, single, double, etc.) is being used. PyVISA uses the same naming convention as the struct module. You also need to know the endianness. PyVISA assumes little-endian as default. If you have doubles d in big endian the call will be:

>>> values = inst.query_binary_values('CURV?', datatype='d', is_big_endian=True)

You can also specify the output container type, just as it was shown before.

By default, PyVISA will assume that the data block is formatted according to the IEEE convention. If your instrument uses HP data blocks you can pass header_fmt='hp' to read_binary_values. If your instrument does not use any header for the data, simply pass header_fmt='empty'.

By default PyVISA assumes that the instrument will add the termination character at the end of the data block and makes sure it reads it to avoid issues. This behavior fits a number of devices well. However, some devices omit the termination character, in which case the operation will time out.
In this situation, first make sure you can actually read from the instrument by reading the answer using the read_raw function (you may need to call it multiple times), and check that the advertised length of the block matches what you get from your instrument (plus the header). If so, you can safely pass expect_termination=False, and PyVISA will not look for a termination character at the end of the message.

If you can read without any problem from your instrument, but cannot retrieve the full message when using this method (VI_ERROR_CONN_LOST, VI_ERROR_INV_SETUP, or Python simply crashes), try passing different values for chunk_size (the default is 20*1024). The underlying mechanism for this issue is not clear, but changing chunk_size has been used to work around it. Note that using larger chunk sizes for large transfers may result in a speed up of the transfer.

In some cases, the instrument may use a protocol that does not indicate how many bytes will be transferred. The Keithley 2000, for example, always returns the full buffer, whose size is reported by the 'trace:points?' command. Since a binary block may contain the termination character, PyVISA needs to know how many bytes to expect. For those cases, you can pass the expected number of points using the data_points keyword argument. The number of bytes will be inferred from the datatype of the block.

Writing ASCII values¶

To upload a function shape to an arbitrary wave generator, the command might be WLISt:WAVeform:DATA <waveform name>,<function data> where <waveform name> tells the device under which name to store the data.

>>> values = list(range(100))
>>> inst.write_ascii_values('WLISt:WAVeform:DATA somename,', values)

Again, you can specify the converter code.

>>> inst.write_ascii_values('WLISt:WAVeform:DATA somename,', values, converter='x')

converter can be one of the Python string formatting codes. But you can also specify a callable that takes a single argument if needed. The default converter is 'f'.

The separator can also be specified just like in query_ascii_values.

>>> inst.write_ascii_values('WLISt:WAVeform:DATA somename,', values, converter='x', separator='$')

You can also provide a function that takes an iterable and returns a string. The default value for the separator is ',' (comma).

Writing binary values¶

To upload a function shape to an arbitrary wave generator, the command might be WLISt:WAVeform:DATA <waveform name>,<function data> where <waveform name> tells the device under which name to store the data.

>>> values = list(range(100))
>>> inst.write_binary_values('WLISt:WAVeform:DATA somename,', values)

Again, you can specify the datatype and endianness.

>>> inst.write_binary_values('WLISt:WAVeform:DATA somename,', values, datatype='d', is_big_endian=False)

When things are not what they should be¶

PyVISA provides an easy way to transfer data from and to the device. The methods described above work fine for 99% of the cases, but there is always a particular device that does not follow any of the standard protocols and is so different that it cannot be handled with the arguments provided above. In those cases, you need to get the data:

>>> inst.write('CURV?')
>>> data = inst.read_raw()

and then you need to implement the logic to parse it.

Alternatively, if the read_raw call fails you can try to read just a few bytes using:

>>> inst.write('CURV?')
>>> data = inst.read_bytes(1)

If this call fails, it may mean that your instrument did not answer, either because it needs more time or because your first instruction was not understood.
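If you do end up parsing the data yourself, a common case is an IEEE-488.2 definite-length block returned by read_raw. The sketch below is a minimal, hedged example of decoding such a block; the resource address, the CURV? command, and the 16-bit signed integer datatype are assumptions for illustration and must be adapted to your instrument.

import struct

import numpy as np
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::12::INSTR")  # hypothetical address

inst.write("CURV?")
raw = inst.read_raw()

# IEEE definite-length block: b'#', one digit giving the number of length
# digits, the payload length itself, then the payload bytes.
assert raw[0:1] == b"#"
n_length_digits = int(raw[1:2])
payload_len = int(raw[2:2 + n_length_digits])
start = 2 + n_length_digits
payload = raw[start:start + payload_len]

# Assumed datatype: little-endian 16-bit signed integers ('h' in struct
# notation); change the format character to match your instrument.
values = np.array(struct.unpack("<%dh" % (payload_len // 2), payload))
print(values[:10])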
https://pyvisa.readthedocs.io/en/1.10.0/introduction/rvalues.html
2021-07-24T00:22:52
CC-MAIN-2021-31
1627046150067.87
[]
pyvisa.readthedocs.io
Contents WS_FTP Server Overview What is WS_FTP Server? System requirements for WS_FTP Server How FTP works How SSH works Activating WS_FTP Server for new or upgraded licenses Sending feedback Learning about WS_FTP Server Manager Understanding the server architecture Understanding the relationship between listeners and hosts Accessing the WS_FTP Server Manager Managing WS_FTP Server remotely Navigating the WS_FTP Server Manager Using the top menu Configuring and Managing WS_FTP Server Setting global options Starting and stopping the server Changing the default host Changing the host separator Configuring DNS for hosts Activating FIPS Mode Configuring the WS_FTP Server database Configuring a PostgreSQL database Configuring a Microsoft SQL Server database Configuring Hosts About hosts Choosing host configuration Creating hosts Associating hosts with listeners Configuring an external user database Microsoft Windows user database ODBC user database Microsoft Active Directory user database LDAP user database Troubleshooting an LDAP connection and query Synchronizing external user databases Synchronizing external user databases from the command line Using Windows file permissions Changing user context via user impersonation Changing user context on the services Setting host options Setting folder listings to use local time Setting maximum number of connections Enabling anonymous access Controlling access by IP address Using firewalls with WS_FTP Server What is a NAT firewall? Enabling disabled users Using banner, welcome and exit messages Using message variables Disabling the default banner message Setting timeouts for connections Limiting connections to a host Deleting hosts Renaming hosts Managing hosts from the command line About Impersonation Settings Configuring SITE commands Creating a SITE command Securing SITE commands Configuring Listeners About listeners Creating Listeners Configuring listeners for SSH Configuring listeners for SSL Managing User Accounts How user accounts work Setting user options for hosts Configuring user settings Changing user passwords Enabling disabled users from the command line Resetting a user's failed login count Understanding administrator privileges Granting administrative privileges Creating user accounts Setting users' home folders Renaming a user account Deleting user accounts Disabling user accounts Managing users from the command line Managing User Groups How user groups work Creating user groups Adding users to user groups Removing users from a group Deleting user groups Managing Folders and Files Managing folders About virtual folders Creating, editing, and deleting virtual folders Understanding limitations of virtual folders Managing folder permissions Understanding folder permissions How WS_FTP Server determines permissions Setting Folder Permissions Using Windows permissions Checking file integrity Cleaning Up Old Files and Empty Subfolders Using Rules and Notifications Rules overview About bandwidth limits Creating bandwidth limits About failed login rules Creating failed login rules About folder action rules Creating folder action rules About quota limit rules Creating quota limit rules About notifications Configuring the Notification Server About email notifications Creating email notifications About pager notifications Creating a pager notification About SMS notifications Creating SMS notifications Using notification variables Using SSL What is SSL? 
Understanding SSL terminology SSL Terminology Choosing a type of SSL Configuring implicit SSL Common SSL configurations Selecting an SSL certificate Importing an SSL certificate Creating an SSL certificate Selecting an SSL Security Level Disabling SSL Requiring SSL for specific folders Requesting client certificates Using SSH What is SSH? How does SSH work? Understanding SSH terminology Selecting methods of authentication Configuring multi-factor authentication Selecting an SSH host key Creating an SSH host key Selecting SSH user keys Importing SSH user keys Creating SSH user keys Specifying MACs and ciphers Using SCP2 What is SCP? SCP2 support in WS_FTP Server with SSH Enabling SCP2 connections in WS_FTP SSH server Examples of SCP2 transfers Summary of supported SCP2 options Using the Log About the log Configuring log settings Viewing the log Logging multiple servers to one log server Managing Connections in Real-time Monitoring active sessions Terminating an active session Viewing server statistics Protecting against IP connection attacks About IP Lockouts Configuring IP Lockouts Maintaining the Server Backing up WS_FTP Server Restoring WS_FTP Server from backup Maintaining the WS_FTP Server data store Maintaining a WS_FTP Server failover cluster Backing up WS_FTP Server in a failover cluster Restoring WS_FTP Server in a failover cluster Server Modules About WS_FTP Server Web Transfer Module About Ad Hoc Transfer Module RFC 959 Highlights Overview of RFC 959 FTP commands FTP replies Index
https://docs.ipswitch.com/WS_FTP_Server77/Help/Server/toc.htm
2021-07-24T00:31:02
CC-MAIN-2021-31
1627046150067.87
[]
docs.ipswitch.com
Introduction

For more information, please read the block description.

Block type: PROCESSING

This block computes the Normalized Difference Vegetation Index (NDVI) from Pléiades, SPOT or Hexagon images. It can only process outputs from the data blocks Pléiades Reflectance (Download) or SPOT 6/7 Reflectance (Download). The data blocks first need to be converted to GeoTIFF with the processing blocks DIMAP -> GeoTIFF Conversion or Pan-sharpening SPOT/Pléiades.

NDVI is used as an indicator of vegetation health and is computed via the following formula:

NDVI = (NIR - Red) / (NIR + Red)

Supported parameters

output_original_raster: If enabled, the original reflectance raster file is output in addition to the NDVI image.

Example parameters using the data block Pléiades Reflectance (Download) and the processing block Pan-sharpening Pléiades/SPOT:

{
  "oneatlas-pleiades-fullscene:1": {
    "ids": null,
    "bbox": [
      112.92237013578416,
      0.8438737493332328,
      112.93480049818756,
      0.8715453455591357
    ],
    "time": null,
    "limit": 1,
    "order_ids": null,
    "time_series": null
  },
  "pansharpen:1": {
    "method": "SFIM",
    "include_pan": false
  },
  "ndvi:1": {
    "output_original_raster": false
  }
}

Output format

Input and output format are both GeoTIFF, but the input bands are of data type unsigned integer, while the output is of type float. All metadata elements provided by the input dataset as properties are propagated to the output tiles.
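For reference, the formula above is straightforward to reproduce locally on a reflectance GeoTIFF. The sketch below is a minimal example using rasterio and numpy; the file name and the band order (red in band 1, NIR in band 4) are assumptions for illustration and depend on the band layout of your input product.

import numpy as np
import rasterio

# Hypothetical input file produced by the reflectance/pan-sharpening blocks.
with rasterio.open("reflectance.tif") as src:
    red = src.read(1).astype("float32")   # assumed band order: red = band 1
    nir = src.read(4).astype("float32")   # assumed band order: NIR = band 4
    profile = src.profile

# Avoid division by zero where both bands are 0.
denominator = nir + red
ndvi = np.where(denominator == 0, 0, (nir - red) / denominator)

profile.update(dtype="float32", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi.astype("float32"), 1)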
https://docs.up42.com/blocks/processing/ndvi/
2021-07-24T00:22:21
CC-MAIN-2021-31
1627046150067.87
[]
docs.up42.com
{"metadata":{"image":[],"title":"","description":""},"api":{"url":"","auth":"required","results":{"codes":[]},"settings":"","params":[]},"next":{"description":"","pages":[]},"title":"ICGC metadata","type":"basic","slug":"icgc-metadata","excerpt":"","body":"##Overview\n\nMetadata is data that describes other data. On this page, we've detailed ICGC metadata that are available for viewing and filtering ICGC data in the Data Browser on the CGC. ICGC metadata on the CGC consists of **properties** which describe the **entities** of the ICGC dataset and their values.The ICGC PCAWG Study dataset includes data from 20 different research projects conducted at participating centers around the world, and differences exist in the ontologies used across centers. Note that all metadata values assigned by ICGC research projects are provided via the CGC without modification. When identifying patient cohorts for further study, researchers are encouraged to investigate the full set of available metadata values to ensure that queries return all relevant Cases, Samples, or similar.\n\n##Entities for ICGC\n\nThe following are entities for ICGC. They represent clinical data, biospecimen data, and data about ICGC files. Learn more about ICGC Data.\n\n * donor\n * exposure\n * family\n * file\n * project \n * sample\n * specimen\n * surgery\n * therapy\n\nBelow, each of these entities is followed by a table of their related properties.\n\n<div align=\"right\"><a href=\"#top\">top</a></div>\n\n##Donor\n\nThe ICGC **donor** entity represents the subject who has taken part in the investigation/program. Members of the **donor** entity can be identified by a Universally Unique Identifier (UUID). Find the properties of the **donor** entity below. Note that once you copy an ICGC file into a project on the CGC, metadata information pertaining to the **donor** entity will display under the **case** label on the file's page.\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"Age at diagnosis\",\n \"0-1\": \"Age at primary diagnosis in years.\",\n \"1-0\": \"Age at diagnosis group\",\n \"1-1\": \"Age at primary diagnosis group, range given in years.\",\n \"2-0\": \"Age at enrollment\",\n \"3-0\": \"Age at last follow up\",\n \"3-1\": \"Age (in years) at last followup.\",\n \"2-1\": \"Age (in years) at which first specimen was collected.\",\n \"4-0\": \"Cancer type prior malignancy\",\n \"5-0\": \"Disease status at last followup\",\n \"7-0\": \"ICD-10 diagnostic code\",\n \"8-0\": \"Gender\",\n \"9-0\": \"History of first degree relative\",\n \"10-0\": \"Interval of last follow up\",\n \"11-0\": \"Primary Site\",\n \"12-0\": \"Prior Malignancy\",\n \"13-0\": \"Relapse interval\",\n \"14-0\": \"Relapse type\",\n \"15-0\": \"Submitted donor ID\",\n \"16-0\": \"Survival time\",\n \"17-0\": \"Tumour stage at diagnosis\",\n \"18-0\": \"Tumour stage supplemental\",\n \"19-0\": \"Tumour staging system at diagnosis\",\n \"20-0\": \"Vital status\",\n \"21-0\": \"State\",\n \"22-0\": \"Study\",\n \"4-1\": \"ICD-10 diagnostic code for type of cancer in a prior malignancy.\",\n \"5-1\": \"Donor's last known disease status.\",\n \"7-1\": \"ICD-10 diagnostic code for donor.\",\n \"8-1\": \"Donor's biological sex. 'Other' has been removed from the controlled vocabulary due to identifiability concerns.\",\n \"9-1\": \"Indicates if the patient has a first degree relative with cancer\",\n \"10-1\": \"Interval from the primary diagnosis date to the last followup date, in days. 
ICGC requests that patients be followed up every 6 months while alive.\",\n \"11-1\": \"The anatomical site where the primary tumour is located in the organism.\",\n \"12-1\": \"Prior malignancy affecting patient.\",\n \"13-1\": \"If the donor was clinically disease free following primary therapy, and then relapse or progression (for liquid tumours) occurred afterwards, then donor_relapse_interval is the length of disease free interval, in days.\",\n \"14-1\": \"Type of relapse or progression (for liquid tumours), if applicable.\",\n \"15-1\": \"Usually a human-readable identifier, such as a number or a string that may contain metadata information.\",\n \"16-1\": \"How long has the donor survived since primary diagnosis, in days.\",\n \"17-1\": \"This is the pathological tumour stage classification made after the tumour has been surgically removed, and is based on the pathological results of the tumour and other tissues removed during surgery or biopsy. This information is not expected to be the same as donor's tumour stage at diagnosis since the pathological tumour staging information is the combination of the clinical staging information and additional information obtained during surgery. For this field, please indicate pathological tumour stage value using indicated staging system.\",\n \"18-1\": \"Optional additional staging at the time of diagnosis.\",\n \"19-1\": \"Clinical staging system used at time of diagnosis, if determined. This is supplementary to specimen’s pathological staging.\",\n \"20-1\": \"Donor's last known vital status.\",\n \"21-1\": \"Indicates the state of the donor.\",\n \"22-1\": \"The study the donor is involved in.\",\n \"6-0\": \"Donor analysis type\",\n \"6-1\": \"The type of analysis performed on the donor's sample.\"\n },\n \"cols\": 2,\n \"rows\": 23\n}\n[/block]\n\n##Exposure\n\nThe **exposure** entity represents details about a donor's antecedent environmental exposures, such as smoking history. See the table below for the clinical properties and descriptions of the **exposure** entity.\n[block:parameters]\n{\n \"data\": {\n \"0-0\": \"Alcohol history\",\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-1\": \"A response to the question that asks whether the participant has consumed at least 12 drinks of any kind of alcoholic beverage in their lifetime. See CDE (Common Data Element) Public ID: 2201918. Also: A description of an individual's current and past experience with alcoholic beverage consumption. See NCI Thesaurus Code: C81229.\",\n \"1-0\": \"Alcohol history intensity\",\n \"1-1\": \"A category to describe the patient's current level of alcohol use as self-reported by the patient. See CDE (Common Data Element) Public ID: 3457767.\",\n \"2-0\": \"Exposure intensity\",\n \"2-1\": \"Extent of the exposure. Use this field to specify intensity of exposure submitted in 'Exposure type' field.\",\n \"3-0\": \"Exposure type\",\n \"4-0\": \"Tobacco smoking history indicator\",\n \"5-0\": \"Tobacco smoking intensity\",\n \"3-1\": \"Type of exposure. This field can be used if the donor was exposed to something other than tobacco or alcohol.\",\n \"4-1\": \"Donor's smoking history.\",\n \"5-1\": \"Smoking intensity in Pack Years: Number of pack years defined as the number of cigarettes smoked per day times (x) the number of years smoked divided (/) by 20.\"\n },\n \"cols\": 2,\n \"rows\": 6\n}\n[/block]\n \n##Family\n\nThe **family** entity represents details of the family history of the donor. 
Find the properties of the **family** entity below.\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"Relationship age\",\n \"0-1\": \"Age of the donor's relative at primary diagnosis (in years).\",\n \"1-0\": \"Relationship disease\",\n \"1-1\": \"Name of the donor'zs relative's disease.\",\n \"2-0\": \"Relationship disease ICD-10\",\n \"2-1\": \"ICD-10 code of disease affecting family member specified in the 'relationship type' field.\",\n \"3-0\": \"Relationship sex\",\n \"4-0\": \"Relationship type\",\n \"6-0\": \"Relative with cancer history\",\n \"3-1\": \"Biological sex of the donor's relative\",\n \"4-1\": \"Relationship to the donor, which can be parent, sibling, grandparent, uncle/aunt, cousin, other or unknown.\",\n \"6-1\": \"Indicates whether the donor has a relative with a history of cancer.\",\n \"5-0\": \"Relationship type other\",\n \"5-1\": \"Relationship to the donor, if the relationship type is ‘other’.\"\n },\n \"cols\": 2,\n \"rows\": 7\n}\n[/block]\n##File\n\nThe **file** entity represents the data files generated as part of this study. Members of the **file** entity can be identified by a Universally Unique Identifier (UUID). Find the properties of the **file** entity below.\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"File analysis type\",\n \"0-1\": \"The type of analysis applied to the sample from the donor.\",\n \"1-0\": \"Experimental strategy\",\n \"1-1\": \"The method or protocol used to perform the laboratory analysis. See NCI Thesaurus Code: C43622.\",\n \"2-0\": \"Genome build\",\n \"2-1\": \"The reference genome or assembly (such as HG19/GRCh37 or GRCh38) to which the nucleotide sequence of a case/subject/sample can be aligned.\",\n \"3-0\": \"File size\",\n \"3-1\": \"The size of a file measured in bytes (B), kilobytes (KB), megabytes (MB), gigabytes (GB), terabytes (TB), and larger values.\",\n \"4-0\": \"Study\",\n \"4-1\": \"The study the donor is involved in.\",\n \"9-0\": \"Last known disease status\",\n \"10-0\": \"Morphology\",\n \"11-0\": \"Primary diagnosis\",\n \"12-0\": \"Prior malignancy\",\n \"13-0\": \"Progression or recurrence\",\n \"14-0\": \"New tumor event after initial treatment\",\n \"15-0\": \"Site of resection or biopsy\",\n \"16-0\": \"Tissue or organ of origin\",\n \"17-0\": \"Tumor grade\",\n \"18-0\": \"Tumor stage\",\n \"19-0\": \"Vital status\",\n \"20-0\": \"Histological diagnosis\",\n \"21-0\": \"Histological diagnosis other\",\n \"22-0\": \"Year of diagnosis\",\n \"23-0\": \"Clinical T (TNM)\",\n \"24-0\": \"Clinical M (TNM)\",\n \"25-0\": \"Clinical N (TNM)\",\n \"26-0\": \"Clinical stage\",\n \"27-0\": \"Pathologic T (TNM)\",\n \"28-0\": \"Pathologic N (TNM)\",\n \"29-0\": \"Pathologic M (TNM)\",\n \"30-0\": \"Performance status scale: Timing\",\n \"31-0\": \"Performance status scale: Karnofsky score\",\n \"32-0\": \"Performance status scale: ECOG\",\n \"33-0\": \"Tumor status\",\n \"34-0\": \"Primary therapy outcome success\",\n \"9-1\": \"The state or condition of an individual's neoplasm at a particular point in time. See CDE (Common Data Element) Public ID: 3392464.\",\n \"10-1\": \"The morphology code which describes the characteristics of the tumor itself, including its cell type and biologic activity, according to the third edition of the International Classification of Diseases for Oncology (ICD-O). 
See CDE (Common Data Element) Public ID: 3226275.\",\n \"11-1\": \"Text term for the structural pattern of cancer cells used to define a microscopic diagnosis. See CDE (Common Data Element) Public ID: 3081934.\",\n \"12-1\": \"Text term to describe the patient's history of prior cancer diagnosis and the spatial location of any previous cancer occurrence. See CDE (Common Data Element) Public ID: 3081934.\",\n \"13-1\": \"Yes/No/Unknown indicator to identify whether a patient has had a new tumor event after initial treatment. See CDE (Common Data Element) Public ID: 3121376.\",\n \"14-1\": \"A Boolean value denoting whether a neoplasm developed after the initial treatment was finished.\",\n \"15-1\": \"The topography code which describes the anatomical site of origin of the neoplasm according to the third edition of the International Classification of Diseases for Oncology (ICD-O). See NCI Thesaurus Code: C37978. See CDE (Common Data Element) Public ID: 3226281.\",\n \"16-1\": \"The text term that describes the anatomic site of the tumor or disease. See CDE (Common Data Element) Public ID: 3226281.\",\n \"17-1\": \"The numeric value to express the degree of abnormality of cancer cells, a measure of differentiation and aggressiveness. See CDE (Common Data Element) Public ID: 2785839.\",\n \"18-1\": \"The extent of a cancer in the body. Staging is usually based on the size of the tumor, whether lymph nodes contain cancer, and whether the cancer has spread from the original site to other parts of the body. NCI Thesaurus Code: C16899; also see NCI Thesaurus Code: C28257 for Pathological stage.\",\n \"19-1\": \"The state of being living or deceased for Cases that are part of the investigation. See NCI Thesaurus Code: C25717.\",\n \"20-1\": \"The diagnosis of a disease based on the type of tissue as determined based on the microscopic examination of the tissue. See NCI Thesaurus Code: C61478.\",\n \"21-1\": \"Additional options for histologics diagnosis (see Histologic diagnosis), which have not been pre-determined in the listed values for histologic diagnosis.\",\n \"22-1\": \"The numeric value to represent the year of an individual's initial pathologic diagnosis of cancer. See CDE (Common Data Element) Public ID: 2896960.\",\n \"23253840.\",\n \"2425385.\",\n \"2525384.\",\n \"26-1\": \"The extent of a cancer in the body. Staging is usually based on the size of the tumor, whether lymph nodes contain cancer, and whether the cancer has spread from the original site to other parts of the body. See CDE (Common Data Element) Public ID: 5243162.\",\n \"2748739.\",\n \"2848740.\",\n \"2948741.\",\n \"30-1\": \"A time reference for the Karnofsky score and/or the ECOG score using the defined categories.\",\n \"31-1\": \"An index designed for classifying patients 16 years of age or older by their functional impairment. A standard way of measuring the ability of cancer patients to perform ordinary tasks. NCI Thesaurus Code: C28013.\",\n \"32-1\": \"A performance status scale designed to assess disease progression and its effect on the daily living abilities of the patient. NCI Thesaurus Code: C105721.\",\n \"34-1\": \"A value denoting the result of therapy for a given disease or condition in a patient or group of patients. See NCI Thesaurus Code: C18919.\",\n \"33-1\": \"The condition or state of the tumor at a particular time. See NCI Thesaurus Code: C96643.\",\n \"5-0\": \"Access level\",\n \"5-1\": \"A Boolean value indicating Controlled Data or Open Data. 
Controlled Data is data from public datasets that has limitations on use and requires approval. Open Data is data from public datasets that doesn't have limitations on its use.\",\n \"6-0\": \"File name\",\n \"6-1\": \"FIle name.\",\n \"7-0\": \"External file ID\",\n \"7-1\": \"An identifier pointing to an external file.\",\n \"8-0\": \"External object ID\",\n \"8-1\": \"An identifier pointing to an external object.\"\n },\n \"cols\": 2,\n \"rows\": 9\n}\n[/block]\n##Project\n\nThe **project** entity represents the project that generated the data. Members of the **project** entity can be identified by a Project Identifier which is generated from the project name (e.g. Breast Triple Negatice/Lobular Cander - UK BRCA-UK).\n\nFind the properties of the **project** entity below. Note that once you copy an ICGC file into a project on the CGC, metadata information pertaining to the **project** entity will display under the investigation label on the file's page.\n[block:parameters]\n{\n \"data\": {\n \"0-0\": \"Partner country\",\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-1\": \"Partner country of the cancer project.\",\n \"1-0\": \"Primary country\",\n \"1-1\": \"Lead country of the cancer project.\",\n \"2-0\": \"Primary site\",\n \"2-1\": \"The anatomical site where the primary tumour is located in the organism.\",\n \"3-0\": \"Project name\",\n \"3-1\": \"Name of the project which generated the data.\",\n \"4-0\": \"Pubmed ID\",\n \"4-1\": \"ID of the publication at.\",\n \"5-0\": \"State\",\n \"5-1\": \"Indicates the state.\",\n \"6-0\": \"Tumour type\",\n \"6-1\": \"The type of the cancer studied.\",\n \"7-0\": \"Tumour subtype\",\n \"7-1\": \"Information about tumour type.\",\n \"8-0\": \"Shortest dimension\",\n \"8-1\": \"The shortest dimension of sample/specimen (in centimeters).\",\n \"9-0\": \"Initial weight\",\n \"9-1\": \"Initial sample/specimen weight (in grams).\",\n \"10-0\": \"Current weight\",\n \"10-1\": \"Current sample/specimen weight (in grams).\",\n \"11-0\": \"Freezing method\",\n \"11-1\": \"The freezing method for sample/specimen.\",\n \"12-0\": \"OCT embedded\",\n \"12-1\": \"A boolean value indicating whether the Optimal Cutting Temperature compound (OCT) is used to embed tissue samples prior to frozen sectioning on a microtome-cryostat.\",\n \"13-0\": \"Time between clamping and freezing\",\n \"13-1\": \"Time elapsed (in minutes) between clamping (supplying vessel) and freezing a sample.\",\n \"14-0\": \"Time between excision and freezing\",\n \"14-1\": \"Warm ischemia time, elapsed between clamping and freezing a sample, as denoted in minutes.\",\n \"15-0\": \"Days to collection\",\n \"15-1\": \"Days to sample collection. Sample can be collected can be prospectively or retrospectively. This can be a negative value for samples taken retrospectively.\",\n \"16-0\": \"Days to sample procurement\",\n \"16-1\": \"Number of days from the date the patient was initially diagnosed pathologically with the disease to the date of the procedure that produced the malignant sample for submission.\",\n \"17-0\": \"Is FFPE\",\n \"17-1\": \"A boolean value that denotes whether tissue samples used in the analysis were formalin-fixed paraffin-embedded (FFPE).\"\n },\n \"cols\": 2,\n \"rows\": 8\n}\n[/block]\n##Sample\n\nThe **sample** entity represents samples or specimen material taken from a biological entity for testing, diagnosis, propagation, treatment, or research purposes. 
For instance, samples include tissues, body fluids, cells, organs, embryos, and body excretory products. Members of the **sample** entity can be identified by a Universally Unique Identifier (UUID). Find the properties of the **sample** entity below.\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"Submitted sample ID\",\n \"0-1\": \"Usually a human-readable identifier, such as a number or a string that may contain metadata information. In some instances, this can also be a UUID. Note that once you copy an ICGC file into a project on the CGC, metadata information pertaining to the **Sample ID** property will display under the **Aliquot Sample ID** and **Portion Sample ID** labels on the file's page.\",\n \"1-0\": \"Analyzed sample interval\",\n \"1-1\": \"Interval from specimen acquisition to sample use in an analytic procedure (e.g. DNA extraction), in days.\",\n \"2-0\": \"Study\",\n \"2-1\": \"Study donor is involved in.\",\n \"3-0\": \"Level of cellularity\",\n \"3-1\": \"The proportion of tumour nuclei to total number of nuclei in a given specimen/sample. If exact percentage cellularity cannot be determined, the submitter has the option to use this field to specify a level that defines a range of percentage\",\n \"4-0\": \"Percentage of cellularity\",\n \"5-0\": \"Height\",\n \"6-0\": \"Weight\",\n \"7-0\": \"Years smoked\",\n \"4-1\": \"The ratio of tumour nuclei to total number of nuclei in a given specimen/sample.\",\n \"5-1\": \"The height of the patient in centimeters. See CDE (Common Data Element) Public ID: 649.\",\n \"7-1\": \"The numeric value (or unknown) to represent the number of years a person has been smoking. See CDE (Common Data Element) Public ID: 3137957.\",\n \"6-1\": \"The weight of the patient measured in kilograms. See CDE (Common Data Element) Public ID: 651.\"\n },\n \"cols\": 2,\n \"rows\": 5\n}\n[/block]\n###Specimen\n\nThe **specimen** entity represents information about a specimen that was obtained from a donor. There may be several specimens per donor that were obtained concurrently or at different times. Find the properties of the **specimen** entity below.\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"Digital image of stained section\",\n \"0-1\": \"Linkout(s) to digital image of a stained section, demonstrating a representative section of tumour.\",\n \"1-0\": \"Level of cellularity\",\n \"1-1\": \"The proportion of tumour nuclei to total number of nuclei in a given specimen/sample. If exact percentage cellularity cannot be determined, the submitter has the option to use this field to specify a level that defines a range of percentage.\",\n \"2-0\": \"Percentage of cellularity\",\n \"2-1\": \"The ratio of tumour nuclei to total number of nuclei in a given specimen/sample.\",\n \"3-0\": \"Submitted specimen ID\",\n \"3-1\": \"Usually a human-readable identifier, such as a number or a string that may contain metadata information. In some instances, this can also be a UUID. 
Note that once you copy an ICGC file into a project on the CGC, metadata information pertaining to the **Submitted specimen ID** property will display under the **Sample Submitter ID **label on the file's page.\",\n \"4-0\": \"Specimen available\",\n \"4-1\": \"Whether additional tissue is available for followup studies.\",\n \"5-0\": \"Specimen biobank\",\n \"5-1\": \"If the specimen was obtained from a biobank, provide the biobank name here.\",\n \"6-0\": \"Specimen biobank ID\",\n \"6-1\": \"If the specimen was obtained from a biobank, provide the biobank accession number here.\",\n \"7-0\": \"Specimen processing\",\n \"7-1\": \"Description of technique used to process specimen.\",\n \"8-0\": \"Specmen processing other\",\n \"8-1\": \"If other technique specified for specimen processing, may indicate technique here.\",\n \"10-0\": \"Specimen storage\",\n \"10-1\": \"Description of how the specimen was stored.\",\n \"12-0\": \"Specimen type\",\n \"12-1\": \"Controlled vocabulary description of specimen type.\",\n \"14-0\": \"Treatment type\",\n \"14-1\": \"Type of treatment the donor received prior to specimen acquisition.\",\n \"15-0\": \"Treatment type other\",\n \"15-1\": \"Freetext description of the treatment type.\",\n \"16-0\": \"Tumour confirmed\",\n \"16-1\": \"Whether tumour was confirmed in the specimen as malignant by histological examination.\",\n \"17-0\": \"Tumour grade\",\n \"18-0\": \"Tumour grading system\",\n \"19-0\": \"Tumour stage supplemental\",\n \"20-0\": \"Tumour histological type\",\n \"17-1\": \"Tumour grade using indicated grading system.\",\n \"18-1\": \"Name of the tumour grading system.\",\n \"19-1\": \"Optional additional staging. For donor, it should be at the time of diagnosis.\",\n \"20-1\": \"WHO International Histological Classification of Tumours code.\",\n \"9-0\": \"Specimen interval\",\n \"9-1\": \"Interval (in days) between specimen acquisition both for those that were obtained concurrently and those obtained at different times.\",\n \"11-0\": \"Specimen storage other\",\n \"11-1\": \"If other types of storage are specified for specimen storage, may indicate technique here.\",\n \"13-0\": \"Specimen type other\",\n \"13-1\": \"Free text description of the specimen type.\",\n \"21-0\": \"Tumour stage\",\n \"21-1\": \"This is the pathological tumour stage classification made after the tumour has been surgically removed, and is based on the pathological results of the tumour and other tissues removed during surgery or biopsy.\\n\\nThis information is not expected to be the same as the donor's tumour stage at diagnosis since the pathological tumour staging information is the combination of the clinical staging information and additional information obtained during surgery.\\n\\nFor this field, please indicate pathological tumour stage value using the indicated staging system.\",\n \"22-0\": \"Tumour stage supplemental\",\n \"22-1\": \"Optional additional staging.\",\n \"23-0\": \"Tumour staging system\",\n \"23-1\": \"Nam e of the tumour staging system used.\"\n },\n \"cols\": 2,\n \"rows\": 24\n}\n[/block]\n###Surgery\n\nThe **surgery** entity represents details about surgical procedures undergone by the donor. Find the properties of the **surgery** entity below.\n\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"1-0\": \"Procedure site\",\n \"0-0\": \"Procedure interval\",\n \"0-1\": \"Interval between primary diagnosis and procedure, in days.\",\n \"1-1\": \"Anatomical site of the procedure. 
This must use a standard controlled vocabulary which should be reported in advance to the DCC.\",\n \"2-0\": \"Procedure type\",\n \"2-1\": \"Controlled vocabulary description of the procedure type. Vocabulary can be extended by disease-specific projects. Prefix extensions with 3-digit center code, e.g. 008.1 Beijing Cancer Hospital, fine needle aspiration of primary.\",\n \"4-0\": \"Amount\",\n \"4-1\": \"Amount of a product (in μg) prepared for an analysis.\",\n \"5-0\": \"Concentration\",\n \"5-1\": \"Concentration of a product (in μg/μL) prepared for an analysis.\",\n \"6-0\": \"a260_a280 ratio\",\n \"6-1\": \"A numerical value denoting purity assessment using the A260/A280 Ratios.\",\n \"7-0\": \"Well number\",\n \"7-1\": \"The number of wells on the plate in which an analyte has been stored for shipment and for the analysis.\",\n \"8-0\": \"Spectrophotometer method\",\n \"8-1\": \"A method of quantifying the content of nucleic acids in any sample.\",\n \"3-0\": \"Resection status\",\n \"3-1\": \"One of three possible categories that describes the presence or absence of residual tumour following surgical resection.\"\n },\n \"cols\": 2,\n \"rows\": 4\n}\n[/block]\n\n###Therapy\n\nThe **therapy** entity represents details about the type and duration of the therapy the donor received. Find the properties of the **therapy** entity below.\n\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Property\",\n \"h-1\": \"Description\",\n \"0-0\": \"First therapy duration\",\n \"0-1\": \"Duration of first postresection therapy, in days.\",\n \"1-0\": \"First therapy response\",\n \"1-1\": \"The clinical effect of the first postresection therapy.\",\n \"2-0\": \"First therapy start interval\",\n \"2-1\": \"Interval between primary diagnosis and initiation of the first postresection therapy, in days.\",\n \"3-0\": \"First therapy therapeutic intent\",\n \"4-0\": \"First therapy type\",\n \"5-0\": \"Other therapy\",\n \"3-1\": \"The therapeutic intent of the first postresection therapy.\",\n \"4-1\": \"Type of first postresection therapy (i.e. therapy given to the patient after the sample was removed from the patient).\",\n \"5-1\": \"Other postresection therapy.\",\n \"6-0\": \"Other therapy response\",\n \"7-0\": \"Second therapy duration\",\n \"8-0\": \"Second therapy response\",\n \"9-0\": \"Second therapy start interval\",\n \"10-0\": \"Second therapy therapeutic intent\",\n \"11-0\": \"Second therapy type\",\n \"6-1\": \"The clinical effect of the other postresection therapy.\",\n \"7-1\": \"Duration of second postresection therapy, in days.\",\n \"8-1\": \"The clinical effect of the second postresection therapy.\",\n \"9-1\": \"Interval between primary diagnosis and initiation of the second postresection therapy, in days.\",\n \"10-1\": \"The therapeutic intent of the second postresection therapy.\",\n \"11-1\": \"Type of second postresection therapy (ie. therapy given to the patient after the sample was removed from the patient).\"\n },\n \"cols\": 2,\n \"rows\": 12\n}\n[/block]","updates":[],"order":21,"isReference":false,"hidden":false,"sync_unique":"","link_url":"","link_external":false,"_id":"5a45230e51d5d600120ed73-12-28T16:59:58.538Z","githubsync":"","__v":0,"parentDoc":null}
https://docs.cancergenomicscloud.org/docs/icgc-metadata
2021-07-24T02:33:30
CC-MAIN-2021-31
1627046150067.87
[]
docs.cancergenomicscloud.org
With User Frontend for Elementor you can create different types of forms to create different things.
- Post form – you can create a post of any post type with this.
- User form
- Category form
- Taxonomy form – you can create a term of any taxonomy with this.
- Tags form – you can create a tag with this.
- Settings form.

Let’s dive into creating a basic form first. For this, the very first thing is enabling the Elementor page builder for the Forms post type, so that we can design the form layout with the Elementor page builder. You can do this from Elementor > Settings in the admin menu.

Now, from your site’s admin panel go to “User Frontend for Elementor > Add New”. You will find an interface like any other post creation page. Click “Edit with Elementor” to design the form layout with the Elementor page builder, just how you want it. User Frontend for Elementor comes with numerous form widgets for you. Use them as per your need.

Let’s create a Post form with basic fields.
https://docs.cybercraftit.com/docs/user-frontend-for-elementor/create-form/
2021-07-24T00:24:31
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.cybercraftit.com/wp-content/uploads/2019/12/Elementor-‹-WP5-2-4-—-WordPress.png', None], dtype=object) array(['https://docs.cybercraftit.com/wp-content/uploads/2019/12/Elementor-Elementor-794-1024x475.png', None], dtype=object) array(['https://docs.cybercraftit.com/wp-content/uploads/2019/12/Elementor-Elementor-794-1.png', None], dtype=object) ]
docs.cybercraftit.com
Unleash Proxy

The unleash-proxy is compatible with all Unleash Enterprise versions and Unleash Open-Source v4. You should reach out to [email protected] if you want the Unleash Team to host the Unleash Proxy for you.

A lot of our users wanted to use feature toggles in their single-page and native applications. To solve this in a performant and privacy-concerned way we built the Unleash Proxy.

The Unleash Proxy sits between the Unleash API and the application. It provides a simple and super-fast API, as it has all the data it needs available in memory.

The proxy solves three important aspects:

- Performance – The proxy will cache all toggles in memory, and will be running on the edge, close to your end-users. A single instance will be able to handle thousands of requests/sec, and you can scale it easily by adding additional instances.
- Security – The proxy evaluates the feature flags for the user on the server-side, and only exposes the results of enabled feature flags for a specific user.
- Privacy – If you run the proxy yourself (we can host it as well though) we will not see your end users. This means that you still have full control of your end-users, the way it should be!

The Unleash Proxy uses the Unleash SDK and exposes a simple API. The Proxy will synchronize with the Unleash API in the background and provide a simple HTTP API for clients.

#How to Run the Unleash Proxy

The Unleash Proxy is Open Source and available on GitHub. You can either run it as a docker image or as part of a node.js express application. The easiest way to run the Unleash Proxy is via Docker. We have published a docker image on docker hub.

Step 1: Pull

Step 2: Start

You should see the following output:

Step 3: Verify

In order to verify the proxy you can use curl and see that you get a few evaluated feature toggles back:

Expected output would be something like:

Health endpoint

The proxy will try to synchronize with the Unleash API at startup; until it has successfully done that, the proxy will return HTTP 503 - Not Ready for all requests. You can use the health endpoint to validate that the proxy is ready to receive requests:

There are many more configuration options available. You can find all available options on GitHub.

#Unleash Proxy API

The Unleash Proxy has a very simple API. It takes the Unleash Context as input and will return the feature toggles relevant for that specific context.

#We care about Privacy!

The Unleash Proxy is important because you should not expose your entire toggle configuration to your end users! Single-page apps work in the context of a specific user. The proxy will only return the evaluated toggles (with variants) that should be enabled for those specific users in that specific context.

Most of our customers prefer to run the Unleash Proxy themselves. PS! We actually prefer this as we don’t want to see your users. Running it is pretty simple: it is either a small Node.js process you start or a docker image you use. (We can of course host the proxy for you also.)

#How to connect to the Proxy?

The Unleash Proxy does the heavy lifting of evaluating toggles and only returns enabled toggles and their values to the client. This means that you can get away with a simple HTTP client in many common use cases. However, in some settings you would like a bit more logic around it to make it as fast as possible, and keep up to date with changes.
- JavaScript Proxy SDK
- Android Proxy SDK
- iOS Proxy SDK
- React SDK (coming soon)
- React Native SDK (coming soon)

The proxy is also an ideal fit for serverless functions such as AWS Lambda. In that scenario the proxy can run in a small container near the serverless function, preferably in the same VPC, giving the lambda extremely fast access to feature flags at a predictable cost.
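If your platform has no dedicated proxy SDK, a plain HTTP client is enough to follow the pattern described above. The sketch below is a minimal example using Python's requests library; the proxy URL, the /proxy path, the client key, and the context parameters are assumptions for illustration and depend on how your proxy instance is configured.

import requests

PROXY_URL = "http://localhost:3000/proxy"   # assumed proxy address
CLIENT_KEY = "some-secret"                  # assumed client key configured on the proxy

def get_enabled_toggles(user_id: str) -> list:
    """Ask the proxy which toggles are enabled for this user context."""
    response = requests.get(
        PROXY_URL,
        headers={"Authorization": CLIENT_KEY},
        params={"userId": user_id},  # Unleash Context fields go in the query string
        timeout=2,
    )
    response.raise_for_status()
    return response.json().get("toggles", [])

if __name__ == "__main__":
    for toggle in get_enabled_toggles(user_id="123"):
        print(toggle["name"], toggle.get("variant"))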
https://docs.getunleash.io/sdks/unleash-proxy/
2021-07-24T02:45:31
CC-MAIN-2021-31
1627046150067.87
[array(['/assets/images/The-unleash-proxy-df05d1a9b1c7beb796416a16d1b9f951.png', 'The Unleash Proxy'], dtype=object) ]
docs.getunleash.io
3. Click Add new API key at the bottom of the page.

Client keys

4a. If you're configuring an SDK, select Client in the pop-up, and give the key an identifying name that allows you to recognize it later.

5a. Copy the Secret column and add this to your client.

Admin operations

4b. If you're going to be using the admin interface via curl, you'll need a key with Admin rights. Select Admin in the Add new API key popup. Remember to give the key a username that allows you to recognize the key in audit logs later.

5b. Copy the key in the Secret column and use it in your authorization header. For curl, that would be -H "Authorization: <yoursecrethere>"
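The same Authorization header works from any HTTP client, not just curl. Below is a minimal sketch using Python's requests library; the Unleash URL and the /api/admin/features path are assumptions for illustration, so check the Admin API reference for the exact endpoints available in your version.

import requests

UNLEASH_URL = "https://unleash.example.com"   # assumed Unleash instance URL
ADMIN_TOKEN = "<yoursecrethere>"              # the secret copied in step 5b

# List feature toggles through the admin API, authenticating with the admin
# API key in the Authorization header (the same header curl would send).
response = requests.get(
    f"{UNLEASH_URL}/api/admin/features",
    headers={"Authorization": ADMIN_TOKEN},
    timeout=5,
)
response.raise_for_status()
for feature in response.json().get("features", []):
    print(feature["name"], "enabled:", feature["enabled"])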
https://docs.getunleash.io/user_guide/api-token/
2021-07-24T01:00:02
CC-MAIN-2021-31
1627046150067.87
[]
docs.getunleash.io
Create Luos containers

As a developer you will always develop your functionalities into containers and never into the main() program.

Warning: Make sure to read and understand how to Create Luos projects before reading this page.

How to create and initialize a container

To create a container, you have to call this function:

container_t* Luos_CreateContainer(void* callback, container_type_t type, char* default_alias, revision_t revision);

The returned container_t* is a pointer to a container structure that you will use to make your container act on the network after this initialization.

callback is a pointer to a callback function called by Luos when your container receives messages from other containers (see the Message Handling configuration section for more details). This function needs to have a specific format:

void Container_MsgHandler(container_t *container, msg_t *msg)

- container is the container pointer of the container receiving the data (basically, it is your container).
- msg is the message your container received.

type is the type of your new container, represented by a number. Some basic types (e.g. DISTANCE_MOD, VOLTAGE_MOD, etc.) are already available in the container_type_t enum structure of Luos. You can also create your own on top of the Luos ones.

default_alias is the default alias for your new container, e.g. Mycontainer02. This alias is the one your container will use if no other alias is set by the user of the functionality hosted in your container. Aliases have a maximum size of 16 characters.

revision is the revision number of the container you are creating; it will be accessible via Pyluos.

Following the project rules, here is a code example for a button container:

revision_t ButtonRevision = {.unmap = {0,0,7}};
container_t* container_btn;

static void Button_MsgHandler(container_t *container, msg_t *msg)
{
    // Manage received messages
}

void Button_Init(void)
{
    container_btn = Luos_CreateContainer(Button_MsgHandler, STATE_MOD, "button_mod", ButtonRevision);
}

void Button_Loop(void)
{
}

Containers categories

To make your development as clean as possible, you have to understand in which category (Driver or App) each container of the project belongs. By following the category guidelines, you will be able to build clean and reusable functionalities.

Drivers guidelines

A driver is a type of container that handles hardware. Motors, distance sensors, and LEDs are all drivers.

When designing a driver, keep the following rules in mind:

- A driver container always uses a standard Luos type to be usable by any other containers.
- A driver container always uses standard object dictionary structures (sets of objects based on the SI metric system that can be transmitted through Luos messages and easily converted into other units) to be usable by any other containers.
- A driver container never depends on or uses any other containers (driver or app).
- A driver container is "dumb", as it can't do anything other than manage its hardware feature (but it does it very well).

You can have multiple driver containers on the same node (the hardware element (MCU) hosting and running Luos and hosting one or several containers) managing different hardware functionalities of your board; it is up to you to sort them depending on your design.

Apps guidelines

An application or app is a type of container that only manages software items such as functions or algorithms. Apps use other containers to make your device act, operate, and behave.
Apps can be placed in any node on a Luos network without any hardware or code modifications. However, the choice of the hosting node can impact the overall performance of the system.

When designing an app, keep the following rules in mind:
- An app can't have hardware dependencies.
- An app can use custom container types.
- An app must use standard object dictionary structures. If the structures used are not standard, Gate containers could be completely unable to manage them.

Apps are the embedded smartness of your device, and at least one of them should run a network detection in order to map every container in every node of your device and make it work properly. Go to the Routing table page for more information.
https://docs.luos.io/pages/embedded/containers/create-containers.html
2021-07-24T02:31:59
CC-MAIN-2021-31
1627046150067.87
[]
docs.luos.io
Document.SetLetterContent method (Word) Inserts the contents of the specified LetterContent object into a document. Syntax expression. SetLetterContent( _LetterContent_ ) expression Required. A variable that represents a Document object. Parameters Remarks This method is similar to the RunLetterWizard method except that it doesn't display the Letter Wizard dialog box. The method adds, deletes, or restyles letter elements in the specified document based on the contents of the LetterContent object. Example This example retrieves the Letter Wizard elements from the active document, changes the attention line text, and then uses the SetLetterContent method to update the active document to reflect the changes. Set myLetterContent = ActiveDocument.GetLetterContent myLetterContent.AttentionLine = "Greetings" ActiveDocument.SetLetterContent LetterContent:=myLetterContent See also Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/word.document.setlettercontent
2021-07-24T03:07:43
CC-MAIN-2021-31
1627046150067.87
[]
docs.microsoft.com
Get Secret Key A secret key is generated for every account and is used in API requests. The user receives this secret key and can then access every API that requires it. The POST request is sent over HTTPS to the endpoint. Sample Request Sample Response How to generate the verification hash? The verification hash has to be calculated over the following combination using the SHA256 algorithm and must be sent along with the authentication parameters in each server-to-server request. The parameters required for creating the hash, in order, are: <secKey><customerId><walletOwnerId> Sample Code (an illustrative sketch is shown at the end of this section) Request Parameters This reference lists all the standard flow parameters to be sent in the request. Response Parameters This reference lists all the standard flow parameters received in the response.
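As a sample code sketch for the verification hash described above: assuming the hash is the SHA256 hex digest of the plain concatenation <secKey><customerId><walletOwnerId>, a Python version could look like this (the values are placeholders).

```python
import hashlib

# Placeholder values; use the real secret key, customer ID and wallet owner ID.
sec_key = "SEC_KEY"
customer_id = "CUSTOMER_ID"
wallet_owner_id = "WALLET_OWNER_ID"

# Concatenate in the documented order: <secKey><customerId><walletOwnerId>
payload = sec_key + customer_id + wallet_owner_id
verification_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
print(verification_hash)
```

Whether the digest must be sent hex-encoded or in some other encoding is not stated here, so check the computed value against the API's sample response.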
https://docs.transactworld.co.uk/wallet/get-secret-key-trans.php
2021-07-24T01:51:56
CC-MAIN-2021-31
1627046150067.87
[array(['../assets/img/ajax-loader.gif', None], dtype=object) array(['../assets/img/ajax-loader.gif', None], dtype=object) array(['../assets/img/ajax-loader.gif', None], dtype=object)]
docs.transactworld.co.uk
3.5. Hooks¶ Hooks are python callables that live in a module specified by hooksfile in the config. Per default this points to ~/.config/alot/hooks.py. 3.5.1. Pre/Post Command Hooks¶ For every COMMAND in mode MODE, the callables pre_MODE_COMMAND() and post_MODE_COMMAND() – if defined – will be called before and after the command is applied respectively. In addition callables pre_global_COMMAND() and post_global_COMMAND() can be used. They will be called if no specific hook function for a mode is defined. The signature for the pre-send hook in envelope mode for example looks like this: Consider this pre-hook for the exit command, that logs a personalized goodbye message: import logging from alot.settings.const import settings def pre_global_exit(**kwargs): accounts = settings.get_accounts() if accounts: logging.info('goodbye, %s!' % accounts[0].realname) else: logging.info('goodbye!') 3.5.2. Other Hooks¶ Apart from command pre- and posthooks, the following hooks will be interpreted: reply_prefix(realname, address, timestamp[, message=None, ui= None, dbm=None])¶ Is used to reformat the first indented line in a reply message. This defaults to ‘Quoting %s (%s)n’ % (realname, timestamp)’ unless this hook is defined forward_prefix(realname, address, timestamp[, message=None, ui= None, dbm=None])¶ Is used to reformat the first indented line in a inline forwarded message. This defaults to ‘Forwarded message from %s (%s)n’ % (realname, timestamp)’ if this hook is undefined pre_edit_translate(text[, ui= None, dbm=None])¶ Used to manipulate a message’s text before the editor is called. The text might also contain some header lines, depending on the settings edit_headers_whitelist and edit_header_blacklist. post_edit_translate(text[, ui= None, dbm=None])¶ used to manipulate a message’s text after the editor is called, also see pre_edit_translate touch_external_cmdlist(cmd, shell=shell, spawn=spawn, thread=thread)¶ used to change external commands according to given flags shortly before they are called. sanitize_attachment_filename(filename=None, prefix='', suffix='')¶ returns prefix and suffix for a sanitized filename to use while opening an attachment. The prefix and suffix are used to open a file named prefix + XXXXXX + suffix in a temporary directory. loop_hook(ui=None)¶ Run on a period controlled by _periodic_hook_frequency
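As an example of one of these hooks, a custom reply_prefix in ~/.config/alot/hooks.py might look like the following sketch; it only uses the parameters documented above and changes the wording of the default quote line.

```python
def reply_prefix(realname, address, timestamp, **kwargs):
    # Default behaviour is: 'Quoting %s (%s)\n' % (realname, timestamp)
    # This variant also includes the sender's address.
    return 'On %s, %s <%s> wrote:\n' % (timestamp, realname, address)
```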
https://alot.readthedocs.io/en/latest/configuration/hooks.html
2021-04-10T21:55:36
CC-MAIN-2021-17
1618038059348.9
[]
alot.readthedocs.io
Question: SKUs were deleted a few months ago and we want to list the products again, but we are getting a 404 error upon creation. We tried pushing the SKUs but we still get an error. When we create the SKUs manually, they do sync. Is there a rule, or a conflict, with Shopify trying to "recreate" an SKU after it has been deleted in Shopify? Answer: When updating an item in Shopify, it requires an ID (called a Shopify product ID), which is similar to the NetSuite internal ID. This ID is auto-populated either when an item is created via the Integration App or when a mass Item ID update flow is run. This linking between Shopify and NetSuite is done via a custom record which is created for each Shopify store and contains IDs as shown below: Once this record is created for any item, any future updates to product or inventory information are done using these IDs only. If the item is deleted on Shopify, this item ID map needs to be updated to reflect that (ideally it should be deleted) to avoid those 404 errors, as the old ID is no longer present in Shopify. We do not support deletes at this moment; however, the mass update flow can update the new IDs if an item has been deleted and re-created directly in Shopify. If the item had to be deleted from Shopify and needs to be created via the Integration App, you will have to delete this custom record (which will make the Integration App assume that it is a new item) for that store and run the item export flow.
https://docs.celigo.com/hc/en-us/articles/115007611087--Recreating-SKU-s-That-Have-Been-Deleted-in-Shopify
2021-04-10T22:15:24
CC-MAIN-2021-17
1618038059348.9
[array(['/hc/article_attachments/115013255528/image-6.png', 'image-6.png'], dtype=object) ]
docs.celigo.com
Release Notes Cloud Upgrade on July 03, 2020 New Features Realize Storage Insights - Subscribe to saved filters Realize administrators can now subscribe to saved filters and be informed about the non-critical data growth in their organization on a regular basis. Filters can also be subscribed by non-Realize administrators or other staff in the organization. For subscribe instructions, see About Realize Storage Insights. Recommendations Search Recommendations using Backup Set or Server Name Administrators can now search and view particular Recommendations using the Backup Set or Server name. Sort Recommendations By default, Recommendations are listed in order with the backup set having the largest amount of non-critical data at the top and the least at the bottom. Now, administrators. For more information, see About Recommendations. Known Issue Cloud Upgrade on Jun 19, 2020 Enhancement Druva Realize is now enabled for Product Cloud Administrators In addition to Druva Cloud Administrators, now Phoenix Cloud administrators can access Realize Storage Insights and Recommendations services. New Features Realize Storage Insights - More options to export Realize Storage Insights view In addition to PDF, Realize administrators can now export Realize Storage Insights view using the following options - Email - Share Realize Storage Insights view over Email as a PDF attachment. CSV - Export Realize Storage Insights view in CSV format. Administrators can have a more granular view of the data or use the CSV to ingest in 3rd party applications for further processing and gain additional insights. Recommendations - Exclude non-critical file types from Phoenix Content Rule using Recommendations Administrators can now update the Phoenix Content Rule associated with a backup set directly from a Recommendation and exclude the non-critical file types from backup. Administrators have option to exclude the non-critical files either from a particular backup set or all the backup sets to which a Phoenix Content Rule is associated. Use the Exclude Non Critical Data feature in a Recommendation to exclude the file types. After update, non-critical file types are excluded from backup starting the next backup cycle. For more information, see About Recommendations. Cloud Upgrade on May 21, 2020 New Feature Realize Storage Insights - Option to export Realize Storage Insights view as a PDF file Administrators can now export their Realize Storage Insights view to a PDF file. This feature helps administrators print their view or share it with other administrators for collaboration purpose. See Realize Storage Insights. UI Enhancements Recommendations Based on customer interaction with Recommendations, we have updated to make it more intuitive and clear layout to display non-critical data information and enable you to actions. See Recommendations. Cloud Upgrade on April 23, 2020 New Feature Introducing Audit Trails - Log and monitor activities performed by administrators Audit Trails captures the activities performed by the administrators in the Realize Management Console and logs it. Audit Trails helps track and monitor all the activities ensuring complete transparency, traceability, and accountability of all the administrators, thereby aiding forensics and compliance initiatives. Audit Trails captures details such as the name of the administrator who performed the action, the activity performed, the resource on which the activity was performed and the timestamp of when the action was performed. 
For more information, see Audit Trails. Audit Trails are available for actions performed by administrators on the Recommendations service.
https://docs.druva.com/Druva_Realize/005_Release_Details/Release_Notes
2021-04-10T22:21:32
CC-MAIN-2021-17
1618038059348.9
[array(['https://docs.druva.com/@api/deki/files/56463/si_subs_rn.jpg?revision=1&size=bestfit&width=656&height=558', 'si_subs_rn.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/56392/recomm_enhancements.jpg?revision=1', 'recomm_enhancements.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/56393/recomm_sortby.jpg?revision=1', 'recomm_sortby.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/56328/export_email.jpg?revision=1', 'export_email.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/56338/recomm_resolve_real.jpg?revision=1', 'recomm_resolve_real.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/54801/exportpdf.jpg?revision=1', 'exportpdf.jpg'], dtype=object) array(['https://docs.druva.com/@api/deki/files/54802/info.jpg?revision=1', 'info.jpg'], dtype=object) ]
docs.druva.com
Data Types This page defines data types that are commonly used by many External API requests. The page is divided in several sections that start with general information about how the types are supposed to work. - Data Types Basic This section defines basic data types used in the External API specification. Extended This section describes extended data types used in the External API specification. array The External API makes use of JSON arrays often. We indicate the element type in the brackets, e.g. array[string]. The empty array [] is allowed. Examples ["String1", "String2"] [] object The External API makes use of JSON objects. The empty object {} might be allowed. Examples { "myObject1": { "key": "value"; }, "myObject2": { "array": [1, 2, 3] }, "valid": false } {} file This data type indicates that a file is needed as parameter. flavor Opencast uses flavors to locate tracks, attachments and catalogs associated to events. Flavors have a type and a subtype and are written in the form type + "/" + subtype. flavor ::= type + "/" + subtype whereas both type and subtype consist of ([a-z][A-Z][1-9])+([a-z][A-Z][1-9][+-])* Example dublincore/episode property Opencast often uses sets of key-value pairs to associate properties to objects. The External API uses JSON objects to represent those properties. Both the name of the JSON object field and its value are of type string. Example { "key": "value", "live": "true" } Date and Time Examples 2018-03-11 2018-03-11T13:23:51Z Recurrence Rule To define a set of recurring events within a given time period, Opencast uses recurrence rules. For more details about reccurrence rules, please refer to the Internet Calendaring and Scheduling Core Object Specification (iCalendar). Example "rrule":"FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=16;BYMINUTE=0" Please note that BYHOUR is specified in UTC. Metadata Catalogs The External API is designed to take full advantage of the powerful metadata facilities of Opencast. Opencast distinguishes between bibliographic metadata and technical metadata: - Bibliographic metadata is supposed to describe the associated objects and to be used to present those objects to endusers (e.g. in a video portal). - Technical metadata is used by Opencast to manage objects (e.g. permissions, processing, scheduling) For events and series, Opencast manages bibliographic metadata in metadata catalogs. There are two kind of metadata catalogs: - The default metadata catalogs for events ( dublincore/episode) and series ( dublincore/series) - An arbitrary number of configurable extended metadata catalogs can be configured for both events and series While the extended metadata can be fully configured, the default metadata catalogs are supposed to hold a minimum set of defined metadata fields. As the metadata catalogs are configurable, the External API provides means of gathering the metadata catalog configuration. This is done by "/api/events/{event_id}/metadata" and "/api/series/{series_id}/metdata". Those requests return self-describing metadata catalogs that do not just contain the values of all metadata fields but also a list of available metadata fields and their configuration. Note that the Opencast configuration defines which metadata catalogs are available for events and series. The following sections define data types that are used to manage metadata catalogs. 
fields Each metadata catalogs has a list of metadata fields that is described as array of JSON objects with the following fields: Example [ { "readOnly": false, "id": "title", "label": "EVENTS.EVENTS.DETAILS.METADATA.TITLE", "type": "text", "value": "Test Event Title", "required": true }, { "translatable": true, "readOnly": false, "id": "language", "label": "EVENTS.EVENTS.DETAILS.METADATA.LANGUAGE", "type": "text", "value": "", "required": false }, { "translatable": true, "readOnly": false, "id": "license", "label": "EVENTS.EVENTS.DETAILS.METADATA.LICENSE", "type": "text", "value": "", "required": false }, [...] ] values To modifiy values of metadata catalogs, a JSON array with JSON objects contained the following fields is used: Notes: - Fields which are not included in catalog_valueswill not be updated - Attempting to write readonly fields will result in error - Attempting to write empty values to a required field will result in error [ { "id": "title", "value": "Captivating title - edited" }, { "id": "creator", "value": ["John Clark", "Thiago Melo Costa"] }, { "id": "description", "value": "A great description - edited" } ] catalog Besides the metadata configuration, the full metadata catalog configuration includes some additional fields describing the catalog itself: Example { "flavor": "dublincore/episode", "title": "EVENTS.EVENTS.DETAILS.CATALOG.EPISODE", "fields": [ { "readOnly": false, "id": "title", "label": "EVENTS.EVENTS.DETAILS.METADATA.TITLE", "type": "text", "value": "Test 1", "required": true }, [...] ] } ] catalogs The metadata configuration including all metadata catalogs of a given objects is returned as JSON array whereas its elements are of type catalog. Example [ { "flavor": "dublincore/episode", "title": "EVENTS.EVENTS.DETAILS.CATALOG.EPISODE", "fields": [ { "readOnly": false, "id": "title", "label": "EVENTS.EVENTS.DETAILS.METADATA.TITLE", "type": "text", "value": "Test 1", "required": true }, { "readOnly": false, "id": "subjects", "label": "EVENTS.EVENTS.DETAILS.METADATA.SUBJECT", "type": "text", "value": [], "required": false }, { "readOnly": false, "id": "description", "label": "EVENTS.EVENTS.DETAILS.METADATA.DESCRIPTION", "type": "text_long", "value": "", "required": false }, [...] ] } ] Access Control Lists Opencast uses access control lists (ACL) to manage permissions of objects. Each access control list is associated to exactly one object and consists of a list of access control entries (ACE). The access control entries are a list of triples < role, action, allow> which read like "Users with role role are allowed to perform action action on the associate object if allow is true". Opencast defines the following ACL actions: Depending on the configuration of Opencast, there can be additional ACL actions. ace The access control entries are represented as JSON objects with the following fields: Example { "allow": true, "action": "write", "role": "ROLE_ADMIN" } acl The access control lists are represented as JSON arrays with element type ace. Example [ { "allow": true, "action": "write", "role": "ROLE_ADMIN" }, { "allow": true, "action": "read", "role": "ROLE_USER" } ] Workflow workflow_retry_strategy The retry strategy of a workflow operation definition in case of an error. The following values are possible: workflow_state The state the workflow instance is currently in. The following values are possible: workflow_operation_state The state the workflow operation instance is currently in. 
The following values are possible: operation_definition The workflow operation_definition entries are represented as JSON objects with the following fields: operation_instance The workflow operation_instance entries are represented as JSON objects with the following fields: Statistics parameters The Statistics endpoint can list the available statistics providers. Optionally, the endpoint provides information about supported data query parameters using this JSON object. The following types are available:
https://docs.opencast.org/r/8.x/developer/api/types/
2021-04-10T22:48:38
CC-MAIN-2021-17
1618038059348.9
[]
docs.opencast.org
Pixie's magical developer experience is enabled by the Pixie Platform, an edge machine intelligence system designed for secure and scalable auto-telemetry. The platform's key primitives are: The system-level design is shown below: The connection mode between the Vizier Module and the Control Cloud depends on how Pixie is deployed. To configure Pixie's data transfer mode, see the instructions here. In this scheme, the browser directly proxies into the Pixie Vizier Module and no customer data is transferred to Pixie's Control Cloud. Communication to Pixie's Control Cloud is limited to account and Kubernetes control data. In this scheme, data flows through the Control Cloud via a reverse proxy as encrypted traffic without any persistence. This allows users to access data without being in the same VPC/network and avoids connectivity issues between the browser and the cluster. This is the default scheme in Pixie Community. The Pixie Platform collects data with less than 5% CPU overhead and latency degradation. As shown here, the effective overhead reaches a steady state of ~2% in environments running any substantial workloads. This is dramatically more efficient than legacy monitoring systems. The Pixie Platform's distributed architecture allows deployments spanning multiple clusters, clouds, and deployment platforms. As shown in the architecture, this is achieved by deploying PEMs on Linux nodes in both K8s and non-K8s clusters, which are connected to Pixie Vizier Modules. Note: Support for central Pixie Vizier Module and PEM deployments on non-K8s Linux nodes has not yet been launched.
https://docs.pixielabs.ai/about-pixie/how-pixie-works/
2021-04-10T21:55:20
CC-MAIN-2021-17
1618038059348.9
[]
docs.pixielabs.ai
Recommendations for facial comparison input images The models used for face comparison operations are designed to work for a wide variety of poses, facial expressions, age ranges, rotations, lighting conditions, and sizes. We recommend that you use the following guidelines when choosing reference photos for CompareFaces or for adding faces to a collection using IndexFaces. Use an image with a face that is within the recommended range of angles. The pitch should be less than 30 degrees face down and less than 45 degrees face up. The yaw should be less than 45 degrees in either direction. There is no restriction on the roll. Use an image of a face with both eyes open and visible. When creating a collection using IndexFaces, use multiple face images of an individual with different pitches and yaws (within the recommended range of angles). We recommend that at least five images of the person are indexed—straight on, face turned left with a yaw of 45 degrees or less, face turned right with a yaw of 45 degrees or less, face tilted down with a pitch of 30 degrees or less, and face tilted up with a pitch of 45 degrees or less. If you want to track that these face instances belong to the same individual, consider using the external image ID attribute if there is only one face in the image being indexed. For example, five images of John Doe can be tracked in the collection with external image IDs as John_Doe_1.jpg, … John_Doe_5.jpg. Use an image of a face that is not obscured or tightly cropped. The image should contain the full head and shoulders of the person. It should not be cropped to the face bounding box. Avoid items that block the face, such as headbands and masks. Use an image of a face that occupies a large proportion of the image. Images where the face occupies a larger portion of the image are matched with greater accuracy. Ensure that images are sufficiently large in terms of resolution. Amazon Rekognition can recognize faces as small as 50 x 50 pixels in image resolutions up to 1920 x 1080. Higher-resolution images require a larger minimum face size. Faces larger than the minimum size provide a more accurate set of facial comparison results. Use color images. Use images with flat lighting on the face, as opposed to varied lighting such as shadows. Use images that have sufficient contrast with the background. A high-contrast monochrome background works well. Use images of faces with neutral facial expressions with mouth closed and little to no smile for applications that require high precision. Use images that are bright and sharp. Avoid using images that may be blurry due to subject and camera motion as much as possible. DetectFaces can be used to determine the brightness and sharpness of a face. Ensure that recent face images are indexed.
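These checks can be partially automated. The sketch below is one possible pre-screening step (not an official AWS sample): it calls DetectFaces via boto3 and inspects the pose, quality, and bounding-box values mentioned above; the thresholds and file name are illustrative.

```python
import boto3

rekognition = boto3.client("rekognition")

with open("face.jpg", "rb") as image_file:  # placeholder file name
    image_bytes = image_file.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    pose = face["Pose"]          # Yaw / Pitch / Roll, in degrees
    quality = face["Quality"]    # Brightness / Sharpness scores
    box = face["BoundingBox"]    # width/height as ratios of the image size

    # Guideline-style checks; verify the pitch sign convention (face up vs.
    # face down) for your use case before relying on it.
    acceptable = (
        abs(pose["Yaw"]) <= 45
        and abs(pose["Pitch"]) <= 45
        and box["Width"] * box["Height"] >= 0.05  # arbitrary "large enough" cutoff
    )
    print(acceptable, pose, quality["Brightness"], quality["Sharpness"])
```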
https://docs.aws.amazon.com/rekognition/latest/dg/recommendations-facial-input-images.html
2021-04-10T23:17:49
CC-MAIN-2021-17
1618038059348.9
[]
docs.aws.amazon.com
Core Reference Engage Core RequireJS Path 'engage/engage_core' Inherited object functions of the BackboneJS view; see the BackboneJS documentation. Added object functions and properties: EngageEvent Object Engage Model Inherited object functions of the BackboneJS model; see the BackboneJS documentation on how to use BackboneJS models. This model is a global singleton object and can be used by each plugin to add new models, which can then be used by other plugins. No special functions are added, but the model is filled with some default data. This default data can be used by each plugin that has a reference to the EngageModel. Plugin Object Each plugin must create and return an object with some properties which are set by the plugin itself. It is recommended to keep a reference to this object because some properties are set by the core after the plugin is processed.
https://docs.opencast.org/r/2.1.x/admin/modules/player.core.reference/
2021-04-10T22:31:51
CC-MAIN-2021-17
1618038059348.9
[]
docs.opencast.org
Caching and Databases Thola is able to cache data for IP addresses. When using the cache the following data is cached: - Device classes - Connection data (SNMP, HTTP, ...) The cached data is saved in a database. The database has to be specified by the user with the help of flags. The user can decide which database to use by indicating it with the “–db-drivername” flag. The possible values for this flag are: - built-in (badger database) - redis - mysql (supported, but deprecated) By default (if the flag is not set) caching is activated and the default database used for caching is the built-in badger database. If caching is not wanted the flag “–no-cache” has to be set, then no database is used to cache the data.
https://docs.thola.io/caching-and-database/
2021-04-10T22:55:23
CC-MAIN-2021-17
1618038059348.9
[]
docs.thola.io
Using MADlib for Analytics If the pod that runs a primary Greenplum segment instance fails or is deleted, the Greenplum StatefulSet restarts the pod. However, the Greenplum master instance remains offline so you can fail over to the standby master instance. This topic describes how to configure the MADlib open-source library for scalable in-database analytics in Greenplum for Kubernetes. About MADlib in Greenplum for Kubernetes Unlike with other Pivotal Greenplum distributions, Pivotal Greenplum for Kubernetes automatically installs the MADlib software as part of the Greenplum Docker image. For example, after initializing a new Greenplum cluster in Kubernetes, you can see that MADlib is available as an installed Debian Package: $ kubectl exec -it master-0 bash -- -c "dpkg -s madlib" Package: madlib Status: install ok installed Priority: optional Section: devel Installed-Size: 31586 Maintainer: [email protected] Architecture: amd64 Version: 1.15.1 Description: Apache MADlib is an Open-Source Library for Scalable in-Database Analytics To begin using MADlib, you simply use the madpack utility to add MADlib functions to your database, as described in the next section. Adding MADlib Functions To install the MADlib functions to a database, use the madpack utility. For example: $ kubectl exec -it master-0 bash -- -c "source ./.bashrc; madpack -p greenplum install" madpack.py: INFO : Detected Greenplum DB version 5.12.0. madpack.py: INFO : *** Installing MADlib *** madpack.py: INFO : MADlib tools version = 1.15.1 (/usr/local/madlib/Versions/1.15.1/bin/../madpack/madpack.py) madpack.py: INFO : MADlib database version = None (host=localhost:5432, db=gpadmin, schema=madlib) madpack.py: INFO : Testing PL/Python environment... madpack.py: INFO : > Creating language PL/Python... madpack.py: INFO : > PL/Python environment OK (version: 2.7.12) madpack.py: INFO : > Preparing objects for the following modules: madpack.py: INFO : > - array_ops madpack.py: INFO : > - bayes madpack.py: INFO : > - crf ... madpack.py: INFO : Installing MADlib: madpack.py: INFO : > Created madlib schema madpack.py: INFO : > Created madlib.MigrationHistory table madpack.py: INFO : > Wrote version info in MigrationHistory table madpack.py: INFO : MADlib 1.15.1 installed successfully in madlib schema. This installs MADlib functions into the default schema named madlib. Execute madpack -h or see the Greenplum MADlib Extension for Analytics documentation for Pivotal Greenplum Database for more information about using madpack. Getting More Information For more information about using MADlib, see:
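As a quick sanity check after running madpack, you can confirm from psql that the functions are callable in the target schema; for example (assuming the default madlib schema was used):

```sql
-- Returns the installed MADlib version string if the install succeeded.
SELECT madlib.version();
```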
https://greenplum-kubernetes.docs.pivotal.io/1-11/madlib.html
2021-04-10T21:19:41
CC-MAIN-2021-17
1618038059348.9
[]
greenplum-kubernetes.docs.pivotal.io
Neptune-Amazon SageMaker Integration¶ You can use Neptune to track experiments that you run on Amazon SageMaker. To set this up, perform the following steps: Register with AWS. Follow the instructions to create your AWS account. Create a Lifecycle configuration. Go to SageMaker Lifecycle configurations and click Create configuration. You can choose whatever name you want – just make sure to remember it. Modify the Create Notebook script so that it runs only once, at the creation of your SageMaker Notebook instance. Copy and paste the script into your Create Notebook tab (an illustrative sketch of such a script is shown below). In the PARAMETERS section, choose in which environments you want to install neptune-client. Create a Notebook instance. Start the Notebook. You can now version your Notebooks and track experiments in Amazon SageMaker with Neptune!
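As a rough sketch only (not Neptune's official script), a Create Notebook lifecycle configuration that installs neptune-client into selected conda environments could look something like this; the environment names and paths are assumptions to adjust for your instance.

```bash
#!/bin/bash
set -e

# PARAMETERS: conda environments to install neptune-client into (examples).
ENVS="python3 pytorch_p36"

for env in $ENVS; do
    # Activate each environment and install the client.
    source /home/ec2-user/anaconda3/bin/activate "$env"
    pip install neptune-client
    source /home/ec2-user/anaconda3/bin/deactivate
done
```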
https://docs-legacy.neptune.ai/execution-environments/amazon_sagemaker.html
2021-04-10T21:23:14
CC-MAIN-2021-17
1618038059348.9
[array(['../_images/sagemaker_neptuneml.png', 'Amazon SageMaker neptune.ai integration'], dtype=object)]
docs-legacy.neptune.ai
Organizing Experiments in a Dashboard¶ Neptune is a browser-enabled app that lets you visualize and browse experiments. The Experiments space displays all the experiments in a specific Project in table form. There are several ways to organize your experiments. Following options are available: Continue reading this page to learn more about each option. Using dashboard views¶ Experiment dashboard view is a saved setup of columns configuration and experiment filters. For example, you can filter rows (experiments) by metric value and select a subset of useful columns that represent relevant experiments meta-data. You can create many views in the project. Thanks to this you can quickly jump between different aspects of the project. Notice, that one view “keras with extra visuals” has a pin icon next to it. Use pin to set the default view. Note Every saved view is visible for everybody in the project. When dashboard views are useful?¶ There are few situation when you may want to create custom dashboard view: You work on separate idea or task within a project, and you want to see only relevant information. Your team explores separate ideas in the project and for each idea you want to have separate dashboard. You want to create separate view that contains only your experiments. You want to have a separate view for experiments that have model weights that was pushed to production. What can be customized in view?¶ A view is a saved setup of experiments filter and arrangement of columns. Experiments filter, either basic or advanced, can be saved in view. Learn more about it here: Searching and filtering experiments. Every setup of columns can be saved in view. Check section customizing columns below to learn more about it. How to create dashboard view?¶ In this short tutorial you will learn how to customize experiment dashboard and save it as a new view. Note To save view, you need to be project’s “contributor” or “owner”. Learn more about it here: Roles in project. Step 1: Go to experiment dashboard¶ Open experiments dashboard in your project. In this tutorial we use example project. Result¶ In this short tutorial you learned how to create new view that consist of experiments filter and arrangement of columns. You learned how to save new view and access it later from the list of views. Continue to the section below “Customizing columns” to learn more about what you can do with dashboard columns. Customizing columns¶ You can configure what data logged to Neptune is displayed as columns in the dashboard. Experiments meta-data that you can display are: metrics, parameters, text logs, properties, system parameters. Use “manage columns” button to decide what to display: Note Learn more how to log different types of meta-data: What objects can you log to Neptune. Auto-proposed columns¶ Note, that neptune automatically proposes columns based on what is different between experiments. This helps you see what changed quickly. Suggested columns are the right-most columns in the dashboard. See example below: Sort dashboard by column¶ You can decide over which column to sort the dashboard. Use arrows in the column header to do it: Decide how to display column data¶ For each column individually, you can decide how its data is displayed. Click on the cog icon and select display format: Grouping experiments¶ You can group experiments by one or more column(s). The dashboard displays the selected columns, allowing you to make in-group and across-groups analysis of the experiments. 
Each group is represented by the first experiment that appears according to the sorting order. After opening it, each group shows at most 10 experiments - all experiments can be viewed by clicking Show all.
https://docs-legacy.neptune.ai/organizing-and-exploring-results-in-the-ui/organizing-experiments.html
2021-04-10T22:39:28
CC-MAIN-2021-17
1618038059348.9
[array(['../_images/views-list.png', 'Views list'], dtype=object) array(['../_images/manage-columns.png', 'Manage columns'], dtype=object) array(['../_images/suggested-columns.png', 'Suggested columns'], dtype=object) array(['../_images/sort-columns.png', 'Sort columns'], dtype=object) array(['../_images/column-display-format.png', 'column display format'], dtype=object) ]
docs-legacy.neptune.ai
How to add loopback addresses. To configure loopback addresses, open the Configure Remote Services page.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Configuring-loopback-addresses/8.3.1
2021-04-10T22:04:27
CC-MAIN-2021-17
1618038059348.9
[]
docs.bluecatnetworks.com
- Reference > - Operators > - Aggregation Pipeline Stages > - $replaceWith (aggregation) $replaceWith (aggregation)¶ On this page Definition¶ $replaceWith¶ New in version 4.2. Replaces the input document with the specified document. The operation replaces all existing fields in the input document, including the _idfield. With $replaceWith, you can promote an embedded document to the top-level. You can also specify a new document as the replacement. The $replaceWithis an alias for $replaceRoot. The $replaceWithstage has the following form: The replacement document can be any valid expression that resolves to a document. For more information on expressions, see Expressions. Behavior¶ If the <replacementDocument> is not a document, $replaceWith errors and fails. If the <replacementDocument> resolves to a missing document (i.e. the document does not exist), $replaceWith errors and fails. For example, create a collection with the following documents: Then the following $replaceWith operation fails because one of the document does not have the name field: To avoid the error, you can use $mergeObjects to merge the name document with some default document; for example: Alternatively, you can skip the documents that are missing the name field by including a $match stage to check for existence of the document field before passing documents to the $replaceWith stage: Or, you can use $ifNull expression to specify some other document to be root; for example: Examples¶ $replaceWith an Embedded Document Field¶ Create a collection named people with the following documents: The following operation uses the $replaceWith stage to replace each input document with the result of a $mergeObjects operation. The $mergeObjects expression merges the specified default document with the pets document. The operation returns the following results: $replaceWith a Document Nested in an Array¶ A collection named students contains the following documents: The following operation promotes the embedded document(s) with the grade field greater than or equal to 90 to the top level: The operation returns the following results: $replaceWith a Newly Created Document¶ Example 1¶ An example collection sales is populated with the following documents: Assume that for reporting purposes, you want to calculate for each completed sale, the total amount as of the current report run time. The following operation finds all the sales with status C and creates new documents using the $replaceWith stage. The $replaceWith calculates the total amount as well as uses the variable NOW to get the current time. The operation returns the following documents: Example 2¶ An example collection reportedsales is populated with the reported sales information by quarter and regions: Assume that for reporting purposes, you want to view the reported sales data by quarter; e.g. To view the data grouped by quarter, you can use the following aggregation pipeline: - First stage: The $addFieldsstage adds a new objdocument field that defines the key kas the region value and the value vas the quantity for that region. For example: - Second stage: The $groupstage groups by the quarter and uses $pushto accumulate the objfields into a new itemsarray field. 
For example: - Third stage: The $projectstage uses $concatArraysto create a new array items2that includes the _idinfo and the elements from the itemsarray: - Fourth stage: The $replaceWithuses the $arrayToObjectto convert the items2into a document, using the specified key kand value vpairs and outputs that document to the next stage. For example: The aggregation returns the following document: $replaceWith a New Document Created from $$ROOT and a Default Document¶ Create a collection named contacts with the following documents: The following operation uses $replaceWith with $mergeObjects to output current documents with default values for missing fields: The aggregation returns the following documents:
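For reference, here is a representative sketch (not the exact samples referenced above) of the stage form and of the $mergeObjects guard described in the Behavior section; the collection and field names are illustrative.

```javascript
// General form of the stage:
//   { $replaceWith: <replacementDocument> }

// Promote the embedded "name" document while guarding against documents
// where the field is missing, by merging it over a default document:
db.collection.aggregate([
  { $replaceWith: { $mergeObjects: [ { _id: "$_id", name: "unspecified" }, "$name" ] } }
])

// Alternatively, skip documents that do not have an embedded "name" document
// before replacing:
db.collection.aggregate([
  { $match: { name: { $type: "object" } } },
  { $replaceWith: "$name" }
])
```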
https://docs.mongodb.com/v4.2/reference/operator/aggregation/replaceWith/
2021-04-10T22:08:26
CC-MAIN-2021-17
1618038059348.9
[]
docs.mongodb.com
Stamp Tool Modes When you select the Stamp tool, its properties are displayed in the Tool Properties view. Draw Behind Mode When enabled, your drawing strokes will be added underneath the existing artwork instead of over it. NOTE Your stroke will appear over your artwork as you draw it, until you release the mouse cursor or tablet pen.
https://docs.toonboom.com/help/harmony-20/essentials/drawing/stamp-tool-modes.html
2021-04-10T22:16:46
CC-MAIN-2021-17
1618038059348.9
[array(['../Resources/Images/SBP/Drawing/stamp-tool-modes.png', None], dtype=object) array(['../Resources/Images/SBP/an_draw_behind.png', None], dtype=object)]
docs.toonboom.com
Preferences Dialog Reference The following section contains an article for each tab in the Preferences dialog. Each article contains a list of all the preferences in its tab, as well as a description of each preference. Refer to this section if you want to know more about what a specific preference does or if you want to familiarize yourself with all of the preferences in Storyboard Pro.
https://docs.toonboom.com/help/storyboard-pro-20/preferences-guide/about-preference-reference.html
2021-04-10T22:34:55
CC-MAIN-2021-17
1618038059348.9
[]
docs.toonboom.com
Pivotal Greenplum for Kubernetes 1.7 Release Notes Pivotal Greenplum for Kubernetes 1.7 introduces new and changed features. Pivotal Greenplum for Kubernetes 1.7 is supported on the following platforms: - Pivotal Container Service (PKS) 1.4.0 (contains Kubernetes 1.13.5) - Google Kubernetes Engine (GKE) Kubernetes 1.13.5 Additional Kubernetes environments, such as Minikube, can be used for testing or demonstration purposes. Release 1.7.2 Pivotal Greenplum for Kubernetes version 1.7.2 introduces these changes: - Pivotal Greenplum Database was updated to version 5.21.4. Ubuntu packages were updated in order to resolve the following Ubuntu Security Notification (USN): Release 1.7.1 Pivotal Greenplum for Kubernetes version 1.7.1 introduces these changes: - Pivotal Greenplum Database was updated to version 5.21.3. Ubuntu packages were updated in order to resolve the following Ubuntu Security Notifications (USNs): Release 1.7.0 Pivotal Greenplum for Kubernetes version 1.7.0 introduces the following features and changes: - The Greenplum Operator pod now adds the gpadmin user to the sudoers file. - In previous releases, if you forgot to update the Greenplum operator values.yaml file to point to the correct image in the container registry, then the operator was deployed with a bad image and it would fail to pull, instead going into the ImagePullBackOff state. While in this state, if you tried to use helm delete to delete the operator before recreating it, a pre-delete-greenplum-operator job was created that used the same bad image, and this delete job also failed with the ImagePullBackOff state. The only workaround was to manually delete the pre-delete-greenplum-operator job to allow the delete operation to complete. This problem has been resolved in version 1.7.0. Ubuntu packages were updated in order to resolve the following Ubuntu Security Notifications (USNs): - Pivotal extension framework (PXF) and its associated plug-ins are not supported.
https://greenplum-kubernetes.docs.pivotal.io/1-7/release-notes.html
2021-04-10T21:48:22
CC-MAIN-2021-17
1618038059348.9
[]
greenplum-kubernetes.docs.pivotal.io
Controls for modelling and rendering levels of smoothing can be found in the Carrara 8.5 assembly room. Users no longer need to enter the modelling room to adjust these values. This tutorial assumes that the user has properly set up Carrara and knows how to load content from the Smart Content Pane or the Content Pane. It assumes the user knows the difference between a SubD model and a standard model. In order to use smoothing on a model it must be a SubD model. Genesis and Genesis 2 are SubD capable as are figures like the Sub Dragon. If the figure is not a SubD model then the smoothing controls will not be visible in the General tab. A figure can be converted to a SubD figure in Carrara 8.5 by adding smoothing. Select the figure and go to Edit–>Smooth Objects. Check the the “Change Smoothing” box and select the “Smooth” radio button. Smoothing can be added to legacy parametric figures but it should be done in the last step of the workflow. Once smoothing is added the figure will not pose and will act as a static object. In order for the Smoothing options to show up in the General Tab the mesh level line of the model must be selected. This line is the third line down in the hierarchy. For the Genesis figure it is the Actor Line. The mesh line can be identified for other figures as it won't have any children. Smoothing Controls for render and model level smoothing can be found in the Assembly Room under the General Tab. To access the controls Click the 'General Tab' and look under the section labelled with the model's name. Carrara 8.5 gives the user the option to adjust two levels of smoothing: Modeling Level affects how smooth the mesh appears in the view port. Rendering Level affects how smooth the model looks in render. It is possible to have a low modeling level to save resources while working but a high rendering level. Adjusting the value for either option can be done with the slider. The user can also click on the numeric value and enter in any integer value. The value for render level smoothing must be greater than or equal to the value for modeling level smoothing. If modeling level is increased beyond the value of render level then rendering level will automatically be increased. If rendering level is lowered below the value of the modeling level then modeling level will automatically be decreased. High levels of modeling level smoothing may cause Carrara to crash. Each level of smoothing increases the polygon count by a factor of four. Keep this in mind when choosing an appropriate modeling level. High levels of rendering level smoothing can greatly increase render time. In some cases renders with high levels of render level smoothing can crash. Please make sure your system is robust enough to handle the rendering level smoothing value you choose. Smoothing levels can now be adjusted in the Assembly Room. These controls can be accessed when the mesh level of the model hierarchy is selected in the Instances Tray. The controls are found within the General Tab of the Assembly Room.
http://docs.daz3d.com/doku.php/public/software/carrara/8_5/userguide/modeling/tutorials/assembly_room_smoothing_levels/start
2021-04-10T21:23:38
CC-MAIN-2021-17
1618038059348.9
[]
docs.daz3d.com
How to Create a Page Layout for a Losant Experience An Experience View’s layout holds elements and visuals that are common across all pages in your custom experience. This is generally one of the first things you’ll create before building any other components or pages. For this walkthrough, the layout that you’ll be building looks like the following: This layout includes a header and a footer. The header includes a logo, a few placeholder navigation links, and a “Log In” link. The footer includes a copyright statement. The blank space in the middle is where individual pages will be rendered within this layout. Experience Views The Experience Views functionality can be found in the “Edit” Experience section of Losant. In many cases, you may have resources already created as part of the automatic setup. The screenshots throughout this guide start from a blank experience and build up the same example that is auto-generated for you. In the Edit section, you’ll see multiple types of resources, including: Layouts, Pages, and Components. To create your Layout, click the Add button next to the Layouts folder. Layouts After you click the Add button, you’ll be presented with a blank layout. Every layout requires a name and some content. The HTML code for. Note: The remainder of this walkthrough is written assuming you are using Bootstrap as your framework. HTML Title Most of the HTML in the example layout is markup for laying out the content. The first is a template that can found in the header’s title tag. <title>{{ experience.page.name }} | My Experience</title> Every time a page is rendered, some amount of context from your application is provided that you can use inside your page or layout. Some fields are guaranteed, like those found on the experience object. Custom data, which is provided by the workflow that’s handling the current page request, want to change “My Experience” to something relevant to your use case. HTML Meta Description Immediately below the title is a tag for the description: <meta name="description" content="{{section 'metaDescription'}}"> This uses a special section helper. A section defines a placeholder that can be filled by the page being rendered within this layout. In this case, you want the override the description meta tag on a per-page basis. You’ll see this being used in later parts of this walkthrough. Logo The next thing to change is the logo at the top left of the page. Here is the logo in the example: To download, right-click the image and select “Save Image As..” <a class="navbar-brand" href="/" style="padding-top:0; padding-bottom:0;"> <img alt="Logo" style="margin-top:13px; height: 24px;" src="IMAGE_URL"> </a> This is a placeholder image provided as part of the example. To replace this image, you’ll upload your logo somewhere, such as Files or Amazon S3, and replace IMAGE_URL in the src attribute with the new URL. Alternatively, you can Base64 encode the image and place it directly in the layout. Once your logo has been updated, click “Create Layout” at the bottom of the page. You have now successfully configured the example layout, and modified the content to better reflect your application. Next, you will move into components. Components The next item is the userIndicator component, which provides the “Log In” link in the top right corner of the page. This toggles between a “Log In” link and a dropdown menu displaying the user’s first name based on whether a user is logged in. 
{{component "userIndicator"}} Components provide a way to group and reuse page elements. Components are added to a page or layout using the component Handlebars helper. The name of the component is then used to control which component should be placed on the page. In this example, the component's name is "userIndicator". To create that component: Click the "Add" button at the top of the Components list. Just like with layouts, components require a name and some content. You can get the content for this example from GitHub. Copy/paste it into your new component and click 'Create Component' at the bottom of the page; a rough sketch of such a component is shown at the end of this page. The component checks for the experience.user object to determine what to display. If there is no user object, it displays a "Log In" link. If the user object does exist, it displays the user's first name using the experience.user.firstName property and adds a menu item to log out. Page Helper Now return to the layout. You'll see the special page Handlebars helper after the user indicator component you just built. {{ page }} The page helper defines where the page should render in this layout. Every layout requires a page helper to be defined somewhere. You'll create pages in subsequent parts of this walkthrough. Google Analytics The last component on the page, gaTracking, is used to add Google Analytics tracking to this custom web experience. {{component "gaTracking" "UA-XXXXX-X"}} As we did with adding the userIndicator, create a new component and name it gaTracking. The contents of this component can be found on GitHub. To use this component, replace UA-XXXXX-X with your specific tracking ID that is provided by Google Analytics. This code is provided by Google Analytics, so it is outside the scope of this walkthrough; the one change is that you can now reference the tracking ID by using the {{.}} template. In Handlebars, a dot refers to the entire context object, which in this case is a string representing the tracking ID that was passed in. What's Next At this point, you now have a complete layout with two components. The next section of this walkthrough covers adding a "Log In" page that will be rendered within this layout.
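The sketch below is only an illustration of the user-indicator logic described above (check for experience.user, then show either a "Log In" link or the user's first name with a log-out item). It is not the official component from GitHub, and the /login and /logout paths are assumptions.

```handlebars
{{#if experience.user}}
  <li class="dropdown">
    <a href="#" class="dropdown-toggle" data-toggle="dropdown">
      {{experience.user.firstName}} <span class="caret"></span>
    </a>
    <ul class="dropdown-menu">
      <li><a href="/logout">Log Out</a></li>
    </ul>
  </li>
{{else}}
  <li><a href="/login">Log In</a></li>
{{/if}}
```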
http://docs.prerelease.losant.com/guides/building-an-experience/page-layout/
2021-04-10T23:00:52
CC-MAIN-2021-17
1618038059348.9
[array(['/images/experiences/walkthrough/views/page-layout/blank-layout.png', 'Blank Layout Blank Layout'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/views-tab.png', 'Views Tab Views Tab'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/add-layout.png', 'Add Layout Add Layout'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/layout-content.png', 'Layout Content Layout Content'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/kanarra_sm.png', 'Layout Content Layout Content'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/add-component.png', 'Add Component Add Component'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/component-content.png', 'Component Content Component Content'], dtype=object) array(['/images/experiences/walkthrough/views/page-layout/ga-tracking-component.png', 'Google Analytics Tracking Component Google Analytics Tracking Component'], dtype=object) ]
docs.prerelease.losant.com
Product: The Hanging Gardens of Nimrud Product ID: 13149 DAZ Original: NO Released: August 23, 2011 Updated: No Updates. Depending on what version you are using, the model will be located in one or several folders. With Poser and DAZ Studio you will have trouble initially seeing the model. This is because the model is huge. In Poser preview mode, you will need to adjust your Main Camera and set the "YON" setting to 5,000+. Use the Cameras provided with the model or set your DollyY to: DAZ Studio doesn't have the preview "YON" issues Poser has, so if you pan out enough you will see the model. Use the Cameras provided with the model or set your DollyY to: Because this model is actually a series of separate smaller models, you'll want to be careful when moving it. In Poser and DAZ Studio, the model is set up in a hierarchy format with the "City Base" part being the parent. In Vue, the model has been grouped into sections for easy movement. This model does have some morphs that are available in the Poser and DAZ Studio versions. Because of the modular nature of this model, many different forms of customization are available. Numerous additional wall, arch and tower parts have been included. The Vue and Poser versions have very simple billboard plants on the temple garden areas. Heavier displacement can be used to make these areas more three-dimensional. You can also import/add more 3D plants to the scene from other vendors such as LB Botanicals or Greenworks. Within Vue, materials have been arranged to take advantage of the Eco-system feature to create much more realistic plants. I suggest using 100% density on these areas. I also suggest painting the areas around the temple for more control; sometimes the plants grow into the temple stairway. Improving the Main and Temple Plazas You can remove the existing material from the Main City Base part (RBaseInt) and replace it with a higher-resolution tileable texture in Vue or Poser/DAZ Studio (later versions). Because of the epic scale of this model, placing additional figures and props can be challenging if not discouraging at times. They get dwarfed by the architecture. Included with the model (in the Poser and DAZ Studio versions) are "Place Figure" poses. These poses will place any figure at a specific location on the model. The poses only include X-Y-Z Trans information for the BODY section, so they will not interfere with your pre-existing poses. Poser props cannot be moved with these poses, but if the props are parented to a figure, they'll go with the figure; once transported, unparent them. There is a matching set of Camera poses included in the Poser and DAZ Studio versions. Visit our site for further technical support questions or concerns: Thank you and enjoy your new products! DAZ Productions Technical Support 12637 South 265 West #300 Draper, UT 84020 Phone: (801) 495-1777 TOLL-FREE 1-800-267-5170
http://docs.daz3d.com/doku.php/artzone/azproduct/13149
2018-09-18T23:36:50
CC-MAIN-2018-39
1537267155792.23
[]
docs.daz3d.com
To add a content slider you can use a "Content Slider" module in Elementor, Visual Composer, or the shortcode. You can find the details of the shortcode inside the Shortcode Helper window. Adding with Elementor - Go to the element list and find the [RT] Slider element - Add slides by using the "+ ADD ITEM" button and configure them.
http://docs.rtthemes.com/document/content-slider-5/
2018-09-19T00:03:40
CC-MAIN-2018-39
1537267155792.23
[]
docs.rtthemes.com
CORAL Management User Guide¶ About CORAL Management¶ Developed by Texas A&M University, CORAL Management provides a way to store and access digital copies of documents related to the overall management of electronic resources. Component Overview¶ CORAL Management has three components in the primary navigation at the top of each page. - New Document - Admin Document records are listed alphabetically by name and the name field can be searched. Additionally, the records can be filtered by category and document type. New Document¶ Select New Document from the main navigation to begin adding new document records. This will open the New Document pop-up window. Name: Document name to be uploaded. The Management module only allows one active document per record so a document named Sample Letter to Excess Download Offender would only include a copy of the letter. Description: Brief explanation of the document, if necessary. **Type: **type of document. The options listed in the dropdown box may be created from the Admin tab or by using the ‘add type’ link under the dropdown box. In the case of the Sample Letter to Excess Download Offender, the type could be template. Last Document Revision: The date the document was last revised. If no date is entered, today’s date is used by default. Categories: The group of documents to which the document belongs. A document can be included in more than one category. For example, the document ‘Retiree Policy Regarding Access to Electronic Resources’ could be in both an Access Policy category and a Licensing category. Categories may be created from the Admin tab or by using the add category link under the Categories selection box. Selecting the Browse… button opens the navigation pane used to browse and upload the document. Archived: Used to identify documents that have been superseded by a newer version of the document. Selecting the Add Optional Note link allows notes to be linked to the document by opening up two additional fields. Note: Provides a space to add any notes about the document. Note Type: Provides a way to categorize the type of note. The options listed in the dropdown box may be created from the Admin tab or by using the add note type link under the dropdown box. Editing a Document Record¶ Once a document record has been created, the document record opens. A document record can also be opened by selecting it from the Home screen. The above record is for a Cancellation of an E-Journal checklist. Below the name is the description of the document, the associated categories, the documents creation date and creator, and the date the record was last updated and by whom. In this example, there is an archived version of the checklist as evidenced by the *1 archive(s) available” note. Selecting show archives makes archived copies visible and accessible. To add another archived version, select upload archived document and fill in the appropriate information. As previously mentioned, there can only be one current/active document. Therefore, to add a new version of a document, the current document must be archived first. Once that happens, an upload new document link will display. Select this and complete the necessary information to add the new version.
http://docs.coral-erm.org/en/latest/management.html
2018-09-18T23:50:22
CC-MAIN-2018-39
1537267155792.23
[array(['img/management/managementHomePage.png', 'Screenshot of Management Home Page'], dtype=object) array(['_images/managementNewDocument.png', "Screenshot of Management's New Document form"], dtype=object) array(['_images/managementAddOptionalNote.png', "Screenshot of Management's Editing a Document Record form"], dtype=object) array(['_images/managementEditDocumentRecord.png', "Screenshot of Management's Editing a Document Record"], dtype=object) ]
docs.coral-erm.org
This page exists within the Old ArtZone Wiki section of this site. Read the information presented on the linked page to better understand the significance of this fact. Install the product. Target the Carrara application folder itself and not a subfolder in Carrara. For example, if this is your Carrara folder path: C:\Program Files\DAZ 3D\Carrara8 Then the Carrara8 folder above should be the target folder indicated in the installer. In Carrara, you'll need to add the folder to the browser. The readme tells you where the files for the product will be. You can also add the content (.car & .obj) to the browser with the 'Add Folder' button in the upper right corner. If this is your Carrara 8 install path: Program Files\DAZ 3D\Carrara 8 Then in the browser, click the 'Objects' tab. In the upper right corner of the browser, click the icon that looks like a piece of paper. You'll see a menu appear. Click 'Add Folder.' Browse to your Carrara 8 folder and within it to this folder: …Carrara 8\Presets\Objects\MikeMoir\ReplicatorGrass Select 'ReplicatorGrass.' Choose 'OK.' Then choose 'Plants' as the type of content. In the Objects menu in the browser, scroll down to 'ReplicatorGrass' and select it. Now look at the larger pane in the Browser. You should see several plant choices. Try one of them, like Grass 2.
http://docs.daz3d.com/doku.php/artzone/azproduct/5537
2018-09-18T22:54:05
CC-MAIN-2018-39
1537267155792.23
[]
docs.daz3d.com
ModOnly The @ModOnly annotation is as simple as the name suggests: it only registers a ZenClass if the provided mod is loaded.
Example (CraftTweaker Test Project, ModOnly):

    @ModOnly(value = "mcp")
    @ZenClass(value = "crafttweaker.tests.modOnly")
    @ZenRegister
    public class ModOnlyWiki {
        @ZenMethod
        public static void print() {
            CraftTweakerAPI.logInfo("print issued");
        }
    }

What classes can be annotated / Additional Info
- You can annotate all Java classes that also have the @ZenRegister annotation. Technically, you can annotate any class, but it only has an impact on classes that are registered.
- The annotation requires a String value that represents the mod name (isModLoaded(annotation.getValue()) has to return true if the mod is loaded).
https://crafttweaker.readthedocs.io/en/latest/Dev_Area/ZenAnnotations/Annotation_ModOnly/
2018-09-18T22:55:04
CC-MAIN-2018-39
1537267155792.23
[]
crafttweaker.readthedocs.io
How to: Add MFC Support to Resource Script Files (C++)
Resource script files created with MFC support enable features such as menu prompt strings, list contents for combo box controls, and ActiveX control hosting. However, you can add MFC support to existing .rc files that do not have it.
To add MFC support to .rc files
- Open the resource script file. Note: If your project doesn't already contain an .rc file, please see Creating a New Resource Script File.
- In Resource View, highlight the resources folder (for example, MFC.rc).
- In the Properties window, set the MFC Mode property to True. Note: In addition to setting this flag, the .rc file must be part of an MFC project. For example, just setting MFC Mode to True on an .rc file in a Win32 project won't give you any of the MFC features.
Requirements
MFC
See Also
Resource Files
Resource Editors
https://docs.microsoft.com/en-us/cpp/windows/how-to-add-mfc-support-to-resource-script-files?view=vs-2017
2018-09-18T23:16:56
CC-MAIN-2018-39
1537267155792.23
[]
docs.microsoft.com
Understanding the Availability Service
Applies to: Exchange Server 2010 SP3, Exchange Server 2010 SP2
Contents: Performance | Distribution Group Handling | Availability Service API | Availability Service Network Load Balancing | Methods Used to Retrieve Free/Busy Information
Overview of the Availability Service
Note: If you have Outlook 2007 clients running on Exchange Server 2003 mailboxes, Outlook 2007 will use public folders.
Availability Service Process Flow
The following figure illustrates the process flow for the Availability service.
Improvements Over Exchange 2003 Free/Busy
The following table lists the improvements to free/busy functionality that Exchange 2010 and Exchange 2007 provide over Exchange 2003. Free/busy improvements
Information About Away Status
This functionality makes it easier to set and manage automatic-reply (away status) e-mail messages for both information workers and administrators. For more information, see Managing Automatic Replies.
Performance
You can use the performance counters listed under MSExchange Availability Service in the Performance Monitor tool to automatically collect performance data about the Availability service from local or remote computers that are running Exchange 2010.
Distribution Group Handling
Availability Service API
The Availability service is part of the Exchange 2010 programming interface. It's available as a Web service to let developers write third-party tools for integration purposes.
Availability Service Network Load Balancing
Using Network Load Balancing (NLB) on your Client Access servers that are running the Availability service can improve performance and reliability for your users who rely on free/busy information. Outlook 2007 discovers the Availability service URL using the Autodiscover service. To use network load balancing, create a DNS entry for the Network Load Balancing (NLB) array of Client Access servers, where <domain name> is the name of your domain.
Note: For more information, see Network Load Balancing Technical Reference and Network Load Balancing Clusters. You can also search for third-party load-balancing software Web sites. For information, see Configure the Availability Service for Network Load Balanced Computers.
Methods Used to Retrieve Free/Busy Information
The following table lists the different methods used to retrieve free/busy information in different single-forest topologies.
https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2010/bb232134(v=exchg.141)
2018-09-18T23:27:08
CC-MAIN-2018-39
1537267155792.23
[array(['images/bb232134.d6e4baed-174a-47f7-90d1-340109b57db7%28exchg.141%29.gif', 'Availabililty Service Process Flow Availabililty Service Process Flow'], dtype=object) ]
docs.microsoft.com
Change log
This summarizes the changes between released versions. For a complete change log, see the git history. For details on the changes, see the respective git commits indicated at the start of the entry.
Version next
Breaking changes
Client observations that have been requested by sending the Observe option must now be taken up by the client. The warning that was previously shown when an observation was shut down due to garbage collection can not be produced easily in this version, and will result in a useless persisting observation in the background. (See <>)
Server resources that expect the library to handle blockwise by returning true to needs_blockwise_assembly do not allow random initial access any more; this is especially problematic with clients that use a different source port for every packet. The old behavior was prone to triggering an action twice on non-safe methods, and generating wrong results in block1+block2 scenarios when a later FETCH block2:2/x/x request would be treated as a new operation and return the result of an empty request body rather than being aligned with an earlier FETCH block1:x/x/x operation.
Version 0.4a1
Security fixes
- 18ddf8c: Proxy now only creates log files when explicitly requested
- Support for secured protocols added (see Experimental Features)
Experimental features
Support for OSCORE (formerly OSCOAP) and CoAP over DTLS was included. These features both lack proper key management so far, which will be available in a 0.4 release.
Added implementations of Resource Directory (RD) server and endpoint.
Support for different transports was added. The transport backends to enable are chosen heuristically depending on operating system and installed modules.
- Transports for platforms not supporting all POSIX operations to run CoAP correctly were added (simple6, simplesocketserver). This should allow running aiocoap on Windows, MacOS and using uvloop, but with some disadvantages (see the respective transport documentations).
Breaking changes
- 8641b5c: Blockwise handling is now available as a stand-alone responder. Applications that previously created a Request object rather than using Protocol.request now need to create a BlockwiseRequest object.
- 8641b5c: The .observation property can now always be present in responses, and applications that previously checked for its presence should now check whether it is None.
- cdfeaeb: The multicast interface using queuewithend was replaced with asynchronous iterators
- d168f44: Handling of sub-sites changed, subsites’ root resources now need to reside at path ("",)
Deprecations
- e50e994: Rename UnsupportedMediaType to UnsupportedContentFormat
- 9add964 and others: The .remote message property is not necessarily a tuple any more, and has its own interface
- 25cbf54, c67c2c2: Drop support for Python versions < 3.4.4; the required version will be incremented to 3.5 soon.
Assorted changes
- 750d88d: Errors from predefined exceptions like BadRequest(“…”) are now sent with their text message in the diagnostic payload
- 3c7635f: Examples modernized
- 97fc5f7: Multicast handling changed (but is still not fully supported)
- 933f2b1: Added support for the No-Response option (RFC7967)
- baa84ee: V4MAPPED addresses are now properly displayed as IPv4 addresses
Version 0.3
Features
- 4d07615: ICMP errors are handled
- 1b61a29: Accept ‘fe80::…%eth0’ style addresses
- 3c0120a: Observations provide modern async for interface
- 4e4ff7c: New demo: file server
- ef2e45e, 991098b, 684ccdd: Messages can be constructed with options, modified copies can be created with the .copy method, and default codes are provided
- 08845f2: Request objects have .response_nonraising and .response_raising interfaces for easier error handling
- ab5b88a, c49b5c8: Sites can be nested by adding them to an existing site, catch-all resources can be created by subclassing PathCapable
Possibly breaking changes
- ab5b88a: Site nesting means that server resources do not get their original Uri-Path any more
- bc76a7c: Location-{Path,Query} were opaque (bytes) objects instead of strings; distinction between accidental and intentional opaque options is now clarified
Small features
- 2bb645e: set_request_uri allows URI parsing without sending Uri-Host
- e6b4839: Take block1.size_exponent as a sizing hint when sending block1 data
- 9eafd41: Allow passing in a loop into context creation
- 9ae5bdf: ObservableResource: Add update_observation_count
- c9f21a6: Stop client-side observations when unused
- dd46682: Drop dependency on obscure built-in IN module
- a18c067: Add numbers from draft-ietf-core-etch-04
- fabcfd5: .well-known/core supports filtering
Internals
- f968d3a: All low-level networking is now done in aiocoap.transports; it’s not really hotpluggable yet and only UDPv6 (with implicit v4 support) is implemented, but it provides an extension point for alternative transports.
- bde8c42: recvmsg is used instead of recvfrom, requiring some asyncio hacks
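As a rough illustration of the “Version next” breaking change about client observations, here is a minimal, hedged Python sketch of taking up an observation with aiocoap’s async-for interface. The URI is a placeholder, and exact names and behaviour may differ between aiocoap versions; treat this as a sketch rather than authoritative usage.

    import asyncio
    from aiocoap import Context, Message, GET

    async def observe_resource():
        # Create a client context (the transport is chosen heuristically).
        protocol = await Context.create_client_context()

        # Register an observation by setting the Observe option to 0.
        request = Message(code=GET, uri="coap://example.com/sensor/temperature")
        request.opt.observe = 0

        pr = protocol.request(request)

        # The first response confirms whether the observation was accepted.
        first = await pr.response
        print("initial:", first.payload)

        # Take up the observation: iterate over notifications so that it is
        # not silently dropped, as required by the breaking change above.
        async for notification in pr.observation:
            print("update:", notification.payload)

    asyncio.get_event_loop().run_until_complete(observe_resource())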
https://aiocoap.readthedocs.io/en/latest/news.html
2018-09-19T00:29:29
CC-MAIN-2018-39
1537267155792.23
[]
aiocoap.readthedocs.io
mrthreshold
Usage
mrthreshold [ options ] input output
- input: the input image to be thresholded.
- output: the output binary image mask.
Description
By default, an optimal threshold is determined using a parameter-free method. Alternatively, the threshold can be defined manually by the user.
Options
- -abs value specify threshold value as absolute intensity.
- -percentile value threshold the image at the ith percentile.
- -top N provide a mask of the N top-valued voxels
- -bottom N provide a mask of the N bottom-valued voxels
- -invert invert output binary mask.
- -toppercent N provide a mask of the N% top-valued voxels
- -bottompercent N provide a mask of the N% bottom-valued voxels
- -nan use NaN as the output zero value.
- -ignorezero ignore zero-valued input voxels.
- -mask image compute the optimal threshold based on voxels within a mask.
References
If not using any manual thresholding option: Ridgway, G. R.; Omar, R.; Ourselin, S.; Hill, D. L.; Warren, J. D. & Fox, N. C. Issues with threshold masking in voxel-based morphometry of atrophied brains. NeuroImage, 2009, 44, 99-111
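As a rough illustration of typical invocations, the following Python sketch shells out to mrthreshold via subprocess. It assumes MRtrix3 is installed and on the PATH; the file names are placeholders, and only options documented above are used.

    import subprocess

    # Default behaviour: parameter-free optimal threshold, computed from
    # voxels inside a brain mask.
    subprocess.run(
        ["mrthreshold", "fa.mif", "fa_mask.mif", "-mask", "brain_mask.mif"],
        check=True,
    )

    # Manual threshold: keep voxels with absolute intensity above 0.3,
    # writing NaN instead of zero for excluded voxels.
    subprocess.run(
        ["mrthreshold", "fa.mif", "fa_thresholded.mif", "-abs", "0.3", "-nan"],
        check=True,
    )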
https://mrtrix.readthedocs.io/en/latest/reference/commands/mrthreshold.html
2018-09-19T00:11:25
CC-MAIN-2018-39
1537267155792.23
[]
mrtrix.readthedocs.io
1 Introduction
This how-to explains how to upload a file in your app using ATS. You have some test situations in which you must upload a file to finish that test situation. During manual testing, you upload this file from your local computer into your app. ATS works similarly; the only difference is that the local computer is your Selenium hub. This is regarding file uploads by ATS. Quick summary: 1 This is only possible when you prepare your own files on that server. 2 This depends on where the agent is installed.
This how-to will teach you how to do the following:
- Understand why it is difficult to upload files in your app using automated testing
- Upload a file using ATS
- Understand the approach for each Selenium setup
2 Prerequisites
Before starting with this how-to, make sure you have the following prerequisites in place:
- Complete How to Create a Test Case
- Know your Selenium setup (a provider like BrowserStack, local server, etc.)
3 Uploading a File
3.1 Introduction
To upload a file in your app, ATS must have access to that file. Selenium simulates a user on a local computer. When ATS gives the command to upload a file, it provides a file path to the file you want to upload. Since there are three different Selenium setups, there are also three different situations.
The first situation is that you use Selenium on a local server. This means Selenium has no access to your local files. But you can add these files to the server or create a set of generic test files for that server.
The second situation is that you use Selenium SaaS. This means Selenium has no access to your local files unless you use an agent. When you use the agent, situation 1 applies. If you do not use an agent, the Selenium SaaS creates a VM session for each test case you run. This means there are no constant values like on your local Selenium server. Some Selenium SaaS providers upload a generic set of test files into each VM session that you can use in your test case. In the quick summary in the 1 Introduction chapter, you can see which Selenium SaaS providers offer these files.
The third situation is that you use a Selenium SaaS agent. ATS executes the test on the machine on which you installed the agent. In most cases this is a server inside your network. ATS can find all the files on this machine.
3.2 Uploading a File Using ATS
ATS has a standard action for uploading files into your Mendix app. The Set File Manager action uploads a file from the local computer into the app using a file path. As explained earlier, the file must be on the local machine for this to work. The Set File Manager action
A possible file path is: C:\users\ats\documents\receipt-1.png File Uploader widget in the app
3.3 Advice
Each Selenium setup has different possibilities. We advise that if you want to test the uploading of files in your Mendix app, you use a generic test file set. Create a set of files to use in your tests and make sure that your Selenium setup has access to it.
4 Uploading a File Using a Local Selenium Server (Docker)
When testing using a local Selenium server, ATS executes the test on that server. The Set File Manager action only has access to the files on that server. You can create a generic set of test files or just add files to the server and use them in your tests.
5 Uploading a File in BrowserStack (SaaS)
When testing using BrowserStack, ATS executes the test against a new VM session every time. So every run gets a new VM session and afterwards BrowserStack deletes the entire session.
With this setup it is not possible to upload your own files. BrowserStack does provide a large set of test files that they upload in each VM session. You can use the Set File Manager action to achieve this. Those files are always present, so you don’t have to change the file path every time. You can find the BrowserStack test files here. These files are possibly outdated and not maintained by Mendix. For the latest version, please contact BrowserStack.
6 Uploading a File with a Selenium SaaS Agent
When you use a Selenium SaaS provider, you can also use their agent. Each provider gives access to an agent that allows you to test on a local machine. In the Set File Manager action, you can provide the file path. This file path depends on where you run the agent: either a local server or a local computer.
7 Next Up
You have now learned how to upload a file and whether it is possible with your Selenium setup. The next how-to is How to Assert Data Grid Rows. You can find an overview of all the how-tos and the structure on the ATS 2 How-to’s page. We advise you to follow the predefined structure.
https://docs.mendix.com/ats/howtos/ht-version-2/upload-file-using-ats-2
2018-09-18T22:53:58
CC-MAIN-2018-39
1537267155792.23
[array(['attachments/upload-file-using-ats-2/set-file-manager.png', None], dtype=object) array(['attachments/upload-file-using-ats-2/file-uploader-widget-app.png', None], dtype=object) ]
docs.mendix.com
Mating planes can only be one level deep. For example, the rule: [Face Name]@[Generated model name]@[Generated sub assembly name] would not be valid.
http://docs.driveworkspro.com/Topic/GTCreateProfileCenterMate
2018-09-19T00:24:27
CC-MAIN-2018-39
1537267155792.23
[]
docs.driveworkspro.com
How to refund a transaction? As a product seller, you should issue refunds if a customer is not satisfied with the content. It is, however, up to you and then later to PayPal to judge if a customer should receive a refund. Tips on avoiding fraudulent refunds - Make sure your product descriptions are well-written - the buyer should be able to install/apply and work with the product without your further assistance just by reading the description or any added instructions. Also, providing refund information is highly recommended. Please don't put a 'no refunds' policy in your account - all buyers can get a refund via PayPal, without involving you. - If a buyer contacts you with a complaint or asking for a refund, don't ignore it. PayPal tends to resolve refund cases in favor of buyers (even more so for digital downloads), so it is in your own interest to stay attentive to whether or not the refund request is fraudulent. Here's how you as the seller can issue a refund.
http://docs.sellfy.com/article/28-how-does-the-refund-process-work
2017-06-22T18:37:43
CC-MAIN-2017-26
1498128319688.9
[]
docs.sellfy.com
Guaranteed Message Queuing Limits The maximum number of Guaranteed messages that Solace routers can queue at one time for delivery will be cumulatively reduced by the following factors: - the number of messages that provisioned endpoints receive—If a single published message is queued for delivery on n endpoints, that counts as n messages queued for delivery, even though only one copy of the message is stored in either the ADB or external disk storage array. - whether data from the router is being replicated—When the Replication facility is used, a copy of a message in the Replication queue counts as another message queued for delivery. That is, if a message is queued for delivery on n endpoints, and it is to be replicated, that counts as n +1 messages queued for delivery. - whether the router is processing transactions—When transactions are used, the router has to maintain extra information for each message in a transaction until the transaction is committed or rolled back. - whether there are provisioned endpoints configured to respect published messages’ TTL expiry times—When an endpoint respects a message’s TTL, the TTL is recorded and maintained until the message is consumed or discarded.
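As a toy illustration of how these factors combine, the following Python sketch (not Solace code, just the counting rule described above) tallies messages queued for delivery given how many endpoints each published message lands on and whether it is replicated.

    def queued_message_count(published_messages):
        # `published_messages` is a list of (endpoint_count, replicated) tuples,
        # one per published message. Each endpoint the message is queued on
        # counts once, and a replicated message adds one more for the copy in
        # the Replication queue.
        total = 0
        for endpoint_count, replicated in published_messages:
            total += endpoint_count + (1 if replicated else 0)
        return total

    # One message fanned out to 3 endpoints with Replication enabled, plus
    # one message to a single endpoint without Replication: 4 + 1 = 5.
    print(queued_message_count([(3, True), (1, False)]))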
http://docs.solace.com/Features/G-Msg-Queueing-Limits.htm
2017-06-22T18:32:27
CC-MAIN-2017-26
1498128319688.9
[]
docs.solace.com
Reset Activation Code
If you uninstall and then reinstall Nessus, you will need to reset your activation code.
- Navigate and log in to the Tenable Support Portal.
- In the Main Menu of the support portal, select Activation Codes.
- Next to your product name, select the x button to expand the product details.
- Under the Reset column, select the X button.
Once reset, your activation code is available for use. Note: Reset codes have a 10-day waiting period before you can reset your code again.
https://docs.tenable.com/nessus/6_10/Content/ResetActivationCode.htm
2017-06-22T18:29:58
CC-MAIN-2017-26
1498128319688.9
[]
docs.tenable.com
Anti-virus and nessus-service.exe.
https://docs.tenable.com/nessus/6_5/Content/AntiVirusSoftware.htm
2017-06-22T18:37:42
CC-MAIN-2017-26
1498128319688.9
[]
docs.tenable.com
Using MQTT
The Message Queuing Telemetry Transport (MQTT) protocol is a lightweight, open protocol that can be used for Machine to Machine (M2M) and Internet of Things (IoT) use cases. The Solace Messaging Platform, as of SolOS version 7.1.1, supports this OASIS (Organization for the Advancement of Structured Information Standards) standard protocol (version 3.1.1). This support allows client applications to inter-operate with the Solace Messaging Platform without relying on Solace-specific APIs or custom software development.
Like the Solace Message Format (SMF) protocol used by the Solace Messaging Platform, MQTT is a topic-based client/server publish/subscribe protocol that enables client applications to exchange messages without direct knowledge of how those messages will be delivered to other interested clients. Clients can publish messages to defined topics, and they may use topic filters (that is, topic subscriptions) to receive messages that are published to those matching topics. All topics are UTF-8 strings, which consist of one or more topic levels that are separated by forward slash “/” characters. Separating topic levels using slashes creates a hierarchy of information for organizing topics.
Note: The SMF and MQTT protocols use similar topic syntax; therefore, SMF and MQTT messaging applications can be used together in the same messaging network. That is, when an intersecting topic hierarchy is used, MQTT clients can receive messages published by non-MQTT clients (for example, SMF or REST clients), and non-MQTT clients can receive messages published by MQTT clients. However, there are some differences in topic syntax and usage between SMF and MQTT that must be considered before deploying MQTT applications in a network alongside SMF applications. For more information, see Topic Support and Syntax.
MQTT clients connect to an MQTT server (in this case, a Solace router), which is responsible for maintaining their subscription sets and directing published messages to clients that have matching topic filters. (Note that topic filters function the same as topic subscriptions in the Solace Messaging Platform, so for consistency, they will be referred to as topic subscriptions throughout this document.) An MQTT client connection to a specific Message VPN on a Solace router is made through a dedicated MQTT port configured for that Message VPN. The MQTT client connection also requires an MQTT session on the server. An MQTT session is represented as a managed object in SolOS, and it contains the session state for the MQTT client (that is, its subscriptions and messages).
Note:
- The Solace implementation of MQTT complies with the 3.1.1 MQTT protocol specification. Solace provides an annotated version of the specification that highlights any deviations, limitations, or choices made in the “SHOULD” and “MAY” clauses of the protocol specification for the Solace implementation. It is strongly recommended that network architects and programmers review this document.
- MQTT is not supported on the Solace 3230 appliances or Solace appliances that use a Network Acceleration Blade (NAB) model NAB-0401EM.
MQTT Sessions
An MQTT session object is a virtual representation of an MQTT client connection that exists as a managed object on a Solace router. An MQTT session holds the state of an MQTT client (that is, it is used to contain a client’s QoS 0 and QoS 1 subscription sets and any undelivered QoS 1 messages).
An MQTT session can be created: - Automatically when a client successfully connects to the Solace router. - By an administrator using the Solace CLI, SEMP, or SolAdmin. Although the MQTT specification does not require administrator‑provisioned MQTT sessions to be supported, they are allowed, and they provide more flexibility for application development. Note: MQTT sessions should not be confused with Solace sessions (that is, non‑MQTT sessions). MQTT Session Persistence When a connecting client provides a CLEAN=1 flag for the MQTT session, the client’s MQTT session and any related information (subscription sets and undelivered messages) are not persisted after the client disconnects. That is, the flag ensures that the session “cleans up” after itself and no information is stored on the router after the client disconnects. This is true even if the session was administratively provisioned (through CLI or SEMP). If the client provides a CLEAN=0 flag, the MQTT session is persisted on the router, which means that the client’s client ID, topic subscriptions, QoS levels, and undelivered messages are all maintained (that is, they are not cleaned up). The client may then reconnect to the persisted session later. An MQTT session can be deleted: - automatically when a client that created the MQTT session with a CLEAN=1 flag disconnects - when a client creates a new MQTT session with a CLEAN=1 flag and the same session identifier as the previous session - manually by administrators through the Solace CLI, SEMP, or SolAdmin Quality of Service Levels MQTT publish messages and topic subscriptions are assigned separate quality of service (QoS) levels, which determine the level of guarantee applied to message delivery. The MQTT QoS levels are: - QoS 0—At most once delivery. No response is required by the receiver (whether it is a Solace router or a subscriber), and no retry attempts are made if the message is not delivered. - QoS 1—At least once delivery. This level ensures that the message is delivered at least once. In a QoS 1 exchange, the receiver (whether it is a Solace router or a subscriber) is required to send an acknowledgment of the message to indicate that it has been received. - A Solace router requires a Guaranteed messaging configuration to provide QoS 1 service. If a router is not configured for Guaranteed messaging, all QoS1 and QoS 2 MQTT topic subscriptions will be downgraded to QoS 0. Additionally, QoS 1 and QoS 2 messages will be accepted by the router, but they will be delivered as QoS 0 messages. For more information on how to configure a Solace router for Guaranteed Messaging, see Managing Guaranteed Messaging. - QoS 2—Exactly once delivery. Solace converts published QoS 2 messages to QoS 1. The Solace equivalent to QoS 0 is a message delivery mode of Direct. The Solace equivalent to QoS 1 is a message delivery mode of Guaranteed (that is, non‑persistent or persistent). For more information, see Working With Guaranteed Messages. The following figure shows how different QoS levels may be applied to the same message. From a publisher to a Solace router, an MQTT publish message uses the QoS level assigned by the message publisher. From the router to the message subscriber, an MQTT publish message uses the QoS level set by the matching topic subscription for the consuming client. 
QoS Levels Applied During Message Delivery
When there is a topic subscription match for an MQTT publish message, a consuming client will receive the message with the lowest possible QoS level for its subscription. For example, if a message is published with a QoS of 1, and it matches the client’s QoS 0 subscription, it will be delivered with a QoS of 0. However, because MQTT uses a one-to-many pub/sub messaging model, that message could also be delivered to a matching QoS 1 topic subscription with a QoS of 1.
Queues for QoS 1 Subscriptions
An MQTT session that has QoS 1 topic subscriptions must be associated with a durable queue to hold those subscriptions and any undelivered messages that are attracted by those QoS 1 subscriptions.
When an MQTT session is automatically created by a client, a queue is created when the first QoS 1 subscription is added. If the MQTT session was created with CLEAN=1 CONNECT, the queue is deleted along with the MQTT session when the client disconnects, but if the MQTT session was created with CLEAN=0 CONNECT, the queue will remain after the client disconnects, and will only be deleted by administrative action.
When an MQTT session is administratively created, a queue is not created automatically for the MQTT session. Queues for administratively-created MQTT sessions must be manually created and deleted.
The configuration parameters given to an MQTT queue depend on whether it was created automatically on a client‑created MQTT session, or if it was created by an administrative action:
- client‑created MQTT sessions—the queue uses the same configuration values used for standard dynamic client‑created queues. (For information on the default values that are used, see Configuring Default Values for Client-Created Endpoints.)
- Unlike standard dynamically-created endpoints, an MQTT client cannot pass in custom endpoint properties and provision flags.
- If the copy-from-on-create command is used to specify a CLI‑provisioned queue or topic endpoint with custom values, those values will also be applied to MQTT queues for client‑created MQTT sessions.
- administratively created MQTT sessions—the queue uses the default configuration values for queues provisioned through the Solace CLI, with the exception that it is enabled (no shutdown) when created. For information on the default values used for queues provisioned by an administrator through the Solace CLI, see Configuring Queues.
MQTT Topics
As a publish/subscribe messaging protocol, MQTT relies on hierarchical topics. Clients can publish messages on specific topics, and clients can receive published messages that match their current topic subscriptions. MQTT topics are UTF-8 strings, which consist of one or more topic levels that are separated by forward slash “/” characters. Separating topic levels using slashes creates a hierarchy of information for organizing topics. Connected clients can publish MQTT messages to defined topics, and they may use topic filters (that is, topic subscriptions) to receive messages that are published to those matching topics.
Payload Handling When Message Types Change
This section discusses how the payloads of MQTT publish messages received by the Solace router (that is, ingress messages) are handled when they are subsequently delivered as different message types (either as SMF or REST messages) to non‑MQTT clients that have matching topic subscriptions.
It also discusses how the message payloads of received SMF or REST messages are handled when they are subsequently delivered as MQTT publish messages to consuming MQTT clients with matching topic subscriptions.
MQTT Ingress, SMF Egress
When a Solace router receives an MQTT publish message from a client, the received message’s payload, which is a sequence of bytes, is encapsulated in the resulting egress SMF message as a binary attachment.
MQTT Ingress, REST Egress
When a Solace router receives an MQTT publish message from a client, the received message’s payload is delivered with a Content-Type of application/octet‑stream.
SMF Ingress, MQTT Egress
An SMF message may contain binary data and XML data payloads, and in some cases, user-defined and Solace-defined message header fields. When a Solace router receives an SMF publish message from a client for which there is a matching MQTT topic subscription, the payload of the message is processed before it is sent to the MQTT client as an MQTT publish message. The SMF message’s XML message payload or binary attachment can be used as the payload for the MQTT publish message, but not both.
- If the SMF message contains only a binary attachment, the following occurs:
- If there is no binary metadata, then the binary attachment is copied into the payload field of the MQTT publish message.
- If there is binary metadata, which describes the format of the binary attachment (that is, text or binary data), the data of the specified type is copied into the payload field of the MQTT publish message.
- If the SMF message contains only an XML message payload, it will be copied into the payload field of the MQTT publish message.
- If the SMF message contains both a binary attachment and an XML message payload, neither is sent—regardless of their content.
- Custom user properties and userdata properties are not copied to the MQTT publish message.
Note: Solace enterprise messaging APIs support the ability to carry structured data types (SDTs), such as maps and streams, in the binary attachment of the message or as user-defined message header fields. However, these SDTs cannot be used by MQTT clients. Therefore, they are not included in the MQTT publish message.
Note: If the original SMF message contained a payload, but the process described above results in an MQTT publish message with no payload, the MQTT publish message is still delivered to the MQTT client even though it contains no payload. In this case, the message is noted as unformattable in the MQTT Session statistics.
REST Ingress, MQTT Egress
When a Solace router receives a REST message, its payload, which consists of the data transmitted after the message’s HTTP headers, is delivered in its entirety in an MQTT publish message to an MQTT client. The particular Content-Type of the published message is not significant.
Will Messages
When connecting to a Solace router, MQTT clients may specify a “last will and testament” (or simply “will”) message. A will message is stored with the MQTT session information that is allocated for the client, and it will be sent if an MQTT client is disconnected from the Solace router unexpectedly. A will message consists of a topic, QoS level, and message body. Will messages allow other interested MQTT clients to be notified about unexpected connection loss.
Note:
- A Solace router will not broadcast will messages when an MQTT session is terminated due to a router restart, high availability (HA) router failover, Message VPN shutdown, or Guaranteed messaging shutdown.
- Retained will messages are not supported by the Solace MQTT implementation. If included in a message, the “Will Retain” flag is ignored.
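As a rough, non-authoritative sketch of how a client might exercise the features described above (a persistent session, a QoS 1 subscription, and a will message), here is an example using the Eclipse Paho Python client (paho-mqtt). The host, port, topics, and credentials are placeholders, nothing here is Solace-specific API, and constructor details may vary with the client-library version.

    import paho.mqtt.client as mqtt

    # CLEAN=0: keep the MQTT session (subscriptions and undelivered QoS 1
    # messages) on the broker after the client disconnects.
    client = mqtt.Client(client_id="sensor-17", clean_session=False)

    # Will message: published by the broker if this client disconnects unexpectedly.
    client.will_set("devices/sensor-17/status", payload="offline", qos=1)

    def on_message(client, userdata, msg):
        print(msg.topic, msg.qos, msg.payload)

    client.on_message = on_message
    client.username_pw_set("username", "password")
    client.connect("mqtt.example.com", 1883)

    # QoS 1 subscription: requires Guaranteed messaging on the router,
    # otherwise it is downgraded to QoS 0 as described above.
    client.subscribe("devices/+/telemetry", qos=1)

    # QoS 1 publish (at-least-once delivery).
    client.publish("devices/sensor-17/telemetry", payload="42", qos=1)

    client.loop_forever()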
http://docs.solace.com/Features/Using-MQTT.htm
2017-06-22T18:23:22
CC-MAIN-2017-26
1498128319688.9
[array(['images/MQTT_QoS_Levels.png', 'QoS Levels Applied During Message Delivery'], dtype=object)]
docs.solace.com
REST Overview
Representational State Transfer (REST) is a lightweight way of designing network applications. It enables clients and network routers to communicate using standard HTTP methods like POST. Systems that implement REST are referred to as RESTful systems. RESTful systems use HTTP methods in a way that is consistent with the HTTP protocol definition (RFC 2616). REST is not a defined standard but rather a set of guidelines.
REST Producers and Consumers
There are two main actors in the Solace REST protocol:
- REST producers—send messages to the Solace router
- REST consumers—receive messages from the Solace router
The router authenticates REST producers when they connect using a basic or client certificate authentication scheme. The Solace REST implementation does not support Kerberos authentication. Clients using a basic authentication scheme can connect as anonymous publishers by not specifying a username or password. When a connecting REST producer has been authenticated, the privileges for that producer are determined by standard Solace configuration parameters such as client profiles and ACL profiles. A REST producer that has established a client connection to the router is represented as a normal client and will be visible using Solace CLI commands such as the show client User EXEC command. For information on how Solace routers authenticate client entities and the authorization process that is used to provide them with service, refer to Client Authentication/Authorization.
REST Consumers
To consume messages from the Solace messaging platform, REST consumers require REST Delivery Point (RDP) objects. RDPs are objects that are provisioned within Message VPNs on the Solace router. Physical REST applications that connect to the router can bind to an RDP to consume REST messages from it. RDPs consist of the following components:
REST Consumers
REST Consumers are virtual objects within a Message VPN that represent physical consumers of REST messages (that is, client applications). Each physical consumer that an RDP services is identified by its IP address or hostname and TCP port number. An RDP can service multiple REST consumers. When an RDP services multiple REST Consumers, the Solace router performs a load balancing function by choosing which REST Consumer to deliver each message to. A Solace router can be configured to use authentication (basic, client certificate, or none) when it initiates connections to REST Consumers.
Note: Client certificate authentication is only supported for outgoing connections on Solace appliances. It is not supported on Solace VMRs.
Client Profile
An RDP must be assigned an existing client profile in the same Message VPN as the RDP. The RDP uses the TCP parameters and egress queue properties of the bound client profile.
Queue Bindings
An RDP also contains queue bindings, which are references to durable message endpoints on the Solace router. RDPs can only be bound to durable queues that exist within the same Message VPN as the RDP. When an RDP is bound to a queue, messages delivered to that queue will be attracted by the RDP and then scheduled for delivery to one of the REST consumers configured for the RDP. If a queue is deleted, any queue bindings referencing the deleted queue remain intact but those queue bindings will not be operational. To be brought into service, an RDP must have at least one operational queue binding. An RDP whose queue bindings are all not operational cannot be brought into service.
REST Message Structure REST messages sent to and from a Solace router use the HTTP POST method. The POST structure differs depending on whether the message is a request or a response, and whether the message is being sent to or from a router. The Solace REST interface uses both standard HTTP headers (such as Content-Length) and Solace-specific HTTP headers (such as Solace-Client-Name). HTTP headers that are unique to the Solace REST interface and used internally by the router are prefixed by Solace-. To avoid confusion, applications should not use HTTP headers that begin with Solace-. For a complete list of the HTTP headers that are specific to the Solace platform, refer to Solace REST Usage of HTTP Headers. The body of a REST message consists of the data transmitted after the message’s HTTP headers. When a REST message reaches a router, this data is encapsulated in a Solace message payload, which may take a number of formats, including text or binary. Messages may optionally omit the message body. REST Message Structure Examples The following are some possible configurations of REST messages based on origin and message type. Note: There is no requirement for communication between a publisher and a subscriber to be exclusively REST or exclusively non-REST. For example, a REST client can receive a REST message and respond with a Solace Message Format (SMF) message. The Solace router will deliver the message to its destination regardless of the message format. REST Published Message to Solace Router A REST message sent from a publisher to a Solace router consists of a POST request, HTTP headers, and an HTTP message body. The router uses the POST request to determine where to route the message, and the HTTP headers specify message properties that may determine how the message is handled. The body of the POST request is included as the message payload. REST Message Acknowledgment from Solace Router When a Solace router successfully receives a REST message, it sends back an acknowledgment (“ack”) in the form of a POST response. For Guaranteed messages, the router delays sending the POST response until it successfully spools the message. An ack consists of a response code (typically 200 OK) and HTTP headers. Because the response code is sufficient to acknowledge receipt of the original message, the returned message has an empty message body. REST Response from Consumer If a request/reply exchange pattern is used, when a consumer responds to a REST request message, it reports its own response code ( 200 OK), new HTTP headers, and a message body. If the response from the consumer is a message, the message body will contain the message payload information that is to be returned to the original publisher. Message Exchange Patterns This section describes the common message exchange patterns that may take place when using REST messaging with the Solace messaging platform. REST Producer One-Way POST to Solace Router The figure immediately below shows an exchange pattern where a REST producer sends a message to a Solace router. REST Producer One-Way POST to Solace Router In this scenario, the REST producer sends a message as a POST request to the router by directing the request to a URL that consists of the IP address or hostname of the router, the endpoint type (queue or topic), and a queue or topic name. The router behaves differently based on whether the message’s delivery mode is identified as direct or persistent. 
If the message’s Solace-Delivery-Mode is direct, upon receipt of the message the router sends a POST response of 200 OK to the REST producer, then delivers the message. The POST does not indicate that the message has been delivered to its consumers, only that the router has received the message. If the message’s Solace-Delivery-Mode is persistent or non-persistent, when the router receives the message it stores it in the message spool, then sends a POST response of 200 OK to the REST producer. Again, the POST response to the REST producer only indicates that the router has received the message and does not indicate that the message has been delivered. The router then attempts to deliver the message. A REST producer’s message to a router can fail for a number of reasons. If there is an error authenticating the REST Producer with the router or if there is an error in the message format, the router will return a POST response which includes details of the error. Solace Router One-Way POST to REST Consumer The figure immediately below shows a scenario involving a Solace router delivering a message to a REST Consumer. Solace Router One-Way POST to REST Consumer In this scenario, the router has received a message, and the router determines that the message will be received by a REST Consumer. If the message type is persistent, the router stores the message and retains it until delivery is confirmed. The message is delivered to the REST Consumer by way of a POST request to a fixed URL contained in the message. Upon receiving the message, the REST Consumer acknowledges the message with a POST response of 200 OK to the router. If the message type was persistent, the router removes the message from the message spool when a 200 OK response is received. Because this is a one-way message delivery, any information contained in the body of the response from the REST Consumer is discarded. If the router receives any status code in response other than 200 OK, it will continue trying to deliver the message until the delivery is a success or until some other condition is met (for example, the max retry parameter is exceeded). REST Producer Request/Reply to Solace Router The figure immediately below shows how a REST Producer can send a message to any message consumer and specify that a reply message is desired. REST Producer Request/Reply to Solace Router The REST Producer sends an initial REST message that includes the header Solace-Reply-Wait-Time-In-ms. This header indicates that a reply message is expected. If the message’s Solace-Delivery-Mode is direct, when the Solace router receives the message, it encodes the REST message as an SMF message and sends it, with its specified delivery mode, to a consumer. If the message’s Solace-Delivery-Mode is persistent or non-persistent, the router spools the message before sending it. The consumer replies with its own message, which includes a reply to the original message. Upon receiving this reply, the router constructs a REST POST response from the reply and delivers it to the original producer with the status code 200 OK and the reply message’s contents as the HTTP message payload. In this scenario, delivery failure can occur due to a number of reasons, including the wait time being exceeded or the reply message being in an unsupported format. Solace Router Request/Reply to REST Consumer The figure immediately below shows how request/reply messages may also originate from a non-REST source. 
Solace Router Request/Reply to REST Consumer This scenario is similar to a one-way POST from a Solace router to a REST Consumer. In this case, however, the POST from the router that the REST Consumer receives includes a Solace-Reply-Wait-Time-In-ms header. This indicates to the REST Consumer that the router expects a reply to the message. The REST Consumer encodes its reply in the HTTP message body of the POST response that it returns to the router. The status code of the response message is 200 OK if the original message was received correctly. A 200 OK response indicates that the message was received even if, for example, there are errors in processing the message. If required, the REST Consumer can encode details about any errors in the body of a 200 OK response. REST Producer Request with Asynchronous Reply-to Destination The figure immediately below shows how an application can send messages and receive asynchronous replies using REST. REST Producer Request with Async Reply-to Destination In this scenario, an application acts as both a REST Producer and a REST Consumer (that is, it both sends and receives messages using HTTP POST). As a REST Producer, it sends the message as a POST to the router including the Solace-Reply-To-Destination header. The router reads the Reply-To information and destination from the POST. The router stores the message on the message spool if it is a persistent message. It then returns a 200 OK status response to the REST Producer before sending the message to the end consumer. When the end consumer receives the message, the message’s Reply-To information is extracted from the original POST. The consumer sends its reply back to the router, where the reply is spooled again. The router delivers the reply message with a POST to the original application, whose connection parameters are stored as a REST Consumer in an RDP on the router. When the application receives the final reply message, it sends a 200 OK back to the router to indicate that the sequence has completed.
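To make the producer-side pattern concrete, here is a minimal, hedged Python sketch of publishing a message to a Solace router over REST using the requests library. The host, port, queue name, and the exact URL path format are placeholders/assumptions based on the description above (router address plus endpoint type plus endpoint name), and the header shown is the Solace-specific one discussed in this document.

    import requests

    # Placeholder router address and queue name.
    url = "http://router.example.com:9000/QUEUE/orders"

    response = requests.post(
        url,
        data=b'{"orderId": 1234}',
        headers={
            "Content-Type": "application/json",
            # Persistent delivery: the router spools the message before
            # acknowledging with 200 OK, as described above.
            "Solace-Delivery-Mode": "persistent",
        },
        timeout=10,
    )

    # A 200 OK response only means the router received (and, for persistent
    # messages, spooled) the message; it says nothing about end-to-end delivery.
    print(response.status_code)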
http://docs.solace.com/Features/Using-REST.htm
2017-06-22T18:23:35
CC-MAIN-2017-26
1498128319688.9
[array(['images/REST_Producer_One_Way_POST_to_Solace.png', None], dtype=object) array(['images/Solace_One_Way_POST_to_REST_Consumer.png', None], dtype=object) array(['images/REST_Producer_Request_Reply_to_Solace.png', None], dtype=object) array(['images/Solace_Request_Reply_to_Consumer.png', None], dtype=object) array(['images/REST_Producer_Async_Reply_to_Destination_4.png', None], dtype=object) ]
docs.solace.com
Payment methods available on Sellfy Payment methods available on Sellfy A cool perk of using Sellfy regardless of your plan is that you will receive the payments immediately after a purchase is made, which means no waiting for monthly payouts! As a buyer you’ll be able to choose between 2 payment options, which will show up on the ‘Checkout’ screen: - PayPal - Credit card Note: both options can be used simultaneously. The seller chooses which options are available. As a product seller, the payment options depend on which plan you are using. Also, the fees charged per transaction differ depending on the plan you use - here’s an overlook. If you are using the Free plan you can receive payments via PayPal (). If you are using the Professional plan you will be able to offer your customers two payment options: - PayPal - plus, you get additional currencies and express checkout with the PRO plan. - Stripe (). Enabling Stripe payment options will allow your buyers to pay directly via credit card. You can set up the payment options in Store Settings —> Payment Options If you’re experiencing some issues with PayPal payments, please make sure that: - Your PayPal account is verified and your credit card is associated with the account - Your PayPal account is linked to your Sellfy account - The country where you are located allows PayPal transactions both for selling and buying. (If you’re a seller located in India you’ll be able to sell on Sellfy only if you use the Professional plan.) If you’re having trouble setting up PayPal - here's how to solve most of them.
http://docs.sellfy.com/article/42-payment-methods-available-on-sellfy
2017-06-22T18:37:20
CC-MAIN-2017-26
1498128319688.9
[]
docs.sellfy.com
Paymetheus Setup Guide
The Windows Installer (.msi file) is located here. It will install Paymetheus to your computer’s Program Files folder. Installation is pretty straightforward, but instructions are provided below:
Download the correct file: For 32-bit computers, download the decred_1.0.5-release_x86.msi file. For 64-bit computers, download the decred_1.0.5-release_x64.msi file.
Navigate to the download location and double click the .msi file.
Follow the installation steps. Within this process you’ll be prompted to accept an End-User License Agreement.
After setup, the features should be installed to your ..\Program Files\Decred\ folder and accessible through the Start Menu (look for Decred in the Program list).
Start Paymetheus
We’re going to use a local one that Paymetheus has already started, so just press Continue. The first time Paymetheus starts, it will download the blockchain in the background. This can take up to an hour.
Create or Restore Wallet
Continue to Using Paymetheus
https://docs.decred.org/getting-started/user-guides/paymetheus/
2017-06-22T18:21:43
CC-MAIN-2017-26
1498128319688.9
[array(['../../../img/Paymetheus-dcrd-login.png', 'Paymetheus connection screen'], dtype=object) array(['../../../img/Paymetheus-seed-window.png', 'Paymetheus wallet creation screen'], dtype=object)]
docs.decred.org
JDocument::setModifiedDate The "API17" namespace is an archived namespace. This page contains information for a Joomla! version which is no longer supported. It exists only as a historical reference; it will not be improved and its content may be incomplete and/or contain broken links. JDocument::setModifiedDate Description Sets the document modified date. public function setModifiedDate ($date) - Returns void - Defined on line 615 of libraries/joomla/document/document.php See also JDocument::setModifiedDate source code on BitBucket Class JDocument Subpackage Document - Other versions of JDocument::setModifiedDate User contributed notes Code Examples
https://docs.joomla.org/API17:JDocument::setModifiedDate
2017-06-22T18:40:40
CC-MAIN-2017-26
1498128319688.9
[]
docs.joomla.org
You build (spawn) units called creeps the same way as in other strategy games, but with one exception: you construct the "body" of a new creep out of 8 available body part types, the resulting body being a sequence of up to 50 parts. This allows thousands of creep types and roles: ordinary workers, huge construction machines able to build or repair a structure within a few cycles, weaselly couriers, heavy capacious trucks, fast and cheap scouts, well-equipped fighters with regeneration ability, etc. It may even be creeps resembling towers or fortresses for mining, defending, or seizing, with very little speed (a couple of tiles per minute) but monstrous characteristics. Everything is up to you, your tactics and imagination. However, remember that any creep has a life cycle of 1500 game ticks (approx. 30-60 minutes depending on the tick duration). Then it "ages" and dies. So you not only need to control existing creeps but set up manufacturing and automatic control of superseding generations of your creeps as well.
A standard spawn (structure) can only spawn regular creeps with a total cost of up to 300 energy units. Spawning more expensive creeps requires a spawn extension in the room. Each extension can contain up to 50 extra energy units that may be spent on creation of a creep. The exact location of extensions within a room does not matter, but they should be in the same room with the spawn (one extension can be used by several spawns). All the necessary energy should be in the spawn and extensions at the beginning of the creep creation. The number of extensions available for construction depends on the Room Controller in the room. Read more in Global control.
Creeps Skills
Possible part types of a creep body:
WORK – ability to harvest energy, construct and repair structures, upgrade controllers.
MOVE – ability to move.
CARRY – ability to transfer energy.
ATTACK – ability of short-range attack.
RANGED_ATTACK – ability of ranged attack.
HEAL – ability to heal others.
CLAIM – ability to claim territory control.
TOUGH – "empty" part with the sole purpose of defense.
The effectiveness of an ability depends on the number of parts of a corresponding type. For example, a worker creep with 3 parts of the WORK type will work 3 times as effectively as a creep with only 1 WORK part. The same applies to all the other types and actions.
Movement
Each body part has its own physical weight: the more parts a creep bears, the more difficult it is for it to move. Each body part (except MOVE) generates fatigue points when the creep moves: 1 point per body part on roads, 2 on plain land, 10 on swamp. Each MOVE body part decreases fatigue points by 2 per tick. The creep cannot move when its fatigue is greater than zero.
To maintain the maximum movement speed of 1 square per tick, a creep needs to have as many MOVE parts as all the other parts of its body combined. In other words, one MOVE part can move one other part one square per tick. If a creep has fewer MOVE parts, its movement will be proportionally slowed, which is seen by the increasing fatigue. It's worth noting that empty CARRY parts don't generate fatigue.
Samples (a small sketch of this calculation follows below):
- Creep [CARRY, WORK, MOVE] will move 1 square per tick if it does not bear energy, and 1 square per 2 ticks if loaded.
- Creep [TOUGH, ATTACK, ATTACK, MOVE, MOVE, MOVE] will move at the maximum speed of 1 square per tick.
- Creep [TOUGH, ATTACK, ATTACK, MOVE, MOVE] will move 1 square per 2 ticks because of rounding up.
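The movement rule above can be captured in a tiny model. The following Python sketch is not Screeps API code; it just reproduces the arithmetic described in this section (non-MOVE parts and loaded CARRY parts generate 1/2/10 fatigue per move depending on terrain, and each MOVE part removes 2 fatigue per tick), and it reproduces the three samples.

    from math import ceil

    TERRAIN_FATIGUE = {"road": 1, "plain": 2, "swamp": 10}

    def ticks_per_square(body, terrain="plain", carrying=False):
        # Ticks needed to move one square for a creep with the given body.
        moves = body.count("MOVE")
        # Empty CARRY parts generate no fatigue; loaded ones do.
        heavy = [p for p in body
                 if p != "MOVE" and not (p == "CARRY" and not carrying)]
        fatigue = len(heavy) * TERRAIN_FATIGUE[terrain]
        if fatigue == 0:
            return 1
        if moves == 0:
            return float("inf")  # cannot move at all
        return ceil(fatigue / (2 * moves))

    print(ticks_per_square(["CARRY", "WORK", "MOVE"]))                 # 1 (empty)
    print(ticks_per_square(["CARRY", "WORK", "MOVE"], carrying=True))  # 2 (loaded)
    print(ticks_per_square(["TOUGH", "ATTACK", "ATTACK", "MOVE", "MOVE", "MOVE"]))  # 1
    print(ticks_per_square(["TOUGH", "ATTACK", "ATTACK", "MOVE", "MOVE"]))          # 2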
Damage
The total number of hits a creep has depends on the number of its body parts – 100 hits per part. The order in which the parts were specified during the spawning of a creep also has a bearing. Under attack, the first parts to take hits are those specified first. Full damage to a part leads to its complete disabling – the creep can no longer perform the corresponding function.
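As another plain-Python model (again, not Screeps API), the damage rule can be sketched like this: each part contributes 100 hits, and incoming damage disables parts in the order they were specified at spawn time.

    def apply_damage(body, damage, hits_per_part=100):
        # Return the body parts still functional after taking `damage` hits.
        remaining = []
        for part in body:
            if damage >= hits_per_part:
                damage -= hits_per_part  # this part is fully destroyed
            else:
                remaining.append(part)   # a partially damaged part still works
                damage = 0
        return remaining

    # A TOUGH front soaks the first 100 hits of a 250-hit attack; the first
    # ATTACK part is destroyed next, leaving one ATTACK and the MOVE parts.
    print(apply_damage(["TOUGH", "ATTACK", "ATTACK", "MOVE", "MOVE"], 250))
    # ['ATTACK', 'MOVE', 'MOVE']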
http://docs.screeps.com/creeps.html
2017-06-22T18:16:52
CC-MAIN-2017-26
1498128319688.9
[array(['img/bodyparts.png', None], dtype=object)]
docs.screeps.com
Update installation profile Using the existing installer, it is now possible to update a "live" Providence configuration with additions or changes to the profile configuration using a "mini" profile. In previous versions of the software, if you reinstalled a profile, the instance of Providence would be entirely overwritten with the new profile. As of version 1.5, however, you may simply update an existing configuration with a partial, or "mini" profile, incorporating the needed additions. Currently, this function is only accessible via the command line, in CaUtils. update-installation-profile - Updates the installation profile to match a supplied profile name. This function only creates new values and is useful if you want to append changes from one profile onto another. Your new profile must exist in a directory that contains the profile.xsd schema and must validate against that schema in order for the update to apply successfully. The directory must also contain base.xml, or whichever base profile you are using. Options for update-installation-profile
http://docs.collectiveaccess.org/wiki/Update_installation_profile
2017-06-22T18:29:02
CC-MAIN-2017-26
1498128319688.9
[]
docs.collectiveaccess.org
Feedback We're here to help you. But we can't if you don't tell us what you need. The best way to do that is file a ticket. Creating a ticket helps us keep track of your needs and any outstanding requests. It also helps other people who might have the same problem or question that you have, because they can follow the result of your ticket. Please note that all information in a ticket is visible to the public. If for some reason you can't file a ticket, you can send email to [email protected].
https://docs.astro.columbia.edu/wiki/Feedback
2017-06-22T18:26:57
CC-MAIN-2017-26
1498128319688.9
[]
docs.astro.columbia.edu