content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
In this section, we will show you how to use Mecanim. To begin with, please download and install “Raw Mocap Data for Mecanim” and “Unity-chan” from the Unity Asset Store. For the Unity 5 users, click accept when you try to import the “Unity-chan” package to Unity. Then, drag the unitychan which is located in UnityChan> Prefabs into the Hierarchy. Now select unitychan and remove her Animator component as we will set this up for ourselves later. After removing the Animator component, please remove the scripts as well. Then, create a “myAssets” folder with “myAnimations”, “myAnimators” and “myPrefabs” folders in it for organization.
http://docs.styly.cc/mecanim/getting-started-with-mecanim/
2017-12-11T03:58:27
CC-MAIN-2017-51
1512948512121.15
[array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) ]
docs.styly.cc
Django Projects¶ At its core, a Django project requires nothing more than a settings file. In practice, almost every project consists of the following items: Settings¶ The settings module is the only true requirement for a Django project. Typically, it lives in the root of your project as settings.py. Handling Settings for Multiple Environments¶ Django’s startproject command gives you a single settings.py file. If you’re new to Django, stick with the single file while you learn the ropes. As you start to deploy production sites and work with more than one developer, you’ll realize the benefit in maintaining multiple settings files. For example, you probably want to run with DEBUG on locally, but not in production. There are numerous ways to handle multiple settings. Whatever solution you choose, it should meet the following requirements: - All the important settings files are version controlled. If the settings change on your production site, you’ll want to know who made the changes and when they were made. - All settings inherit from a common base. If you want to add django-debug-toolbarto your INSTALLED_APPS, you should be able to do it without redefining all your INSTALLED_APPS. If you don’t want to think about it, simply use our Django project template when starting new projects. It is ready to support multiple projects out of the gate: django-admin.py startproject --template= -e py,rst,example,gitignore my_project_name See also - Django’s Split Settings Wiki - Examples of handling multiple settings Handling File Paths¶ One function of your settings is to tell Django where to find things such as your static media and templates. Most likely they’ll already live inside your project. If so, let Python generate the absolute path names for you. This makes your project portable across different environments. import os DIRNAME = os.path.dirname(__file__) # ... STATIC_ROOT = os.path.join(DIRNAME, 'static') URLconf¶ By default, you’ll find your URLconf in the root of your project as urls.py. It defines how requests should be routed for your project. Keep it Simple¶ Your project URLconf should simply include URLconfs from your applications whenever possible. This keeps your application logic inside your application and your project simply serves as a pointer to it. See also - Django URL dispatcher documentation - Including other URLconfs Handling URLconfs for Multiple Environments¶ Just like your settings module, eventually, you’ll come across the need to run different URLconfs for different environments. You may want to use admin locally, but not once deployed. Django already provides an easy way for you to do this with the ROOT_URLCONF setting. This is basically the same scenario as having multiple settings. You can use the same solution here: myproject ... settings/ __init__.py base.py <-- shared by all environments def.py production.py urls/ __init__.py base.py <-- shared by all environments dev.py production.py ... See also Our django-layout template WSGI File¶ The WSGI file tells your WSGI server what it needs to do to serve your project on the web. Django’s default wsgi.py is sufficient for most applications. Local Applications¶ Local applications are Django applications that are domain-specific to your project. They typically live inside the project module and are so closely tied to your project, they would have little use outside of it. Local vs. Third Party¶ There are hundreds [1] of open source Django applications available. Before you reinvent the wheel, make sure somebody hasn’t already solved your problem by searching on Google or Django Packages. If you find something that will work do not put it your project code, instead add it to your pip requirements. The Namespace¶ How local applications should be imported into your project is a source of ongoing debate in the Django community [2]. Fortunately, with the release of Django 1.4, the default manage.py no longer changes the PYTHONPATH [3], making this much less of an issue. At Lincoln Loop, we put project applications inside the project namespace. This prevents polluting the global namespace and running into potential naming conflicts. Templates¶ Location¶ Templates typically live in one of two places, inside the application or at the root level of a project. We recommend keeping all your templates in the project template directory unless you plan on including your application in multiple projects (or developing it as a open source “reusable” application). In that case, it can be helpful to ship with a set of sample templates in the application, allowing it to work out-of-the-box or serving as an example for other developers. Naming¶ Django’s generic views provide an excellent pattern for naming templates. Following design patterns already found in Django can be helpful for a couple reasons. - They have been well thought out and tested. - It makes your code immediately understandable to new developers picking up your Django code. Most generic view templates are named in the format: [application]/[model]_[function].html For example, creating a template to list all of the contacts ( Contact model) in my address book ( address_book application), I would use the following template: address_book/contact_list.html Similarly, a detail view of a contact would use: address_book/contact_detail.html Not every template you create will map so closely to a single model, however. In those cases, you’re on your own for naming, but should still keep your templates in a directory with the same name as your application. When using inclusion tags or other other functionality to render partial templates, keep them in an includes directory inside the application template directory. For example, if I had an inclusion tag to render a contact form inside my address book application, I would create a template for it at: address_book/includes/contact_form.html There is no rule (anymore) that templates must have an html file extension. If you are rendering something else (plain text, JSON, XML, etc), your templates file extension should match that of the content you are generating. Static Media¶ Static media encompasses all the non-dynamic content needed for your website: CSS, images, JavaScript, Flash, etc. It comes in two flavors, user-generated content and the media needed to render your site. Best practice dictates that your static media lives inside your project and your version control system. Certainly, we don’t want stuff our users’ uploads to go to the same place. As such, we always use django.contrib.staticfiles [4]. In addition to some other slick features, staticfiles gives you a static template tag [5] that will properly locate your static files whether they are on your local computer or in a non-local storage on your production system. This leaves MEDIA_URL and MEDIA_ROOT to manage user generated content. See also On Static Media and Django
http://django-best-practices.readthedocs.io/en/latest/projects.html
2017-08-16T19:32:40
CC-MAIN-2017-34
1502886102393.60
[]
django-best-practices.readthedocs.io
Preside Drafts System As of Preside 10.7.0, the core versioning system also supports draft changes to records. The site tree will automatically have this feature activated whereas data manager objects will need the feature activated should you wish to use it. To activate drafts in an object managed in the Data manager, you must annotate your object with the datamanagerAllowDrafts attribute (it defaults to false). For example: /** * @labelfield name * @dataManagerGroup widget * @datamanagerAllowDrafts true */ component { property name="name" type="string" dbtype="varchar" required="true"; property name="job_title" type="string" dbtype="varchar"; property name="biography" type="string" dbtype="text"; property name="organisation" type="string" dbtype="varchar"; property name="image" relationship="many-to-one" relatedTo="asset" allowedtypes="image"; }
https://docs.presidecms.com/devguides/drafts.html
2017-08-16T19:28:50
CC-MAIN-2017-34
1502886102393.60
[]
docs.presidecms.com
This is the home of the Teiid 8.13 Document Space. Teiid is a data virtualization platform, where one can integrate data from multiple disparate systems like RDBMS, XML, EXCEL, SalesForce etc and provides a single view of the data. Teiid is typically hosted inside JBoss EAP platform, but can also be run in embedded mode. Teiid Designer project provides the tooling for Teiid. Before one can delve into installing of Teiid and Teiid Designer, it is very important to learn few basic constructs of Teiid, like what is VDB? what is Model? etc. For that please read the short introduction here.
https://docs.jboss.org/author/display/teiid813final/Home
2017-08-16T19:39:18
CC-MAIN-2017-34
1502886102393.60
[]
docs.jboss.org
What’s New in NSClient++ 0.4.2# The main goal of the 0.4.2 update is to create a new modern check subsystem. This means all checks will get: - help command line options to get help - proper documentation - the same or similar syntax - “real-time” options in many instances Documentation# Completely new documentation tool chain (your reading the result). New windows subsystem# The windows subsystem in < 0.4.2 is optimized for Windows NT and really needs an overhaul. - Support for remote checking of windows machines (i.e. allow NSClient++ act as a broker and check remote machines in an agent-less fashion) - Support for Modern Windows (vista and beyond) - Drop support for “Legacy Windows” (will still be available as-is in but not updated/get extended support) - Support real-time monitoring in more instances New Syntax# All queries has inherited the new syntax from “event-log checks” this means you can now check a lot of nifty stuff which was previously impossible. It also makes things much simpler as all the command now has the same syntax where previously most command has slight different dialects. A simple example of migrating a command: CheckCPU MaxWarn=80% MaxCrit=90% check_cpu "warn=load > 80%" "crit=load > 90%" The first thing to realize is that the old commands still work as they will be migrated automatically for you. Also all new commands have _ instead of CamelCase of check_cpu is the new version of CheckCPU. Migrating the thresholds is pretty simple as well as most old thresholds are of the form Max/Min crit this or that. and then you just replace them as above. Sensible defaults# Another change is that all new commands now have a sensible default. For instance check_cpu will work without arguments. check_cpu
https://docs.nsclient.org/about/whatsnew/0.4.2/
2017-08-16T19:48:09
CC-MAIN-2017-34
1502886102393.60
[]
docs.nsclient.org
indexes.conf The following are the spec and example files for indexes.conf. indexes.conf.spec # Version 6.6.2 # #. sync = <nonnegative integer> * The index processor syncs events every <integer> number of events. * Set to 0 to disable. * Highest legal value is 32767 * Defaults to 0. defaultDatabase = <index name> * If no index is specified during search, Splunk searches the default index. * The specified index displays as the default in Splunk Manager settings. * Defaults to "main". queryLanguageDefinition = <path to file> * DO NOT EDIT THIS SETTING. SERIOUSLY. * The path to the search language definition file. * Defaults to $SPLUNK_HOME/etc/searchLanguage.xml. is 1048576 (which corresponds to 1 TB) * Setting this too high can lead to splunkd memory usage going up substantially. * Setting this too low can degrade splunkd indexing performance. * Setting this to "auto" or an invalid value will cause Splunk to autotune this parameter. * Defaults to "auto". * launched, lowercase letters, underscores, # and hyphens. They cannot begin with an underscore or hyphen, or contain # the word "kvstore". #**************************************************************************. * We strongly recommend that you avoid the use of environment variables in index paths, aside from the possible exception of SPLUNK_DB. See homePath for the complete rationale. index can occupy on the peers. integer> * Number of seconds after which indexed data rolls to frozen. * If you do not specify a coldToFrozenScript, data is deleted when rolled to frozen. * IMPORTANT: Every event in the DB must be older than frozenTimePeriodInSecs before it will roll. Then, the DB will be frozen the next time splunkd checks (based on rotatePeriodInSecs attribute). * Highest legal value is 4294967295 * Defaults to 188697600 (6 years). warmToColdScript = <script path> * Specifies a script to run when moving data from warm to cold. * This attribute is supported for backwards compatibility with versions older than 4.0. Migrating data across filesystems is now handled natively by splunkd. * If you specify a script here, the script becomes responsible for moving the event data, and Splunk-native data migration will not be used. * The script must accept two arguments: * First: the warm directory (bucket) to be rolled to cold. * Second: the destination in the cold path. * Searches and other activities are paused while the script is running. * Contact Splunk Support () if you need help configuring this setting. * could splunk differently. Specifically data from before 4.2 and after are handled differently. See "Freezing and Thawing" below: * The script must be in $SPLUNK_HOME/bin or a subdirectory thereof. coldToFrozenDir = <path to frozen archive> * An alternative to a coldToFrozen script - simply specify a destination path for the frozen archive * Splunk will automatically put frozen buckets in this directory *ible but cannot be recre integer>|auto|auto_high_volume * The maximum size in MB for a hot DB to reach before a roll to warm is triggered. * Specifying "auto" or "auto_high_volume" will cause Splunk to autotune this parameter (recommended). * You should use "auto_high_volume" for high-volume indexes (such as the main index); otherwise, use "auto". A "high volume index" would typically be considered one that gets over 10GB of data per day. * Defaults to "auto", which sets the size to 750MB. * "auto_high_volume" sets the size to 10GB on 64-bit, and 1GB on 32-bit systems. * Although the maximum value you can set this is 1048576 MB, which corresponds to 1 TB, a reasonable number ranges anywhere from 100 to 50000. Before proceeding with any higher value, please seek approval of Splunk Support. * If you specify an invalid number or string, maxDataSize will be auto tuned. * NOTE: The maximum size of your warm buckets may slightly exceed 'maxDataSize', due to post-processing and timing issues with the rolling policy. rawFileSizeBytes = <positive integer> * Deprecated in version 4.2 and later. We will ignore this value. * Rawdata chunks are no longer stored in individual files. * If you really need to optimize the new rawdata chunks (highly unlikely), edit rawChunkSizeBytes rawChunkSizeBytes = <positive integer> * Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. * If 0 is specified, rawChunkSizeBytes will be set to the default value. * NOTE: rawChunkSizeBytes only specifies a target chunk size. The actual chunk size may be slightly larger by an amount proportional to an individual event size. * WARNING: This is an advanced parameter. Only change it if you are instructed to do so by Splunk Support. * Must restart splunkd after changing this parameter; index reload will not suffice. * Highest legal value is 18446744073709551615 * Defaults to 131072 (128KB). minRawFileSyncSecs = <nonnegative decimal>|disable * How frequently we force a filesystem sync while compressing journal slices. During this interval, uncompressed slices are left on disk even after they are compressed. Then we force a filesystem sync of the compressed journal and remove the accumulated uncompressed files. * If 0 is specified, we force a filesystem sync after every slice completes compressing. * Specifying "disable" disables syncing entirely: uncompressed slices are removed as soon as compression is complete * integer> * The amount of memory to allocate for indexing. * This amount of memory will be allocated PER INDEX THREAD, or, if indexThreads is set to 0, once per index. * IMPORTANT: Calculate this number carefully. splunkd will crash if you set this number higher than the amount of memory available. * The default is recommended for all environments. * Highest legal value is 4294967295 * Defaults to 5. max * Defaults to 0. maxHotBuckets = <positive integer> * Maximum hot buckets that can exist per index. * When maxHotBuckets is exceeded, Splunk rolls the least recently used (LRU) hot bucket to warm. * Both normal hot buckets and quarantined hot buckets count towards this total. * This setting operates independently of maxHotIdleSecs, which can also cause hot buckets to roll. * Highest legal value is 4294967295 * Defaults to 3. *. * There is a delta between when maximum is exceeded and bucket is rolled. * This means a bucket may end up with epsilon more lines than specified, but this is not a major concern unless excess is significant * If set to 0, this setting is ignored (it is treated as infinite) * integer> * Defines how frequently metadata is synced to disk, in seconds. * Defaults to 25 (seconds). * You may want to set this to a higher value if the sum of your metadata file sizes is larger than many tens of megabytes, to avoid the hit on I/O in the indexing fast path. * at serviceMetaPeriod. * <integer> specifies how frequently it should sync. Zero means that this feature is turned off and serviceMetaPeriod is the only time when metadata sync happens. * If the value of partialServiceMetaPeriod is greater than serviceMetaPeriod, this setting will have no effect. * By default it is turned off (zero). * indexer acknowledgements (ack) on forwarders and have replication enabled (via Index Clustering) * This parameter puts an upper limit on how long events can sit unacknowledged the parameter = 0|auto * Valid only for indexer cluster peer nodes. * Determines whether an index gets replicated. * Value of 0 turns off replication for this index. * Value of "auto" turns on replication for this index. * This attribute must be set to the same value on all peer nodes. * needed for multi-site clustering where streaming targets may be primary. * if set to 0, we never sync (equivalent to infinity). * May be specified in either the provider stanza, or the virtual index stanza. *ited list of Splunk dirs/jars to add to the classpath in the Search Head and MR. vix.env.HUNK_THIRDPARTY_JARS = <list of paths> * Comma delimited. * May be specified in either the provider stanza or in the virutal index stanza. * extract fields with ${field}. I.e: "/data/${server}/...", where server will be extracted. * May start with a schema. * The schema of the path specifies which hadoop filesystem implementation to use. Examples: * hdfs://foo:1234/path, will use a HDFS filesystem equivalent extractions - See "Earliest time extractions". #************************************************************************** # Volume settings. This section describes settings that affect the volume- # optional and volume-mandatory parameters only. # # All volume stanzas begin with "volume:". For example: # [volume:volume_name] # path = /foo/bar # # These volume stanzas can then be referenced by individual index # parameters, e.g. homePath or coldPath. To refer to a volume stanza, use # the "volume:" prefix. For example, to set a cold DB to the example stanza # # # defines a new high-volume index, called "hatch", and # sets this to be the default index for both incoming data and search. # # Note that you may want to adjust the indexes that your roles have access # to when creating indexes (in authorize.conf) defaultDatabase = hatch [hatch] homePath = $SPLUNK_DB/hatchdb/db coldPath = $SPLUNK_DB/hatchdb/colddb thawedPath = $SPLUNK_DB/hatchdb/thaweddb maxDataSize = 10000 maxHotBuckets = 10 # The following example changes the default amount of space used on a # per-index basis. [default] maxTotalDataSizeMB = 650000 maxGlobalDataSizeMB = 0 # The following example changes the time data is kept around by default. # It also sets an export script. NOTE: You must edit this script to set # export location before running it. [default] maxWarmDBCount = 200 frozenTimePeriodInSecs = 432000 rotatePeriodInSecs = 30 coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozenScript.py" # This example freezes buckets on the same schedule, but lets Splunk do the # freezing process as opposed to a script [default] maxWarmDBCount = 200 frozenTimePeriodInSecs = 432000 rotatePeriodInSecs = 30 coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive" ### This example demonstrates the use of volumes ### # volume definitions; prefixed with "volume:" [volume:hot1] path = /mnt/fast_disk maxVolumeDataSizeMB = 100000 [volume:cold1] path = /mnt/big_disk # maxVolumeDataSizeMB not specified: no data size limitation on top of the # existing ones [volume:cold2] path = /mnt/big_disk2 maxVolumeDataSizeMB = 1000000 # index definitions [idx1] homePath = volume:hot1/idx1 coldPath = volume:cold1/idx1 # thawedPath must be specified, and cannot use volume: syntax # choose a location convenient for reconstitition from archive goals # For many sites, this may never be used. thawedPath = $SPLUNK_DB/idx1/thaweddb [idx2] # note that the specific indexes must take care to avoid collisions homePath = volume:hot1/idx2 coldPath = volume:cold2/idx2 thawedPath = $SPLUNK_DB/idx2/thaweddb [idx3] homePath = volume:hot1/idx3 coldPath = volume:cold2/idx3 thawedPath = $SPLUNK_DB/idx3/thaweddb ### Indexes may be allocated space in effective groups by sharing volumes ### # perhaps we only want to keep 100GB of summary data and other # low-volume information [volume:small_indexes] path = /mnt/splunk_indexes maxVolumeDataSizeMB = 100000 # and this is our main event series, allowing 50 terabytes [volume:large_indexes] path = /mnt/splunk_indexes maxVolumeDataSizeMB = 50000000 # summary and rare_data together will be limited to 100GB [summary] homePath=volume:small_indexes/summary/db coldPath=volume:small_indexes/summary/colddb thawedPath=$SPLUNK_DB/summary/thaweddb # low-volume indexes probably don't want a lot of hot buckets maxHotBuckets = 2 # if the volume is quite low, and you have data sunset goals you may # want to have smaller buckets maxDataSize = 500 [rare_data] homePath=volume:small_indexes/rare_data/db coldPath=volume:small_indexes/rare_data/colddb thawedPath=$SPLUNK_DB/rare_data/thaweddb maxHotBuckets = 2 # main, and any other large volume indexes you add sharing large_indexes # will be together be constrained to 50TB, separately from the 100GB of # the small_indexes [main] homePath=volume:large_indexes/main/db coldPath=volume:large_indexes/main/colddb000 ### This example demonstrates database size constraining ### # In this example per-database constraint is combined with volumes. While a # central volume setting makes it easy to manage data size across multiple # indexes, there is a concern that bursts of data in one index may # significantly displace data from others. The homePath.maxDataSizeMB setting # can be used to assure that no index will ever take more than certain size, # therefore alleviating the concern. # global settings # will be inherited by all indexes: no database will exceed 1TB homePath.maxDataSizeMB = 1000000 # volumes [volume:caliente] path = /mnt/fast_disk maxVolumeDataSizeMB = 100000 [volume:frio] path = /mnt/big_disk maxVolumeDataSizeMB = 1000000 # and this is our main event series, allowing about 50 terabytes [volume:large_indexes] path = /mnt/splunk_indexes maxVolumeDataSizeMB = 50000000 # indexes [i1] homePath = volume:caliente/i1 # homePath.maxDataSizeMB is inherited coldPath = volume:frio/i1 # coldPath.maxDataSizeMB not specified: no limit - old-style behavior thawedPath = $SPLUNK_DB/i1/thaweddb [i2] homePath = volume:caliente/i2 # overrides the default maxDataSize homePath.maxDataSizeMB = 1000 coldPath = volume:frio/i2 # limits the cold DB's coldPath.maxDataSizeMB = 10000 thawedPath = $SPLUNK_DB/i2/thaweddb [i3] homePath = /old/style/path homePath.maxDataSizeMB = 1000 coldPath = volume:frio/i3 coldPath.maxDataSizeMB = 10000 thawedPath = $SPLUNK_DB/i3/thaweddb # ### This example demonstrates how to configure a volume that points to ### S3-based remote storage, and an index that uses = This documentation applies to the following versions of Splunk® Enterprise: 6.6.2 Something that is not perfectly clear to me - can any setting that is being listed in the "per index" part here, also be defined in the [default] stanza? E.g. frozenTimePeriodInSecs - can this be defined in the [default] stanza (as suggested by the examples)?
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
2017-08-16T19:26:09
CC-MAIN-2017-34
1502886102393.60
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
California Public Utilities Commission 505 Van Ness Ave., San Francisco _________________________________________________________________________________ FOR IMMEDIATE RELEASE PRESS RELEASE Media Contact: Terrie Prosper, 415.703.1366, [email protected] CPUC VOTES TO OPPOSE PROPOSITION 23 SAN FRANCISCO, September 23, 2010 - The California Public Utilities Commission (CPUC) today voted to opposed Proposition 23 on the November ballot because it would suspend, indefinitely, Assembly Bill (AB) 32, the state's landmark Global Warming Solutions Act, and would significantly impair the state's ability to implement policies aimed at combating climate change. CPUC CPUC voted to oppose Proposition 23 on the following grounds: 1. California is a leader in economically viable environmental protection. AB 32 is one of the most ambitious climate change laws in the U.S. and its enactment signified California's continued commitment to leading the nation in fighting climate change. Its passage follows a long line of environmental policies that have demonstrated national and international leadership. From energy efficiency to renewables, California has taken a leadership role in many of the technologies and strategies that will reduce greenhouse gas emissions, while providing secure, affordable power to its citizens. 2. Proposition 23 will suspend the implementation of AB 32 indefinitely. Requiring a 5.5 percent unemployment rate in order to implement AB 32 ostensibly suspends the law indefinitely. The requirements set out in Proposition 23 under which the suspension could be lifted have only been achieved in three instances in the last 40 years. During a suspension, the state's ability to implement policies aimed at combating climate change would be greatly impaired, as the state would have to cease implementation efforts on several important programs. In addition, the CPUC would be prohibited from proposing or adopting any regulations that would implement AB 32. 3. The indefinite suspension of AB 32 will cause investment dollars currently being spent in California to retreat to other states halting the largest growth industries in the state and stunting job growth. Sending investment dollars out of California will halt momentum in the cleantech markets, hampering economic growth and stunting job growth. 4. Regulatory uncertainty caused by the indefinite suspension of AB 32 will undermine energy markets, potentially increasing the cost of electricity for consumers. Suspending AB 32 may result in the development of less efficient 30-year assets that ultimately cost consumers more than if the low-carbon procurement path set out in AB 32 had been followed. 5. The evidence of the predicted deleterious impacts of climate change is more conclusive than the purported injurious impacts of AB 32. The report of job leakage and loss promoted by the supporters of Proposition 23 is challenged by several scholars and is in direct conflict with other economic reports. The preponderance of evidence regarding climate change shows that it is more cost effective to take actions now, rather than waiting. For more information, please see comments made by President Peevey at today's meeting, below. For more information on the CPUC, please visit. ### on Item # 51 - CPUC Position on Proposition 23 On September 27, 2006 Governor Arnold Schwarzenegger signed Assembly Bill 32 (AB32), The Global Warming Solutions Act of 2006 into law. AB32 is the first comprehensive climate change law in the United States and its passage clearly signified California's commitment to lead the Nation in combating climate change. Just to refresh everyone's mind: AB32 established the goal of reducing California's statewide greenhouse gas (GHG) emissions to 1990 levels by 2020, a reduction of approximately 12.5% from current levels and 40% from business-as-usual in 2020. However, AB32 is under attack. Proposition 23, which would suspend AB 32, is on the November 2 ballot in California as an initiated state statute. Oil companies Tesoro, Valero and Koch Industries have taken the lead in bankrolling Proposition 23. Colleagues, I firmly believe that we, as Commissioners of the Public Utilities Commission, must register our strong opposition to this ballot measure. We must resist the efforts of out of state oil companies to roll back one of the most important environmental protection laws California has ever enacted. The Commission should formally oppose Proposition 23 on the following grounds: 1. California has been and will continue to be a leader in economically viable environmental protection. AB32 is one of the most ambitious climate change laws in the United States and its enactment signified California's continued commitment to leading the nation in fighting climate change. From energy efficiency to renewables, California has taken a leadership role in many of the technologies and strategies that will reduce greenhouse gas emissions, while providing secure, affordable power to its citizens. California's establishment of carbon emissions standards for cars and trucks, which far exceeded the federal requirement, initially drew a hostile response from the federal government. But early this year, the U.S. Department of Transportation and the Federal EPA followed California's lead and announced national emissions standards that will bring a cleaner fleet of automobiles to national markets. California has a long history of instituting policies which have demonstrated leadership on a national scale. From catalytic converters to building and appliance standards, California has implemented policies which have significantly improved the environment without harming the economy, many of which have been adopted federally and internationally. California's leadership on climate change, from its collaboration with other states in the Western Climate Initiative to its ability to influence the entire nation, is jeopardized by Proposition 23. 2. Proposition 23 will suspend the implementation of AB32 indefinitely. Proposition 23 requires the suspension of AB32 until the unemployment rate for the state is 5.5% or less for four consecutive quarters. According to the California Employment Development Department, in the last 40 years, there have been three periods since 1976 when unemployment in the state remained below 5.5% for four or more quarters Further, with the unemployment rate in California near its 20-year high, undermining one of the state's fastest growing sectors would only make matters worse. Instead, California should continue to support the growth of its clean-tech sector, which promises to bring hundreds of thousands of new jobs to the state. 3. The suspension of AB32 may diminish investment dollars currently being spent in California on clean technologies, harming the largest growth industries in the State. In four short years, AB32 has already had a positive impact on investment and job growth in California. Since AB32 became law clean energy investment has tripled. About three of every five venture capital dollars nationwide has been invested in California companies, with about $2.1 billion worth of clean energy investments in 2009 alone. Venture capital expenditures not only create jobs now, but they lead to future expenditures on capital investments and infrastructure that create even more jobs. Our state's policies - and AB32 in particular - are helping draw this activity to California. 4. Suspension of AB32 and the resulting regulatory uncertainty will potentially increase costs for consumers in the long run. The investment decisions we make today will have economic and environmental impacts for 30 years or more. Energy generating infrastructure and the buildings that consume much of our energy last for decades once built. Developers and investors need clear signals of what resources California will need to meet its energy demands before they make necessary investments. AB32 provides that certainty. Suspending AB32 would reverse the regulatory signal to invest in clean, environmentally friendly resources. If this were to occur, customers could face significant carbon abatement costs when AB32 or federal regulation forces the inclusion of a carbon price into the price of the power. Delaying action now will make it more expensive to reduce greenhouse gases in the future. 5. The long-run health and economic benefits of mitigating climate change outweigh the negligible economic impacts of AB32 predicted by credible economists. In addition to the other environmental benefits supported by AB 32, the independent California Legislative Analyst's Office concluded that suspending AB32 "could halt air quality improvements that would have public health benefits such as reduced respiratory illnesses," and that "these public health benefits translate into economic benefits, such as increased worker productivity and reduced government and business costs for health care." As Senator John McCain put trouble that global warming will bring. We stand warned by serious and credible scientists across the world that time is short and the dangers are great." The most important development in California energy policy in the past several years, if not the past several decades, is reaching consensus that California must act to decrease its greenhouse gas emissions. The reality of climate change is not in doubt, and the consequences of inaction could not be more extreme. California is past the talking stage, but much of our authority to fight climate change rests on AB 32. This is why AB32 must not be suspended. Doing so will significantly impair, not only our ability to combat climate change, but it will also impair the ability of our sister agencies such as the California Energy Commission and the Air Resources Board from implementing programs and polices that mitigate climate change. In closing I want to read a quote: was President Obama. I couldn't agree more. Colleagues, I urge you to join me in opposition to Proposition 23.
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/123814.htm
2017-08-16T19:19:13
CC-MAIN-2017-34
1502886102393.60
[]
docs.cpuc.ca.gov
The server certificate private key is stored in the vmo_keystore table of the Orchestrator database. In case you lose or delete this key, or if you bind the Orchestrator server to a different database, the contents of the exported packages signed with this certificate become unavailable. To ensure that packages are decrypted on import, you must save this key to a local file. Before you begin You must have created or imported a server certificate. Procedure - Log in to the Orchestrator configuration interface as vmware. - Click Server Certificate. - Click Export certificate database. - Type a password to encrypt the content of the exported keystore database. You must enter this password again when importing the file. - Click Export. - Save the vmo-server.vmokeystore file when prompted.
https://docs.vmware.com/en/vRealize-Orchestrator/6.0.1/com.vmware.vrealize.orchestrator-install-config.doc/GUID-6596EF24-E9F0-4934-985C-9FE41AFF9D10.html
2017-11-18T01:09:51
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
Services are checked by the Shinken, Shinken won’t perform any on-demand service checks. The performance of on-demand service checks can be significantly improved by implementing the use of cached checks, which allow Shinken Shinken from checking the status of a service depending on the state of one or more other services. More information on dependencies can be found here.. When Shinken Shinken is all about. When services change state too frequently they are considered to be “flapping”. Shinken can detect when services start flapping, and can suppress notifications until flapping stops and the service’s state stabilizes. More information on the flap detection logic can be found here.
http://testdocshinken.readthedocs.io/en/latest/05_thebasics/servicechecks.html
2017-11-18T00:45:29
CC-MAIN-2017-47
1510934804125.49
[]
testdocshinken.readthedocs.io
To protect the wiki against automated account creation, we kindly ask you to answer the question that appears below (more info): To pass captcha, please enter the... third first 5th ...characters from the sequence 5b9b024d8f: 5b9b024d8f Real name is optional. If you choose to provide it, this will be used for giving you attribution for your work. edits pages recent contributors
https://docs.joomla.org/index.php?title=Special:UserLogin&type=signup&returnto=JCacheControllerCallback
2015-02-27T05:22:32
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
Message-ID: <70621227.1125.1425010525357.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_1124_1749430754.1425010525357" ------=_Part_1124_1749430754.1425010525357 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: The @Newify transformation proposes two new ways of instantiatin= g classes. The first one is providing Ruby like approach to creating instan= ces with a new() class method:=20 @Newify rubyLikeNew() { assert Integer.new(42) =3D=3D 42 } rubyLikeNew()=20 But it is also possible to follow the Python approach with omitting the = new keyword. Imagine the following tree creation:=20 class Tree { def elements Tree(Object... elements) { this.elements =3D elements as List } } class Leaf { def value Leaf(value) { this.value =3D value } } def buildTree() { new Tree(new Tree(new Leaf(1), new Leaf(2)), new Leaf(3)) } buildTree()=20 The creation of the tree is not very readable because of all those new k= eywords spread across the line. The Ruby approach wouldn't be more readable= , since a new() method call for creating each element is needed. But by usi= ng @Newify, we can improve our tree building slightly to make it easier on = the eye:=20 @Newify([Tree, Leaf]) buildTree() { Tree(Tree(Leaf(1), Leaf(2)), Leaf(3)) }=20 You'll also notice that we just allowed Tree and Leaf to be newified= . By default, under the scope which is annotated, all instantiations a= re newified, but you can limit the reach by specifying the classes= you're interested in. Also, note that for our example, perhaps a Groovy bu= ilder may have been more appropriate, since its purpose is to indeed create= any kind of hierarchical / tree strucutre.=20 If we take another look at our coordinates example from a few sections e= arlier, using both @Immutable and @Newify can be interesting for creating a= path with a concise but type-safe manner:=20 D=3D 3=20 A closing remark here: since a Path(Coordinates[] coordinates) was gener= ated, we can use that constructor in a varargs way in Groovy, just= as if it had been defined as Path(Coordinates... coordinates).
http://docs.codehaus.org/exportword?pageId=117900700
2015-02-27T04:15:25
CC-MAIN-2015-11
1424936460472.17
[]
docs.codehaus.org
3.2 Numeric Types The basic integer types at various sizes. The s or u prefix specifies a signed or an unsigned integer, respectively; the ones with no prefix are signed. The _sbyte and _ubyte types are aliases for _sint8 and _uint8, respectively. The _byte type is like _ubyte, but adds 256 to a negative Racket value that would work as a _sbyte (i.e., it casts signed bytes to unsigned bytes). The _sword and _uword types are aliases for _sint16 and _uint16, respectively. The _word type is like _uword, but coerces negative values in the same way as _byte. Aliases for basic integer types. The _short aliases correspond to _int16. The _int aliases correspond to _int32. The _long aliases correspond to either _int32 or _int64, depending on the platform. Similarly, the _intptr aliases correspond to either _int32 or _int64, depending on the platform. More aliases for basic integer types. The _size and _uintmax types are aliases for _uintptr, and the rest are aliases for _intptr. For cases where speed matters and where you know that the integer is small enough, the types _fixnum and _ufixnum are similar to _intptr and _uintptr but assume that the quantities fit in Racket’s immediate integers (i.e., not bignums). Similar to _fixnum/_ufixnum, but based on _int/_uint instead of _intptr/_uintptr, and coercions from C are checked to be in range. The _float and _double types represent the corresponding C types. Both single- and double-precision Racket numbers are accepted for conversion via both _float and _double, while both _float and _double coerce C values to double-precision Racket numbers. The type _double* coerces any Racket real number to a C double. Represents the long double type on platforms where it is supported, in which case Racket extflonums convert to and from long double values.
http://docs.racket-lang.org/foreign/Numeric_Types.html
2015-02-27T04:04:01
CC-MAIN-2015-11
1424936460472.17
[]
docs.racket-lang.org
Any Jive deployment beyond a simple pilot or proof-of-concept must use file system storage, where each web application node reads and writes from a shared disk system via NFS. In a single data center HA configuration, it's assumed that the external storage is redundant within the data center. For an illustration, see Designing a Single Data Center HA Configuration. In the event of a local (within the single data center) or a catastrophic (the entire data center) failure, it is assumed that the storage layer that Jive is configured with is redundant and that recovery is handled transparently by the underlying storage system. In a multiple data center HA configuration, it's assumed that the external storage is being replicated across the data centers transparently. For an illustration, see Designing a Multiple Data Center HA Configuration.
https://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/HAStorage.html
2015-02-27T04:02:38
CC-MAIN-2015-11
1424936460472.17
[]
docs.jivesoftware.com
To protect the wiki against automated account creation, we kindly ask you to answer the question that appears below (more info): To pass captcha, please enter the... 5th ...characters from the sequence 6a9c83fb69 in the reverse order of the listing above: 6a9c83fb69 Real name is optional. If you choose to provide it, this will be used for giving you attribution for your work. edits pages recent contributors
https://docs.joomla.org/index.php?title=Special:UserLogin&type=signup&returnto=JSessionStorageWincache
2015-02-27T04:41:32
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
12 Interface-Oriented Programming for Classes A static interface name is used by the checked method call variants (send/i, send*/i, and send/apply/i). When used as an expression, a static interface name evaluates to an interface value. Use define-interface/dynamic to wrap interfaces from other sources. The argument static-ifc-id must be defined as a static interface. The method method-id must be a member of the static interface static-ifc-id; otherwise a compile-time error is raised. The value of obj-expr must be an instance of the interface static-ifc-id; otherwise, a run-time error is raised. No dynamic object check is performed when calling a method (using send/i, etc) on a name defined via define/i. No dynamic object check is performed when calling a method (using send/i, etc) on a name bound via one of these forms. Note that in the case of init-field/i this check omission is unsound in the presence of mutation from outside the class. This should be fixed.
http://docs.racket-lang.org/unstable/class-iop.html
2015-02-27T04:00:19
CC-MAIN-2015-11
1424936460472.17
[]
docs.racket-lang.org
the ANI in the incoming call is empty, the MVS Session Manager sends a verification code response to the BlackBerry device using DTMF tones. The MVS Session Manager, using the SIP.
http://docs.blackberry.com/en/admin/deliverables/43951/PF_BBMVS_making_a_call_from_a_BB_device_GW_1319877_11.jsp
2015-02-27T04:23:38
CC-MAIN-2015-11
1424936460472.17
[array(['PF_BBMVS_Yos_make_call_from_BB_device_GW_1317308_11.jpg', None], dtype=object) ]
docs.blackberry.com
changes.mady.by.user Athena Yao Saved on Jun 17, 2009 Some of the information below is outdated, as modifications to the proposed structure were made in the course of the refactoring. For the most up-to-date information on the new package structure, as well as other changes, see the Jetty 7 Refactoring page. Note that this page describes the migration from codehaus Jetty 7 to Eclipse Jetty 7; the linked page describes migration from Jetty 6 to Eclipse Jetty 7. key points for maven: ... Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today.
http://docs.codehaus.org/pages/diffpages.action?originalId=123666757&pageId=228174736
2015-02-27T04:17:59
CC-MAIN-2015-11
1424936460472.17
[]
docs.codehaus.org
settings Manage Internal Devices How to manage devices used for backoffice operations. Manage your payment methods Update your payment method or add new ones for your Bolt Account. Merchant IP Addresses This feature enables you to add office IP addresses to your account. Unsubscribe Shoppers From Abandoned Cart Emails How to unsubscribe shoppers from Abandoned Cart Emails. Upgraded Monitoring This feature enables you to send Bolt additional logs and analytics for troubleshooting.
https://docs.bolt.com/tags/settings/
2021-10-16T03:53:55
CC-MAIN-2021-43
1634323583408.93
[]
docs.bolt.com
This chapter contains guides and resources for getting started with Stardog. Chapter Contents - Install Stardog - Learn how to install and start up Stardog - Access Stardog Studio - Access Stardog Studio and connect to your Stardog server - Getting Started Resources - Contains fundamental learning resources (blogs, video trainings, tutorials) for working with Stardog. - Getting Started with Stardog Cloud - Resources to get Started with Stardog Cloud
https://docs.stardog.com/get-started/
2021-10-16T01:37:15
CC-MAIN-2021-43
1634323583408.93
[]
docs.stardog.com
Backup & Restore This page discusses backing up and restoring the Stardog server. For information backing up individual Stardog databases and restoring them, please see the page on Database Backups Page Contents Overview Stardog 7+ supports backup/restore functionality for the entire system in one step stardog-admin server backup and stardog-admin server restore commands. This functionality is convenient when all databases need to be backed up and restored simultaneously. Server Backup Server backup will back up the entire Stardog server, all databases and associated metadata. Unlike database backups, which takes a full backup of the database every time it is run, server backup takes an incremental backup of the data. That way, each time the command is run only the updates to the databases since the last time backup need to be saved. Server backups are accomplished with the following command: $ stardog-admin server backup You can optionally specify the backup location, otherwise the Stardog defaults will be used, similar to the db backup command: $ stardog-admin server backup /path/to/custom/backup/location This command only supports file-based backups, it cannot be used with S3, for instance. To copy the backups off local disk we recommend using a tool such as rclone. After setting up rclone you can use it to send the backups to another server: $ rclone sync /path/to/backup/location/ sftp:/path/to/other/location/ Consult the rclone docs for the full list of supported storage systems. Server Restore You can use server restore command to restore server backups created by server backup. To do so you must shut down Stardog and set $STARDOG_HOME to an empty home directory. The server restore command will restore the complete server to $STARDOG_HOME. Once complete you can start the Stardog server. $ export STARDOG_HOME=/path/to/empty/stardog/home $ stardog-admin server restore /path/to/server/backup Server backups do not contain your license file, stardog.properties, or any other additional files or directories created externally under STARDOG_HOME so you need to back up and restore those files and directories separately. By default server restore will restore the latest backup found in the backup directory. The server configuration option backup.keep.last.number.backups that can be set in stardog.properties controls how many backups will be retained. By default, this option is set to 4 and any one of the older backups can be restored if desired by specifying a backup ID in the command: $ stardog-admin server restore -b 3 /path/to/server/backup The server backup command prints the ID for the backup created which is the value that can be passed to the server restore command. The backup IDs correspond to directories under the versions directory of the backup directory. The creation date for these directories will indicate when the corresponding backup was created.
https://docs.stardog.com/operating-stardog/server-administration/backup-and-restore
2021-10-16T03:40:36
CC-MAIN-2021-43
1634323583408.93
[]
docs.stardog.com
Attachment Types Object attachments or object links are files, ToDos or URLs that have been specifically linked to a particular model object. They can provide specific instructions or important information related to that object. ToDos Link model objects to new ToDos. Supported actions: View, Remove Project Files Attached project files always point to the latest version of the file stored in the Connect Project. Supported actions: View in Connect, View in new window, Remove URLs Add a custom URL Supported actions: View, Remove
https://docs.3d.connect.trimble.com/object-attachments/attachment-types
2021-10-16T01:36:29
CC-MAIN-2021-43
1634323583408.93
[]
docs.3d.connect.trimble.com
Media Manager Namespaces Choose namespace Media Files - Media Files - Upload - Search Upload to [root] Sorry, you don't have enough rights to upload files. File - View - Date: - 2019/03/26 12:05 - Filename: - eduroam_win10_10_en.png - Format: - PNG - Size: - 14KB - Width: - 375 - Height: - 500 - References for: - eduroam with Windows (Vista+) - eduroam with Windows S
https://docs.gwdg.de/doku.php?id=en:services:network_services:eduroam:apple_iphone&tab_files=upload&do=media&tab_details=view&image=en%3Aservices%3Anetwork_services%3Aeduroam%3Aeduroam_win10_10_en.png&ns=
2021-10-16T02:38:14
CC-MAIN-2021-43
1634323583408.93
[]
docs.gwdg.de
The Conversation Update - v2.20 Read all about it! Dreams 2.20 is here! This latest update brings a handful of fixes to some cheeky bugs, and makes ALL Media-Molecule-made (Mmm for short) prizes remixable! This means you'll no longer encounter publishing issues when using our content in your creations. But the big moment in 2.20? Threaded comments! With this big change, the Dreamiverse will help you keep track of all the conversations happening on your creation. This is the first version of this system, so please do let us know of any bugs you encounter, or ways we could improve it. What's New? - Threaded Comments: Hurrah! When replying to a comment, your message will now be threaded below the comment you're replying to. - Mm Prizes: Not one. Not two. But ALL the Mm Prizes are now remixable, including those from Imp Quests! This will prevent you encountering any publishing issues if you use them in a creation. Other Improvements - Banner Indicator: Added an indicator below banners in DreamSurfing and DreamShaping, indicating your position in the banner roll. - DreamSurfing Thumbnails: The warp on creation thumbnails has been reduced, meaning they'll display more clearly. - Audio Effect Field Collection Update: Updated some of the Elements released in our last update to newer versions. Bug Fixes - Fixed a bug preventing the setting for the Save Online Reminder prompt being correctly remembered. - Fixed a bug that would show users an incorrect banner first when loading DreamSurfing. - Fixed a bug that would cause a crash when accepting updates on music which had many updates. The Dreams User Guide is a work-in-progress. Keep an eye out for updates as we add more learning resources and articles over time.
https://docs.indreams.me/en/updates/release-notes/dreams/v220
2021-10-16T02:15:51
CC-MAIN-2021-43
1634323583408.93
[]
docs.indreams.me
%Compiler.Type.Object class %Compiler.Type.Object extends %Compiler.Type.LiteralProperty superclass for object values, for internal use only Method Inventory (Including Private) - Get() - GetObject() - GetObjectId() - GetSwizzled() - IsEmpty() - NewObject() - Set() - SetObject() - SetObjectId() - UnSwizzle() %RawString [ Language = objectscript ] Returns an oref, performing swizzling if necessary it returns "" ($$$NULLOREF) if there is no oref and the id="" method GetObject(force As %Integer = 0) as %ObjectIdentity [ Language = objectscript ] Stores simply calls GetObject() with decomposes Oid into Id method GetSwizzled(noswizzle As %Boolean = 0) as %RegisteredObject [ Language = objectscript ] SetObject(newvalue As %ObjectIdentity) as %Status [ Language = objectscript ] Stores a new value for the OID of the referenced object. if swizzling has not occurred, it simply stores the OID (after perhaps stripping the class name if CLASSNAME=0). if swizzling has occurred, then it references the object to obtain its OID. If it is the same, then nothing happens. If they are different then the old object is closed and the new OID is stored. Stores simply calls SetObject() with constructed Oid Unswizzles a property, if it has an oref
https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&PRIVATE=1&CLASSNAME=%25Compiler.Type.Object
2021-10-16T03:38:25
CC-MAIN-2021-43
1634323583408.93
[]
docs.intersystems.com
To manage objects, you create a set of information management lifecycle (ILM) rules and organize them into an ILM policy. Every object ingested into the system is evaluated against the active policy. When a rule in the policy matches an object's metadata, the instructions in the rule determine what actions StorageGRID takes to copy and store that object. Note that object metadata is not managed by ILM rules. Instead, object metadata is stored in a Cassandra database in what is known as a metadata store. Three copies of object metadata are automatically maintained at each site to protect the data from loss. The copies are evenly distributed across all Storage Nodes. This example ILM rule applies to all objects belonging to Tenant A. It makes three replicated copies of those objects and stores each copy at a different data center for 2 years. After 2 years, one copy is moved to archive storage forever, and the other two copies are deleted.
https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-admin/GUID-D42F6306-DA63-4677-B7A5-DFE2431D87DA.html
2021-10-16T02:40:31
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
Configure an SNMP requestor Contributors Download PDF of this page When SNMP version 2 is enabled, you can enable or disable a requestor, and configure requestors to receive authorized SNMP requests. Click. Under General SNMP Settings, click Yes to enable SNMP. From the Version list, select Version 2. In the Requestors section, enter the Community String and Network information. Optional: To add another requestor, click Add a Requestor and enter the Community String and Network information. Click Save Changes.
https://docs.netapp.com/us-en/element-software/storage/task_system_manage_snmp_configure_an_snmp_requestor.html
2021-10-16T04:08:40
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
Architecture Overview #Jamstack Native Uniform Optimize is designed for Jamstack sites: personalization data, the intent manifest, is baked in at build time. The execution of personalization is ridiculously fast compared to traditional personalization solutions. Personalization loading in only a few milliseconds? Yes, please. tip Uniform is flexible in its execution model; it can run at the edge, purely client-side, or with server-side-rendering. #Privacy First Uniform Optimize stores all personalization data and history on the visitor's browser by default. It gives the visitor complete control over the data you have collected about their personalization profile. Want to store personalization data for your analysis? Our extensible plugin architecture lets you put data wherever you want. #Data Sources Uniform Optimize uses two primary sources of data: - The intent manifest - defines an array of known intents and their signals. It is usually provided at build time by a Uniform Optimize request and baked into your JS bundle. - The visitor intent scoring - defines the score of the current visitor in each intent. These scores are based on triggering signals that modify their intent score. The intent score is stored most commonly in cookies. The most common types of signals are: - Request signals - cookies, query strings, page route, visited, etc. These are triggered based on the browser's request. - Behavior signals - triggered based on content with intent tags shown to the visitor. For example, being shown a call to action with a 'developer' tag will result in some developer score being added. - Programmatic signals - Event signals are matched on an event stream such as Google Analytics events (i.e., 'label contains foo'). Developers connect this event stream. These data sources are used to perform visitor classification. The intent manifest provides the rules by which we calculate intent scoring. Scoring, in turn, influences how we select a personalized variation to show visitors.
https://docs.uniform.app/optimize/dev/architecture/
2021-10-16T03:04:36
CC-MAIN-2021-43
1634323583408.93
[]
docs.uniform.app
Vapor 3.5.0 is now Live!¶ August 4, 2021 Vapor’s release cycle is accelerating, and now 3.5.0 is live. New improvements include: - New particle data reader: Data Collection Particles (DCP) - New raw data reader: Brick of Values (BOV) - Easily, and visually save your state with the new Bookmarks feature - Further improvements that streamline Vapor’s GUI - Faster data access with OpenMP As well as many other reported issues, including the 1/4 canvas bug on BigSur. Thanks to users ViggoHanst and Fortran for reporting this. Thank you to everyone who has helped steer us by filling out our survey for 3.6 features. The survey is still live, so please fill it out if you have 5 minutes. Please engage us on our forum. Thank you! -The Vapor Team We need your feedback!¶ May 21, 2021 We are trying to prioritize new features for Vapor 3.6. Our goal has always been to support scientists and help them produce superior visualizations of their simulation data. We can’t do this without the feedback of the people in our user community. Please help us by providing your feedback in the 5 minute survey below, which outlines new features we are considering for Vapor 3.6 and beyond. Each feature in the survey is linked to an issue on GitHub (), so feel free to comment there if you’d like to track our progress, or discuss the issue further after the survey. And please let us know if you have new ideas that we haven’t considered yet. Thank you for helping us produce better open source visualization software. -The Vapor Team Vapor 3.4.0 is now Live!¶ April 5, 2021 Version 3.4 is Vapor’s first usability-focused release. See some of the new features in the following demo: - Major usability improvements include: Improved algorithm for the Flow renderer’s random seeding bias variable Data-independent settings (e.g. Cache size) can now be modified before data is loaded Restructuring of Vapor’s top-level tabs New and improved Colorbars Improved functionality of the Geometry tab Better space management of the Renderer Table - Other notable issues include: Addition of clang-format linter, and git pre-push hook Fixed weekly builds on all platforms A comprehensive list of fixes can be viewed in the 3_4_0 release milestone. Vapor 3¶ VAPOR Contents: - Downloads - Quick Start Guide - Getting Data into Vapor - Using Vapor 3 - The Renderers - Contours - 2D Data - Image Renderer - Volume Renderer - Isosurfaces - Flow - Slices - Barbs - Wire Frames - 3D Models - Basic Renderer Controls - Navigation Settings - Global Settings - Ancillary Tools - YouTube Channel - Get Help on Our Forum - Contributing to Vapor - Where to start? - Bug Reports and Feature Requests - Code Contributions - Version control, Git, and GitHub - Forking Vapor’s code - Third-Party Libraries - Install System Libraries - Building Vapor from source - Adding to the Code Base - Build and Test an Installer - Submitting Your Changes - Vapor Coding Conventions - Contributing to Vapor’s Documentation - Contributing to Vapor’s Gallery - License and Citation
https://vapor.readthedocs.io/en/readthedocs/index.html
2021-10-16T03:03:41
CC-MAIN-2021-43
1634323583408.93
[array(['_images/vaporBanner.png', '_images/vaporBanner.png'], dtype=object)]
vapor.readthedocs.io
Form We have designed this component specifically for compatibility with practical problems and issues.This component is based on rc-form and antd-design form elements. Below are the guidelines, which we believe, are necessary for a form: -. - Submit Trigger should be customizable. - Each supported input type should support almost all the valid - All types of validations are supported via the use of rc-form. Check this link to know how you can create rules Usage const formElements = [ { type: "email", name: "email", inputProps: { clear: true, placeholder: "Enter Email", labelNumber: 1.5, style: [styles.input], placeholderTextColor: "#fff", color: "#fff", children: ( <Image resizeMode="contain" source={envelope} style={[styles.inputIcon]} /> ) }, options: email }, { type: "password", name: "password", inputProps: { clear: true, placeholder: "Enter Password", labelNumber: 1.5, style: [styles.input], placeholderTextColor: "#fff", color: "#fff", children: ( <Image resizeMode="contain" source={lock} style={[styles.inputIcon]} /> ) }, after : ( <Link textStyle={[styles.forgotPassword]} ), options: password } ]; <Form elements={formElements} style={{ Body: styles.list }} onSubmit = {this.onSubmit.bind(this)} submitTrigger={{ buttonProps: { style: styles.button }, textProps: { style: styles.buttonText }, text : "Sign In" }} > </Form> Supported Properties elements : It should be a array of form elements. Each form element object has the below format : { type: "email", name: "email", inputProps: { clear: true, placeholder: "Enter Email", labelNumber: 1.5, style: [styles.input], placeholderTextColor: "#fff", color: "#fff", children: ( <Image resizeMode="contain" source={envelope} style={[styles.inputIcon]} /> ) }, options: email } Format of form element object : title="标题" placeholder="auto focus in Alipay client" data-seed="logId" ref={el => this.autoFocusInst = el} autoHeight /> ), options : textarea // The options to be passed to getFieldDecorator function } submitTrigger. Format of submitTrigger : - form -> Form Object which is created by rc-form. It supports a lot of helper methods. To check the complete list. { buttonProps : { }, textProps : { }, getTrigger : ({form)=>{ }, text : "" } onSubmit : If default submit button is used, onSubmit is the callback that is called on click of that button. It has 1 arguments passed to it : - values - contains all the values of the inputs under the FormComponent onError : If default submit button is used, onError is the callback if the form has validation errors. It has 1 arguments passed to it : - errors - Any validations errors while validating the form
https://arivaa-firebase-docs.laxaar.com/components/form
2021-10-16T01:52:56
CC-MAIN-2021-43
1634323583408.93
[]
arivaa-firebase-docs.laxaar.com
SCAN_BYTES_LIMIT Query Option (CDH 6.1 or higher only) The SCAN_BYTES_LIMIT query option sets a limit on the bytes scanned by HDFS and HBase SCAN operations. If a query is still executing when the query’s coordinator detects that it has exceeded the limit, the query is terminated with an error. The option is intended to prevent runaway queries that scan more data than is intended. For example, an Impala administrator could set a default value of SCAN_BYTES_LIMIT=100GB for a resource pool to automatically kill queries that scan more than 100 GB of data (see Impala Admission Control and Query Queuing for information about default query options). If a user accidentally omits a partition filter in a WHERE clause and runs a large query that scans a lot of data, the query will be automatically terminated after it scans more data than the SCAN_BYTES_LIMIT. Because the checks are done periodically, the query may scan over the limit at times. Syntax: SET SCAN_BYTES_LIMIT=bytes; Type: numeric - A numeric argument represents memory size in bytes. - Specify a suffix of m or mb for megabytes. - Specify a suffix of g or gb for gigabytes. - If you specify a suffix with unrecognized formats, subsequent queries fail with an error. Default: 0 (no limit) Added in: CDH 6.1
https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/impala_scan_bytes_limit.html
2021-10-16T03:26:14
CC-MAIN-2021-43
1634323583408.93
[]
docs.cloudera.com
Run MLflow Projects on Databricks article describes the format of an MLflow Project and how to run an MLflow project remotely on Databricks clusters using the MLflow CLI, which makes it easy to vertically scale your data science code. MLflow Project execution is not supported on Databricks Community Edition.> -b databricks --backend-config <json-new-cluster-spec> where <uri> is a Git repository URI or folder containing an MLflow project and <json-new-cluster-spec> is a JSON document containing a new_cluster structure. The Git URI should be of the form:<repo>#<project-folder>. An example cluster specification is: { "spark_version": "7.3.x-scala2.12", "num_workers": 1, "node_type_id": "i3.xlarge" } If you need to install libraries on the worker, use the “cluster specification” format. Note that wheels must be uploaded to DBFS and specified as pypi dependencies. For example: { "new_cluster": { "spark_version": "7.3.x-scala2.12", "num_workers": 1, "node_type_id": "i3.xlarge" }, "libraries": [ { "pypi": { "package": "tensorflow" } }, { "pypi": { "package": "/dbfs/path_to_my_lib.whl" } } ] } Important .eggand .jardependencies are not supported for MLflow projects. - Execution for MLflow projects with Docker environments is not supported. - You must use a new cluster specification when running an MLflow Project on Databricks. Running Projects against existing clusters is not supported. Using SparkR In order to use SparkR in an MLflow Project run, your project code must first install and import SparkR as follows: if (file.exists("/databricks/spark/R/pkg")) { install.packages("/databricks/spark/R/pkg", repos = NULL) } else { install.packages("SparkR") } library(SparkR) Your project can then initialize a SparkR session and use SparkR as normal: sparkR.session() ... Example This example shows how to create an experiment, run the MLflow tutorial project on a Databricks cluster, view the job run output, and view the run in the experiment. Prerequisites - Install MLflow using pip install mlflow. - Install and configure the Databricks CLI. The Databricks CLI authentication mechanism is required to run jobs on a Databricks cluster. Step 1: Create an experiment In the workspace, select Create > MLflow -b databricks --backend-config: Resources For some example MLflow projects, see the MLflow App Library, which contains a repository of ready-to-run projects aimed at making it easy to include ML functionality into your code.
https://docs.databricks.com/applications/mlflow/projects.html
2021-10-16T02:06:18
CC-MAIN-2021-43
1634323583408.93
[]
docs.databricks.com
What is an ASHX file? An ASHX file is a webpage that is used by the ASP.NET HTTP Handler to serve user with the pages that are referenced inside this file. The ASP.NET HTTP Handler processes the incoming request, references the pages from the .ashx file, and sends back the compiled page back to the user’s browser. The method of processing is mostly similar to that of ASPX files with the differecne that in this case, the referenced pages/documents are processed and sent back. ASHX File Format The .ashx files are saved in plain text file format and contains references to other pages or documents that are sent back to user’s browser upon request. These can be opened in any text editor and developer IDEs such as Xamarin Studio, Microsoft Notepad, Notepad++, and many more. The ASHX files are useful in case when you have: - Binary files - Dynamic image views - Performance-critical web pages - XML files - Minimal web pages How to dynamically compile an ASHX file? The following steps can be used to add and compile an ASHX file using Microsoft Visual Studio. - add a Generic handler - Handler1.ashx in visual studio - delete the cs file which auto-created. - open ashx again, ** remove CodeBehind=“Handler1.ashx.cs” ** add c# code below using System; using System.Collections.Generic; using System.Linq; using System.Web; public class Handler1 : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World2"); } public bool IsReusable { get { return false; } } } ASHX Example The following ASHX code returns the image file to user’s request when the ASHX file is called in internet browser. <%@ WebHandler Language="C#" Class="QueryStringHandler" %> using System; using System.Web; public class QueryStringHandler : IHttpHandler { public void ProcessRequest (HttpContext context) { HttpResponse r = context.Response; r.ContentType = "image/png"; string file = context.Request.QueryString["file"]; if (file == "Arrow") { r.WriteFile("Arrow.gif"); } else { r.WriteFile("Image.gif"); } } public bool IsReusable { get { return false; } } }
https://docs.fileformat.com/web/ashx/
2021-10-16T02:53:53
CC-MAIN-2021-43
1634323583408.93
[]
docs.fileformat.com
Talview Interviews Last updated by the developer on: February 12, 2021 General information Information provided by Talview Inc to Microsoft: Feedback Questions or updates to any of the information you see here? Contact us! How the app handles data This information has been provided by Talview.. NA Talview Inc about how this app handles authentication, authorization, application registration best practices, and other Identity criteria. Feedback Questions or updates to any of the information you see here? Contact us!
https://docs.microsoft.com/en-us/microsoft-365-app-certification/teams/talview-inc-interviews
2021-10-16T03:33:19
CC-MAIN-2021-43
1634323583408.93
[array(['../media/attested.png', 'Publisher Attestation: The information on this page is based on a self-assessment report provided by the app developer on the security, compliance, and data handling practices followed by this app. Microsoft makes no guarantees regarding the accuracy of the information.'], dtype=object) ]
docs.microsoft.com
Multi-Link PPPoE (MLPPP)¶ Multi-Link PPPoE (MLPPP) is a unique WAN option that bonds together multiple PPPoE lines from the same ISP to form one larger virtual circuit. This means a firewall can get the true aggregate bandwidth of all circuits in the bundle. For example, if a firewall has three 10 Mbit/s DSL lines in a bundle, it could potentially get 30Mbit/s from a single connection. Requirements¶ The largest hurdle for MLPPP is that the ISP must support it on the circuits connected to the firewall. Few ISPs are willing to support MLPPP, so if an ISP is available that does, it would be worth taking advantage of that fact. Additionally, each line must be on a separate interface connected to the firewall running pfSense® software. Setup¶ Setup for MLPPP is simple: Configure a WAN for a single line with the correct credentials Navigate to Interfaces > Assign, PPPs tab Click to edit the entry for the PPPoE WAN Ctrl-click to select the other physical interfaces that belong to the same MLPPP bundle Click Save The firewall will then attempt to bond the lines using MLPPP. Caveats¶ One downside to using MLPPP is that troubleshooting is much more difficult. Statistics and status are not available for the individual lines. To determine the status, read through the PPP log, as there is not yet a way to query the lines separately. In some cases it’s obvious if a line is down, as there may be a noticeable problem at the modem (out of sync) or that the maximum attainable bandwidth is reduced.
https://docs.netgate.com/pfsense/en/latest/multiwan/mlppp.html
2021-10-16T03:19:41
CC-MAIN-2021-43
1634323583408.93
[]
docs.netgate.com
SNMP¶ The Simple Network Management Protocol (SNMP) daemon enables remote monitoring of some pfSense® software parameters. Depending on the options chosen, monitoring may be performed for network traffic, network flows, pf queues, and general system information such as CPU, memory, and disk usage. The SNMP implementation documentation, but there are plenty of print and online resources for SNMP, and some of the MIB trees are covered in RFCs. For example, the Host Resources MIB is defined by RFC 2790. See also The Hangouts Archive contains a video which covers monitoring via SNMP. SNMP Daemon¶ These options dictate if, and how, the SNMP daemon will run. To turn the SNMP daemon on, check Enable. Once Enable has been checked, the other options may then be changed. - Polling Port SNMP connections are made using only UDP, and SNMP clients default to using UDP port 161. This setting controls which port is used for the SNMP daemon, and the SNMP client or polling agent must be changed to match. - System location This text field specifies a string to return when the system location is queried via SNMP. Any text may be used here. For some devices a city or state may be close enough, while others may need more specific detail such as which rack and position in which the system resides. - System contact A string defining contact information for the system. It can be a name, an e-mail address, a phone number, or whatever is needed. - Read Community String With SNMP, the community string acts as a kind of username and password in one. SNMP clients will need to use this community string when polling. The default value of publicis common, so we strongly recommend using a different value in addition to restricting access to the SNMP service with firewall rules. SNMP Traps¶ To instruct the SNMP daemon to send SNMP traps, check Enable. Once Enable has been checked, the other options may then be changed. - Trap server The trap server is the hostname or IP address to which SNMP traps are forwarded. - Trap server port By default, SNMP traps are set on UDP port 162. If the SNMP trap receiver is set for a different port, adjust this setting to match. - SNMP trap string This string will be sent along with any SNMP trap that is generated. Modules¶ Loadable modules allow the SNMP daemon to understand and respond to queries for more system information. Each loaded module will consume additional resources. As such, ensure that only required modules are loaded. - MibII This module provides information specified in the standard MIB II tree, which covers networking information and interfaces. Having this module loaded will, among other things, provides network interface information including status, hardware and IP addresses, the amount of data transmitted and received, and much more. - Netgraph The netgraph module provides some netgraph-related information such as netgraph node names and statuses, hook peers, and errors. - PF The pf module provides a wealth of information about pf. The MIB tree covers aspects of the ruleset, states, interfaces, tables, and ALTQ queues. - Host Resources This module provides information about the host itself, including uptime, load average and processes, storage types and usage, attached system devices, and even installed software. This module requires MibII, so if MibII is unchecked when this option is checked, MibII will be checked automatically. - UCD This module provides various system information knows as the ucdavis MIB, or UCD-SNMP-MIB. It provides information about memory usage, disk usage, running programs, and more. - Regex The Regex module is reserved for future use or use by users customizing the code to their needs. It allows creating SNMP counters from log files or other text files..
https://docs.netgate.com/pfsense/en/latest/services/snmp.html?highlight=snmp
2021-10-16T02:04:09
CC-MAIN-2021-43
1634323583408.93
[]
docs.netgate.com
MailChimp The MailChimp integration allows businesses using Pelcro to send automated emails based on events happening on Pelcro. It allows you to leverage the power of both Pelcro and MailChimp without requiring any code and the setup will take you 10-15 mins. You can control the copy, branding, and call-to-actions while leveraging data fed from the integration. Below are all the steps required to get you started. Create a MailChimp account To use the integration you'll need an active Mailchimp account. To get started or to know more about the Mailchimp sign up process, please click here Note: If you're already on Mailchimp you can just use your existing account. Create a Mailchimp Audience The first step to getting started with Mailchimp is setting up your audience. According to Mailchimp, your audience is where you’ll store and manage all your contact data. It's the bedrock of our integration. Required Audience fields Audience fields and MERGE tags MUST have the following three fields: 1- field first name and field merge tag is: FNAME - This will be updated with the customer's first name. 2- field last name and field merge tag is: LNAME - This will be updated with the customer's last name 3- field token and field merge tag is: TOKEN - This will be updated with a password reset token every time a user requests one. The token is required to be MERGE field in order to use it as a link in the email. This is due to the fact that MailChimp doesn't support using Event API fields while creating links. fields and merge tags in details Can I use my existing Mailchimp Audience? We recommend using a fresh new audience, however, you can use any Audience list in your Mailchimp account. Get your Audience list ID We'll need the Audience list ID from your MailChimp account. You can easily get it in five steps by clicking this link. After getting your list ID please save it in a secured place as we're going to use it in the integration. Note the following requirements: 1- The selected audience list must have the three fields mentioned above. 2- The integration will overwrite any existing member's first name and last name. 3- The token will be updated with the latest token requested by the customer. More information: 1- Create a Mailchimp Audience 2- Getting started with Mailchimp merge tags Generate a Mailchimp API key Using your managerial permissions on your Mailchimp account, please generate a new API key for the integration. Go to Account > Extras > API key. For more details, please check out the link below: 1- Generate a new API key. What's next? Once you generate the API key, we recommend saving it in a secure place so we can use it later. Now, log in to your Pelcro account Pelcro Dashboard Within your Pelcro's Dashboard, please find the "integration" icon on the left side menu. click on integrations Integrations Click on the Mailchimp icon click on the mailchimp integration Mailchimp Integration What's next? Just add the API key and the Audience List-ID that we've created earlier in this documentation, and click the ADD NEW green button. Please note: 1- You'll not be able to restore or read the full API key string after clicking 'ADD NEW' As we masked it following our security protocols. 2- We'll show the last four characters of the API key unmasked so you can recognize it. 3- If you clicked 'DELETE KEYS', this will deactivate the integration immediately. 4- To reactivate the integration one more time you'll have to add your API key and Audience List ID as described above. 5- You can select "Sync newsletters into MailChimp" and this will sync all newsletters that are created in Pelcro with MailChimp. - Creating a newsletter for an email will create or update the customer on Mailchimp and the lists will be synced into tags - The customer will be added to the same audience list provided in the integration Before we go back to MailChimp Now, we assume/hope that everything went well so far till this point. Before we get back to Mailchimp please use your Pelcro account to create a test customer. This way we can walk you through the next steps of using the best of the integration features. Now it's the Time to get back to Mailchimp Go to your Mailchimp account and open the Audience List that you used with our integration. You'll find that the user John Doe has been added to your LIST automatically. Breakdown of the process Mailchimp API custom events We're using the most up-to-date Mailchimp feature called API CUSTOM EVENTS. With this feature, we're not just adding your Pelcro's customers to your Mailchimp Audience but also we're continuously updating their Mailchimp profiles with "Events activity'. What exactly are Events Activities? Each time any of your customers do something on Pelcro, we add a new Event activity to his profile on Mailchimp. Activity data will be handy and you can build emails, statistics, and different feedbacks on top of it. John updated his first name to Johnson. The integration will update John's Mailchimp profile and adds a new Event activity called customer_updated to his Activity Feed. Johnson created his first subscription The integration will add a new Event activity called subscription_created to Johnson's Mailchimp profile. Inside this activity, you'll find all the necessary information that's related to the new subscription that Johnson created. In addition, you'll find all the details of the subscription below. Now, Let's map out our Events CUSTOMER CREATED event name: customer_created usage: This event is triggered when a new customer record has been created. attached event data: [ 'first_name', 'last_name', 'email' ]; CUSTOMER UPDATED event name: customer_updated usage: This event is triggered when an existing customer record has been updated. attached event data: [ 'first_name', 'last_name', 'email' ]; CUSTOMER PASSWORD RESET event name: customer_reset_password usage: This event is triggered when the customer has requested a password reset. As part of the password reset email being sent out to the customer, you will need to include the password token. Below is an example of how you should build the link based on the user email and the password token. example:*|EMAIL|*&token=*|TOKEN|* attached event data: [ 'password_token' ]; SUBSCRIPTION CREATED event name: subscription_created usage: This event is triggered when a new subscription has been created for a customer. SUBSCRIPTION GIFT DONOR event name: subscription_gift_donor usage: This event is triggered when a donor has purchased a gift subscription for a recipient. You can use it to build logic around sending emails to donors SUBSCRIPTION GIFT RECIPIENT event name: subscription_gift_recipient usage: This event is also triggered when a donor has purchased a gift subscription for a recipient, but it is meant to be used for Recipients. You can use it to build logic around sending emails to recipients with their gift code. SUBSCRIPTION GIFT NOTIFICATION event name: subscription_gift_notification usage: This means it is the time to notify the customer of his gifted subscription. SUBSCRIPTION UPDATED event name: subscription_updated usage: This event is triggered when the customer subscription has been updated. SUBSCRIPTION CANCELED event name: subscription_canceled usage: This means this customer has canceled a subscription SUBSCRIPTION TRIAL WILL END event name: subscription_trialwillend usage: This means the subscription's trial period will end in three days, or when a trial is ended immediately. SUBSCRIPTION RENEWED event name: subscription_renewed usage: This means this customer has renewed his subscription.. SUBSCRIPTION EXPIRED event name: subscription_expired usage: This means this customer's subscription was expired. subscription events' data: [ 'id', 'site_id', 'address_id', 'backdate_start_date', 'billing', 'billing_cycle_anchor', 'cancel_at', 'cancel_at_period_end', 'canceled_at', 'cancel_reason', 'created', 'current_period_end', 'current_period_start', 'days_until_due', 'ended_at', 'quantity', 'shipments_remaining', 'source', 'status', 'trial_end', 'trial_start', 'shipments_undeliverable', 'shipments_suspended_until', 'is_gift_donor', 'is_gift_recipient', 'gift_code', 'is_redeemed', 'gift_recipient_email', 'gift_recipient_first_name', 'gift_recipient_last_name', 'gift_start_date', 'gift_donor_subscription_id', 'agency_id', 'updated_at', 'latest_invoice_id', 'subscription_plan', // Plan name 'subscription_product', // Product name ]; INVOICE CREATED event name: invoice_created usage: This means this customer has a new invoice INVOICE PAYMENT SUCCEEDED event name: invoice_payment_succeeded usage: This means the customer's attempt to pay an invoice was succeeded. INVOICE PAYMENT FAILED event name: invoice_payment_failed usage: This means the customer's attempt to pay an invoice was FAILED. invoice events' data: [ 'id', 'amount_due', 'amount_paid', 'amount_remaining', 'billing', 'billing_reason', 'currency', 'created', 'ending_balance', 'finalized_at', 'invoice_pdf', 'marked_uncollectible_at', 'paid_at', 'period_end', 'period_start', 'post_payment_credit_notes_amount', 'pre_payment_credit_notes_amount', 'starting_balance', 'status', 'total', 'voided_at', 'updated_at', 'number', ]; CHARGE SUCCEEDED event name: charge_succeeded usage: This means the attempt of charging the customer succeeded and a charge is created. CHARGE FAILED event name: charge_failed usage: This means the attempt of charging the customer failed. CHARGE REFUNDED event name: charge_refunded usage: This means the attempt of refunding a charge to the customer was succeeded. And the customer was refunded. charge events' data: [ 'id', 'user_id', 'invoice_id', 'payment_intent_id', 'source_id', 'amount', 'amount_refunded', 'created', 'currency', 'disputed', 'failure_code', 'failure_message', 'offline', 'offline_entered_by', 'order', 'origin', 'paid', 'reference', 'refunded', 'status', 'refunded_at', 'origin', 'deleted_at', 'remit_rate', 'remit_rate_amount' ]; SOURCE EXPIRING event name: source_expiring usage: This means the customer's payment source or his card will expire at the end of the month. SOURCE CANCELED event name: source_canceled usage: This means the customer's payment source or his card is canceled. source events' data: [ 'exp_month', 'exp_year', 'address_zip_check', 'brand', 'country', 'cvc_check', 'funding', 'last4', 'name', 'address_line1_check', 'dynamic_last4', ]; ADDRESS CREATED event name: address_created usage: This means this customer has added a new address ADDRESS UPDATED event name: address_updated usage: This means this customer has updated his address address events' data: [ 'id', 'type', 'first_name', 'last_name', 'line1', 'line2', 'city', 'state', 'country', 'postal_code', 'created_at', 'updated_at' ]; Finally, let's send a notification Let's use what we learned so far to send a subscription confirmation email. It will be sent automatically once the user creates their new subscription on Pelcro. Go to automate and click on the Email link Now, select API, and then Event API Name your email campaign Click edit the trigger Now copy the event "subscription_created" and add it as the event name and click update trigger Please note 1- You must copy the event's name from this document. 2- Note that you have all the events and their data properties 3- You must paste the custom event name in the event name field as shown below. 4- Formating the event data attributes may not work according to some of our tests "|EVENT:subscription_product|" - please avoid formatting the event attributes. Then, click next to add a subject and sender information. Afterward, fill out all the Mailchimp requirements until you reach the Email Message Body. Customize your email using merge tags and event merge tags To access any of the subscription created event data (which we listed above), you can easily use it inside the Email body as shown in the picture below. "|EVENT:subscription_plan|" Sample email for the subscription_created event. Hey *|FNAME|*, Thank you for subscribing to the following: Product: *|EVENT:subscription_product|* Plan: *|EVENT:subscription_plan|* Start date: *|EVENT:current_period_start|* End date: *|EVENT:current_period_end|* Please reply to this email if you have any questions. Thanks, Business Name Use merge tags to generate personalized links Prepare your email message in your template Add link Use the button link from your Mailchimp toolbox on the right side of the screen Select Web Address In the web address (URL) input, please add your website link. EX:*|EMAIL|*&token=*|TOKEN|* Click insert and the link will be ready More information Please click here Mailchimp API Custom event video tutorial If you couldn't apply the previous example or for any reason you want more information, we recommend visiting the Mailchimp API custom event video tutorial. It's a great help in learning how to use and implement automated emails using API custom events. For the Mailchimp API CUSTOM EVENT please click here Mailchimp webhook support Pelco's Mailchimp integration is now capable of generating a unique webhook URI per account, With a hashed unique URIs you can sync back all the Mailchimp list unsubscribes to your Pelcro's integration. For the Mailchimp webhooks please click here Limitations Maximum number of members in the audience list If you exceed your MailChimp account limit without upgrading, the integration will automatically be de-activated. Updated 17 days ago
https://docs.pelcro.com/docs/mailchimp
2021-10-16T03:03:47
CC-MAIN-2021-43
1634323583408.93
[array(['https://files.readme.io/cb6a9e8-Screen_Shot_2020-03-18_at_5.56.05_PM.png', 'Screen Shot 2020-03-18 at 5.56.05 PM.png'], dtype=object) array(['https://files.readme.io/cb6a9e8-Screen_Shot_2020-03-18_at_5.56.05_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/15e4423-Screen_Shot_2020-03-18_at_5.58.25_PM.png', 'Screen Shot 2020-03-18 at 5.58.25 PM.png'], dtype=object) array(['https://files.readme.io/15e4423-Screen_Shot_2020-03-18_at_5.58.25_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2c27371-Screen_Shot_2020-03-11_at_6.17.04_PM.png', 'Screen Shot 2020-03-11 at 6.17.04 PM.png click on integrations'], dtype=object) array(['https://files.readme.io/2c27371-Screen_Shot_2020-03-11_at_6.17.04_PM.png', 'Click to close... click on integrations'], dtype=object) array(['https://files.readme.io/b2030d3-Screen_Shot_2020-03-11_at_6.19.34_PM.png', 'Screen Shot 2020-03-11 at 6.19.34 PM.png click on the mailchimp integration'], dtype=object) array(['https://files.readme.io/b2030d3-Screen_Shot_2020-03-11_at_6.19.34_PM.png', 'Click to close... click on the mailchimp integration'], dtype=object) array(['https://files.readme.io/76847f0-Screen_Shot_2020-03-11_at_6.23.50_PM.png', 'Screen Shot 2020-03-11 at 6.23.50 PM.png'], dtype=object) array(['https://files.readme.io/76847f0-Screen_Shot_2020-03-11_at_6.23.50_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/585f1d4-Screen_Shot_2020-03-11_at_6.39.58_PM.png', 'Screen Shot 2020-03-11 at 6.39.58 PM.png'], dtype=object) array(['https://files.readme.io/585f1d4-Screen_Shot_2020-03-11_at_6.39.58_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/434a21f-Screen_Shot_2020-03-11_at_6.47.11_PM.png', 'Screen Shot 2020-03-11 at 6.47.11 PM.png'], dtype=object) array(['https://files.readme.io/434a21f-Screen_Shot_2020-03-11_at_6.47.11_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0c3db84-Screen_Shot_2020-03-11_at_6.48.13_PM.png', 'Screen Shot 2020-03-11 at 6.48.13 PM.png'], dtype=object) array(['https://files.readme.io/0c3db84-Screen_Shot_2020-03-11_at_6.48.13_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e226db7-Screen_Shot_2020-03-11_at_6.58.21_PM.png', 'Screen Shot 2020-03-11 at 6.58.21 PM.png'], dtype=object) array(['https://files.readme.io/e226db7-Screen_Shot_2020-03-11_at_6.58.21_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/c381052-Screen_Shot_2020-03-11_at_7.40.50_PM.png', 'Screen Shot 2020-03-11 at 7.40.50 PM.png'], dtype=object) array(['https://files.readme.io/c381052-Screen_Shot_2020-03-11_at_7.40.50_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d030c4b-Screen_Shot_2020-03-11_at_7.47.47_PM.png', 'Screen Shot 2020-03-11 at 7.47.47 PM.png'], dtype=object) array(['https://files.readme.io/d030c4b-Screen_Shot_2020-03-11_at_7.47.47_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/b7d4eeb-Screen_Shot_2020-03-11_at_8.24.08_PM.png', 'Screen Shot 2020-03-11 at 8.24.08 PM.png'], dtype=object) array(['https://files.readme.io/b7d4eeb-Screen_Shot_2020-03-11_at_8.24.08_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/1d9bd2e-Screen_Shot_2020-03-11_at_8.25.31_PM.png', 'Screen Shot 2020-03-11 at 8.25.31 PM.png'], dtype=object) array(['https://files.readme.io/1d9bd2e-Screen_Shot_2020-03-11_at_8.25.31_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/476b9fd-Screen_Shot_2020-03-11_at_8.26.48_PM.png', 'Screen Shot 2020-03-11 at 8.26.48 PM.png'], dtype=object) array(['https://files.readme.io/476b9fd-Screen_Shot_2020-03-11_at_8.26.48_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/312a7fc-Screen_Shot_2020-03-11_at_8.27.59_PM.png', 'Screen Shot 2020-03-11 at 8.27.59 PM.png'], dtype=object) array(['https://files.readme.io/312a7fc-Screen_Shot_2020-03-11_at_8.27.59_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/7cd2200-Screen_Shot_2020-03-11_at_8.31.03_PM.png', 'Screen Shot 2020-03-11 at 8.31.03 PM.png'], dtype=object) array(['https://files.readme.io/7cd2200-Screen_Shot_2020-03-11_at_8.31.03_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/cb63085-Screen_Shot_2020-03-11_at_8.32.25_PM.png', 'Screen Shot 2020-03-11 at 8.32.25 PM.png'], dtype=object) array(['https://files.readme.io/cb63085-Screen_Shot_2020-03-11_at_8.32.25_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e9992e7-Screen_Shot_2020-03-11_at_8.46.23_PM.png', 'Screen Shot 2020-03-11 at 8.46.23 PM.png'], dtype=object) array(['https://files.readme.io/e9992e7-Screen_Shot_2020-03-11_at_8.46.23_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/8fe3a08-Screen_Shot_2020-03-12_at_12.51.36_AM.png', 'Screen Shot 2020-03-12 at 12.51.36 AM.png'], dtype=object) array(['https://files.readme.io/8fe3a08-Screen_Shot_2020-03-12_at_12.51.36_AM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/82757dd-Screen_Shot_2020-03-18_at_2.25.40_AM.png', 'Screen Shot 2020-03-18 at 2.25.40 AM.png'], dtype=object) array(['https://files.readme.io/82757dd-Screen_Shot_2020-03-18_at_2.25.40_AM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/b09a768-Screen_Shot_2020-03-18_at_2.11.00_AM.png', 'Screen Shot 2020-03-18 at 2.11.00 AM.png'], dtype=object) array(['https://files.readme.io/b09a768-Screen_Shot_2020-03-18_at_2.11.00_AM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/f61f7b1-Screen_Shot_2020-03-18_at_2.23.15_AM.png', 'Screen Shot 2020-03-18 at 2.23.15 AM.png'], dtype=object) array(['https://files.readme.io/f61f7b1-Screen_Shot_2020-03-18_at_2.23.15_AM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d431b33-Screen_Shot_2020-03-18_at_6.32.15_PM.png', 'Screen Shot 2020-03-18 at 6.32.15 PM.png'], dtype=object) array(['https://files.readme.io/d431b33-Screen_Shot_2020-03-18_at_6.32.15_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/b453300-Screen_Shot_2020-03-18_at_6.32.33_PM.png', 'Screen Shot 2020-03-18 at 6.32.33 PM.png'], dtype=object) array(['https://files.readme.io/b453300-Screen_Shot_2020-03-18_at_6.32.33_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2027758-Screen_Shot_2021-09-29_at_2.30.59_PM.png', 'Screen Shot 2021-09-29 at 2.30.59 PM.png'], dtype=object) array(['https://files.readme.io/2027758-Screen_Shot_2021-09-29_at_2.30.59_PM.png', 'Click to close...'], dtype=object) ]
docs.pelcro.com
Table of Contents Product Index Lovely Precious Vol 05 - Lynx is a unique collection of poses and new textures for Precious Deer Expansion, also with a lovely pillow for relax this cute animal in the long day of fun and games! That set allow you transform your Lynx into a lovely creature or create any scene that may require adorable toon Lynx.
http://docs.daz3d.com/doku.php/public/read_me/index/48385/start
2021-10-16T02:32:32
CC-MAIN-2021-43
1634323583408.93
[]
docs.daz3d.com
Date: Tue, 15 May 2007 23:06:41 -0500 From: "Jack Barnett" <[email protected]> To: freeBSD <[email protected]> Subject: Disk going bad? Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I think I have a disk going bad. Is there any tools for freebsd that'll check the disk and let me know if there is a hardware problem or not? (I'm getting ECC errors from my controller and trying to firgure out if it's a bad controller or a bad drive [already checked the cables and all that, so just have it down to these 2 peices of hardawre]) k, thanks! Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=962624+0+/usr/local/www/mailindex/archive/2007/freebsd-questions/20070520.freebsd-questions
2021-10-16T04:04:46
CC-MAIN-2021-43
1634323583408.93
[]
docs.freebsd.org
Batch Processing Tool The workflows for Geocoding and Territory Assignment designed can be used on a day-to-day basis for addresses that are changed or added occasionally. However, for bulk geocoding/territory assignment on data, we recommend using the Batch Processing tool. This tool ensures that the bulk geocoding on records is handled in single batch. While in the territory assignment, this tool helps to assign the respective territories to all the selected records based on their Latitude and Longitude values . To run this tool, the user needs to make sure that machine has Microsoft .net framework 4.0 and Maplytics installed in Dynamics CRM. To run the tool, Extract the Batch Processing tool .zip file and run the BatchProcessingTool application. Executing Batch Processing Tool: Connection Details: Please enter the CRM details on this screen. The users have to enter the following details: 1. Deployment model: The user can select either of the options given in below screenshot depending on their deployment. 2. CRM Server: Please enter the serverurl of CRM. For example, http://: or 3. Organization: If the deployment model is ‘OnPremise’ then the user has to specify the organization name. For other deployment models leave this blank. When the user enters the server URL in case of On-Premise deployment model, the organization list will be auto-populated with the organizations available in CRM. Select the organization for which Maplytics is to be or already has been installed. 4. Username: Username of a valid CRM user. For example: In case of On-Premise deployment model, the organization list will be auto-populated with the organizations available in CRM. Select the organization for which Maplytics is to be or already has been installed. 5. Password of CRM user. User can also use their Client Id and Client secret key to log in for their respective CRM using Application user. Choose Option: Here, the user can make selection whether they would like to run this tool for Geocoding (Batch Geocode) or Territory Assignment. In this example, we have chosen Batch Geocode as shown below: Entity Selection and Filtering: Please find the details of the same below: Entity: Select the entities on which the user would like to run this tool. Views: Once the user selects the entity, all the views defined for this entity are populated in the list of views. Select the required view. Results and Log Settings: Since this is executed as a batch process in the background, enter the email address, if the users would like the job results to be mailed to them. Progress : In this section, the user can see the progress of the geocoding process. The Batch Processing Tool takes time to process based on the number of records selected. Please do not close the window until this process is completed. Record Summary: Geocoding: Bing Status: This shows the number of the records for which the geocoding from Bing maps were successful or failed. CRM Status: This shows the number of the records for which the updation of geocordinates (Latitude & Longitude) to the respective CRM records were successful or failed. After completing the geocoding process, the user can download the results. The results will be downloaded in CSV format. User can check the columns of CRMDescription and BingDescription in the result to check the reason for failure of processes on the records. Territory assignment: CRM Status: This shows the number of the records for which the updation of Territories for the respective CRM records were successful or failed. 1. If a user selects ‘All records’ for an entity, then a view for that entity cannot be selected. 2. A user cannot select an entity with the same view twice. 3. If the user adds an entity with a view and then selects ‘All Records’ for the same entity, it will remove all the views selected for that entity with the user’s permission. 4. All of the above instructions apply for running Batch processing tool for Territory Assignment. Related Posts: Geocode Bulk Dynamics 365 CRM or PowerApps records for OOB/custom entities in a single batch with our new Batch processing tool! Run workflow on individual records on maps within Dynamics CRM Geocoding Records in Dynamics CRM Configure Driving Instruction Workflow Uninstallation Last modified 1mo ago Copy link Contents Executing Batch Processing Tool:
https://docs.maplytics.com/features/batch-processing-tool
2021-10-16T03:29:49
CC-MAIN-2021-43
1634323583408.93
[]
docs.maplytics.com
Package net.sourceforge.pmd.lang.ast Interface CharStream - All Known Implementing Classes: JavaCharStream, JavaCharStreamBase, SimpleCharStream public interface CharStreamPMD flavour of character streams used by JavaCC parsers. TODO for when all JavaCC languages are aligned: * rename methods to match decent naming conventions * move to impl.javacc package Method Detail readChar char readChar() throws IOExceptionReturns the next character from the input. After a backup(int), some of the already read chars must be spit out again. - Returns: - The next character - Throws: IOException- If the underlying char stream throws BeginToken char BeginToken() throws IOExceptionCalls readChar()and returns its value, marking its position as the beginning of the next token. All characters must remain in the buffer between two successive calls to this method to implement backup correctly. - Throws: IOException GetImage String GetImage()Returns a string made up of characters from the token mark up to to the current buffer position. GetSuffix char[] GetSuffix(int len)Returns an array of characters that make up the suffix of length 'len' for the current token. This is used to build up the matched string for use in actions in the case of MORE. A simple and inefficient implementation of this is as follows : String t = tokenImage(); return t.substring(t.length() - len, t.length()).toCharArray(); - Parameters: len- Length of the returned array - Returns: - The suffix - Throws: IndexOutOfBoundsException- If len is greater than the length of the current token backup void backup(int amount)Pushes a given number of already read chars into the buffer. Subsequent calls to readChar()will read those characters before proceeding to read the underlying char stream. A lexer calls this method if it has already read some characters, but cannot use them to match a (longer) token. So, they will be used again as the prefix of the next token. - Throws: AssertionError- If the requested amount is greater than the number of read chars getBeginColumn @Deprecated int getBeginColumn()Deprecated. getBeginLine @Deprecated int getBeginLine()Deprecated. getEndColumn int getEndColumn()Returns the column number of the last character for the current token. getEndLine int getEndLine()Returns the line number of the last character for current token. getTokenDocument default JavaccTokenDocument getTokenDocument()Returns the token document for the tokens being built. Having it here is the most convenient place for the time being. getStartOffset default int getStartOffset()Returns the start offset of the current token (in the original source), inclusive. getEndOffset default int getEndOffset()Returns the end offset of the current token (in the original source), exclusive.
https://docs.pmd-code.org/apidocs/pmd-core/7.0.0-SNAPSHOT/net/sourceforge/pmd/lang/ast/CharStream.html
2021-10-16T03:05:50
CC-MAIN-2021-43
1634323583408.93
[]
docs.pmd-code.org
VMware® Server Virtualization Solution Guide# Rackspace Server Virtualization is a managed VMware® vSphere environment that enables you to offload the maintenance and monitoring of the infrastructure, VMware stack, and guest operating system layer to a Rackspace Technology team of VMware Certified Professionals who are experts in storage, networking, and infrastructure. Rackspace VMware Server Virtualization runs on VMware vSphere, which allows you to consolidate the hardware requirements for higher capacity utilization and benefit from greater efficiency. You can also take advantage of features not available in traditional environments such as snapshots, vMotion and Storage vMotion, quicker workload provisioning, and less impactful hardware maintenance processes. This Solution Guide provides conceptual information and procedural instructions you can use to manage your VMware Server Virtualization environment. This guide contains the following chapters: Understanding Server Virtualization VMware Server Virtualization architecture options Understanding roles and responsibilities Datastore overhead requirements Migrate a workload to VMware Server Virtualization Log in to the Rackspace Technology Customer Portal Manage Rackspace Technology Customer Portal users Manage your virtual machine - - Maintain your environment -
https://docs.rackspace.com/docs/docs-vmware-server-virt/server-virt-handbook/
2021-10-16T02:45:37
CC-MAIN-2021-43
1634323583408.93
[]
docs.rackspace.com
Unity Integrations allow you to connect the following Unity Services in your development workflow to non-Unity tools: To add an Integration, you must be an Owner or Manager on the Project. For information on assigning roles, see the Members & Groups section of Managing your Organization. Unity Integrations supports the following non-Unity tools: Unity Personal limits users to one Integration for all events, but multiple Unity services can use that Integration. Unity Teams, Plus, and Pro users can have up to 100 Integrations. If your Unity Teams, Pro, or Plus subscription expires, Unity keeps all of your current Integrations, but automatically disables them. If this happens, you can still sign into the Unity Services Dashboard and enable a single Integration. To add an Integration to your workflow: To configure a webhook Integration, supply the following information: To configure Discord Integrations, Unity calls an app that uses the Discord API to register a webhook to a Discord channel. If you do not have a Discord server, see How do I create a server? in the Discord documentation. Important: You must enable webhooks on the account with which you are integrating. To complete the configuration: To configure Slack Integrations, Unity calls an app that uses the Slack API to register a webhook to a Slack channel. After creating a new Slack integration and configuring the events that you want to appear, you must authorize Unity to access your Slack server. If you do not have a Slack server, see Create a Slack workspace in the Slack documentation. To complete the configuration: Email Integrations notify team members when specified events for your Project occur. The Unity JIRA Integration allows you to interact with JIRA as follows: To configure a JIRA Integration for Collaborate or Cloud Diagnostics, you must supply the following information: After you have entered the information for your JIRA instance, you must supply the following additional information for Cloud Diagnostics: When you make changes in Collaborate, update JIRA-associated issues by referencing the issue key in your commit message. For example, “I fixed the crashes caused by ISS–42” adds publish details to issue “ISS–42”. The Unity Trello Integration has the following features: To configure a Trello Integration for Collaborate or Cloud Diagnostics: To manage existing Integrations: To edit or delete an existing Integration, click EDIT next to the Integration you want to modify. The types of edits you can make depend on the Integration: To enable or disable an existing Integration, click the Status toggle to enable or disable the Integration.
https://docs.unity3d.com/2019.3/Documentation/Manual/UnityIntegrations.html
2021-10-16T02:11:47
CC-MAIN-2021-43
1634323583408.93
[]
docs.unity3d.com
Connect Klaviyo with Bolt Connecting Bolt with Klaviyo requires your ecommerce platform to have the Klaviyo integration already installed and set up. 1. Complete Klaviyo Integration Setup Use the following guides by Klaviyo to set up your Klaviyo integration. 2. Set up Bolt Use the Platform Guides to add Bolt to your store. 3. Complete Bolt Connection Steps This step varies per ecommerce platform, however, it typically requires providing your Publishable Bolt API key and enabling Bolt Pay.
https://docs.bolt.com/developers/guides/integration-guides/klaviyo/
2021-10-16T02:53:10
CC-MAIN-2021-43
1634323583408.93
[]
docs.bolt.com
All Hallows' Dreams We've got a ghoulish event planned for Halloween! Introducing All Hallows' Dreams, a collaborative event between Media Molecule and... you! We'll be creating a multi-story, many-roomed Haunted House in Dreams, with each room created by a different Dreamer! You’re invited to submit your own spooky room, or carve a pumpkin for the pumpkin patch, starting today! Your contribution can range from truly spooky, to ghoulishly delightful or somewhere in between. Our official room and pumpkin templates are available NOW (8th September), just please submit them by the 27th September 2020. You can find both templates at the top of DreamShaping, or by searching the following terms in Dreams: - All Hallows’ Dreams Haunted Room Templates by j_plusb - All Hallows’ Dreams Pumpkin Template by j_plusb We've laid out the rules, guidelines and some helpful tips below - please have a read! Once you've finished and released your Room and/or Pumpkin, head to this indreams page and enter the indreams.me link for your creation. The submission page will let you know if you've missed some of the major criteria to enter the collaboration, like thermometer limits, which will hopefully help make sure your content can be included. We’ll be announcing more details as we get closer to Halloween and we hope you’ll join us for this fantastic haunting! Eligibility# Content must not contain content that is protected by any third party's copyright or other intellectual property rights. Submitted content must comply with the Code of Conduct set out in the PSN Terms of Service(opens in new tab), as well as the Dreams Community Guidelines. Room Template Guidelines# General# - There are three templates to look out for, all contained within our Haunted Room Templates collection. Each room has a different exit location, so choose the template which works best for you. - There are thermometer limits to stick to! You are limited to 15% graphics, 15% gameplay, and 5% audio for your room. - Before creating anything for the haunted house, make sure you're scoped into the room group (L1 + X). Everything you create must be inside this group! To make sure you've got this right, scope out of everything (repeat L1 + O until you're outside of all groups) and try moving the room. If everything inside the room moves with it, you're good! - When you've scoped into the room template, you will notice there are two powered-off Text Displayers – these denote the entrance and exit for your room. When you are happy with the layout of the room, feel free to remove them. - The sculptures already placed in the element are guides for the content of your room, and everything you place in the room must be kept inside those dimensions. You may not edit them! If you make your own walls, flooring, and ceiling (and they must be placed correctly - e.g. no gaps in the flooring, no visible gaps between the floor and the walls, and the walls and the ceiling) feel free to remove the guide sculptures. - Any objects in the template frozen with the Freeze tool must not be tinkered with! - Do not edit the decoration on the exterior of your room. - Finally, if what you've added inside the room exceeds or moves out of the dimensions of your room, it won't be allowed. Graphics# - Feel free to use any assets in your room – sculpt from scratch, grab stuff from the Dreamiverse – as long as it's spooky! - You may not remove the windows or door frames in the template. - Make sure the entrance and exit to your room are not blocked by objects that you've placed in the room. Feel free to block the windows, however! - Lighting is fine to use, just make sure that when it's placed inside your room it doesn't extend the dimensions of the room. For example, do not cast a light from the entrance, exit, or windows of your room, as this will affect the lighting of other rooms. - There is a limit of 6 Light gadgets per room. - You may not add the Sun & Sky or Grade & Effects gadgets. There is a powered-off Sun & Sky gadget in the Room Properties microchip found at the entrance of the template. When powered on this will show you what the final atmosphere of the event will look like. - You may not use Cameras or Camera Pointers in your room. - Fog is fine, just as long as it stays within the room and it's only used as a visual effect. Please don't make it difficult to navigate the room! - Only use the Camera Shaker if it improves a short effect you've made (like an explosion), and it's contained within your room. For example, don't shake the camera throughout the player's experience in your room! Gameplay# - Your final room must have a clear path, and if you plan on adding puzzles or objectives in your room then they must be optional. This is to ensure the player can navigate the floors easily! - We will be using a puppet based on the Blank Sliding Platformer Puppet, found in the Blank Puppet Collection inside the Gameplay Gear menu. The only difference in our puppet is that the Camera Tilt, found in the Camera Properties tab of its Controller Sensor, is 7° instead of 14°. - We encourage you to make your room VR friendly if you so wish! The puppet will be able to switch between third and first person views using the touch pad. For testing purposes, use a temporary Blank Sliding Platformer Puppet and tweak its Controller Sensor. In the Camera Properties tab, change the Camera Distance to 0.0m. Then when you test your room, you will see your creation in first person. Don't forget to delete the puppet before you submit, however! - The puppet will be tagged with "Puppet". Use this for any gadgets you need for locating the position of the player. - The puppet's head will be tagged with "Look at me" - use this for anything in the room that will look directly at the puppet's head, for instance. - No Moveable objects are allowed in your room, unless you have connected them to other non-Moveable objects in your room using Connectors. - You may use puppets in your room using the above workaround. If you place the puppet in the room template group, and then attach a String (from Gadgets/Connectors) from the wall of the room to the puppet itself, then the puppet will work just fine. If you choose to use a puppet in your room, however, ensure the puppet's Possession Mode is set to None by tweaking its Controller Sensor. - Any gadgets which make objects Moveable (and therefore have physics independent to the room template) are not allowed. - You may not use Checkpoints, Prize Bubbles, Doorways, Scores/Score Modifiers, Head/Camera Trackers, or Hand/Imp Trackers in your room. - You may animate objects in your room by using Keyframes, Action Recorders, and Record Possession. Note that if you use Record Possession, be sure to tweak the Controller Sensor of your puppet and set the Possession Mode to None. - You may use Text Displayers in your room, but they must be set to In Scene (Settings tab of the Text Displayer). Customise your text however you like, as long as the text or text box does not visibly protrude the dimensions of the room. - Make sure that the Text Displayer is not set to Always On Top! Face Camera is fine to use. Sound# - If you wish to add music or ambience to your room, please add it to the Music & Ambience microchip found around the centre of the template. The microchip contains another microchip – do not tweak this! - Make sure that the music or ambience you use in the room is not too loud for players - check out the ambience and music in the All Hallows' Dreams Haunted Room Templates collection for an idea of how the volume levels should be. - Feel free to use any non-ambient sounds in your room, but they must only occur within your room. You may use 3D Panning on your sounds to ensure they have a position within your room, but make sure that the zone does not extend the dimensions of your room. Pumpkin Template Guidelines# General# - Firstly, make sure you are scoped in to the pumpkin group, which contains the pumpkin sculpture, the pumpkin lid, and a microchip. - If you would like your Online ID to be shown over your pumpkin, open the microchip and enable the switch labelled Online ID, then change the text in the Text Displayer to your exact Online ID. Your Online ID will be shown in a scene in our event when the possessed puppet is near the pumpkin. - The thermometer limits for submission are 1% gameplay and 1% graphics. To check the current thermometer levels, switch to Assembly Mode, open Show/Hide, and enable the Thermometer setting. - You may not edit any of the frozen gadgets in the Pumpkin Template element. These are the Auto Guide next to the pumpkin, the Settings microchip inside the pumpkin group, and the Settings microchip inside the Levitation Settings microchip. - You may not edit any of the gadgets inside the microchip, except for the following: - Toggling the Online ID switch - Changing the text in the Text Displayer - Powering the Glow switch - Changing the colour of the Glow switch, thereby changing the colour of the pumpkin's glow - Powering the glow pulse switch - Changing the Glow Frequency slider inside the Glow Settings microchip - Powering the Levitation switch - Changing the value of the Levitation Height and Levitation Speed sliders inside the Levitation Settings microchip - You may not resize the pumpkin. - You may not edit either of the sculptures inside the pumpkin lid group. - You may not edit the invisible glowing sculpture inside the pumpkin group. - You may not add anything else to any of the groups in the template - just edit what's already there! Carving the Pumpkin# - To begin, make sure you carve on the front of the pumpkin. To make sure you're looking at the front of the pumpkin, scope out and make sure the microchip above the pumpkin is directly facing the camera, and the pumpkin is in the centre of the screen. - To carve the pumpkin sculpt, you may use Subtract with Shapes (press triangle while a shape is selected) to make holes in the pumpkin, and you may use Guides to make this easier (Surface Snap, Grid, Mirror, Kaleidoscope, etc). We suggest using the lightest orange colour available in the Colours menu in Sculpt Mode. - You may use different Flecks and Finishes when subtracting shapes from the sculpture. - The final pumpkin must only use 1% of the gameplay and graphics thermometers. - You may not reduce the sculpture detail of the pumpkin. - You may not move, edit, or delete the shapes already subtracted from the pumpkin which create its initial shape. Take care not to accidentally remove these while adding your sculpts. For reference, the pieces we have used for the pumpkin are: - 2 x thin kaleidoscoped donuts (Subtract) - the pumpkin's ribs - 2 x soft blend cubes (Subtract) - the top and bottom of the pumpkin - 1 x inverted cone (Subtract) - a hole for the pumpkin lid - 1 x sphere (Subtract) - the inside of the pumpkin - 1 x sphere (Add) - the shell of the pumpkin Decorating the Pumpkin# - Note that all rules in this section also apply to the sculptures found within the Pumpkin Lid group. - When tweaking the pumpkin sculpture, you may alter any of the outer and inner properties except for Glow, the Emit Light From Glow option, and the Cast Shadows option. - You may also tweak the Ruffle and Impasto sliders in the Fleck Properties section of the pumpkin sculpture's tweak menu. - You may not adjust the sliders in the Effects section of the pumpkin sculpture's tweak menu, or animate the pumpkin in any way. - You may not adjust any settings in the Physical Properties, Collision Labels, Labels & Ownership, and Audio Surface Type section of the pumpkin sculpture's tweak menu. - You may use Spraypaint on the pumpkin while in Sculpt Mode - any colour or pattern is fine! - You may not add Paint to the pumpkin. - You may use Coat Mode to apply any Finish you like to your pumpkin, except for Glow. - You may use any of the tools in Style Mode except Looseness. Extra Touches# - To add a glow effect from the inside of your pumpkin, firstly enable the switch with the sun symbol. You can change the colour of the glow by tweaking and changing the colour of the switch. - If you would like the glow of your pumpkin to pulsate, enable the switch with the sun and wave symbols. You can change the rate at which the glow pulsates by opening the microchip to the right of this switch and changing the value of the slider inside. - Finally, to make the pumpkin levitate, enable the switch with the up-down arrow symbols. You can alter the maximum height of the pumpkin's levitation using the top slider, and you can change the speed at which it levitates using the bottom slider. Happy spooky creating! We can't wait to see what you Dream up! The Dreams User Guide is a work-in-progress. Keep an eye out for updates as we add more learning resources and articles over time.
https://docs.indreams.me/en-US/community/news/all-hallows-dreams
2021-10-16T02:11:47
CC-MAIN-2021-43
1634323583408.93
[]
docs.indreams.me
New to Telerik Reporting? Download free 30-day trial How to Create OHLC Chart In this article we will show you how to create an OHLC chart using the Graph item. Create an OHLC chart using the Graph Wizard The Graph Wizard provides a fast and efficient way to create complex charts. The wizard starts every time you start creating a new graph in Standalone Report Designer or Visual Studio Report Designer. In this scenario we will use the Standalone Report Designer, but the steps are applicable to both designers. Start the designer and open or create a new report. Select the report section in which the Graph item will be placed. From the Insert tab find Charts section and select Scatter menu. Its Financial group has two items - Candlestick and Bar, which provide the same functionality, but produce different series markers. In this how-to article we'll create a series with candlestick markers. In a real-world application the data would be retrieved from a database, but here we'll use some sample data as a CSV data source. Create a new CSV data source and paste the following text as content: date,open,high,low,close 03.07.2017,18.2,18.6,18.1,18.4 04.07.2017,18.3,18.6,18.1,18.4 05.07.2017,18.5,19,18.3,18.7 06.07.2017,18.9,19,18.2,18.3 07.07.2017,18.3,18.9,18.3,18.7 10.07.2017,18.8,19.2,18.3,18.4 11.07.2017,18.2,19.2,18.1,18.8 12.07.2017,18.3,18.6,18.2,18.6 13.07.2017,18.4,19.4,18.3,18.7 Set the field separator to comma and check the The CSV has headers checkbox. Define the column types and preview the data using the Parse CSV... button to make sure the data source is set up correctly. In the Arrange graph fields page drag the date column to Categories box. Drag the open, high, low and close fields to their respective boxes in the bottom right corner. If you click the Set default datapoint styling checkbox, the wizard will add conditional formatting rules that will change the datapoint color depending on the open-close values. You can click the Finish button or setup additional properties of the DateTime scale and the finish the wizard. When you close the wizard, the newly created chart will appear. Format the axis labels, set their angle and add additional styling if necessary. Your chart should look like the one below: The marker width is automatically calculated based on two properties. The first property is the X scale's SpacingSlotCount. The second property is series' MinMarkerWidth, which determines the minimum width of the marker, measured in Unit, i.e. 10px, 1cm, etc. By default the wizard sets the MinMarkerWidth to 6px. Note the gaps between 07.07.2017 and 10.07.2017 where there are no data points. Since you are using a DateTime field to create the categories, the wizard will automatically create a DateTimeScale scale for your X axis. The DateTime scale represents the values in a continuous domain - that's why you see axis labels for 08.07.2017 and 09.07.2017 even if there are no data for these dates. In case you do not want to have such gaps, you should use a CategoryScale for your X axis. In this article we explained how to use the Graph Wizard to create a candlestick series, presenting stock prices over a period of time.
https://docs.telerik.com/reporting/graphhowtocreateohlcchart
2021-10-16T04:04:21
CC-MAIN-2021-43
1634323583408.93
[array(['/reporting/media/OhlcChart.png', 'ohlc'], dtype=object)]
docs.telerik.com
Interchange 5.12 Administrator Guide Save PDF Selected topic Selected topic and subtopics All content Configure error processing Use the Configure error processor page to configure the way Interchange handles messages when it rejects them for any reason. This page enables you to react to message rejection by specifying the inline processing rule(s) or the script to apply to the rejected message. This feature does not automatically provide for error processing based on the reason for the message rejection. To apply message filtering based on the message rejection reason or time, you must add filtering to the inline processors or to the script that you apply. About error processing When Interchange rejects a message, it triggers error processing on the message at the moment of rejection. If you have set an error processor through the Configure error processor page, Interchange: Adds the metadata PreviousState to the rejected message. It sets the value of this metadata to the name of the previous message state. This enables you to know what the state of message processing was immediately before rejection. Records the reason for rejection in the metadata RejectedReason. Calls any inline processing rules that you have defined, in the order they are listed. Calls a post-processing script if you have defined one. Configure error processing From the menu bar, select Manage trading configuration, to open the Communities page. From the task list at the bottom of the page, select Configure error processing. Complete the fields: Enable this error processor – Select to enable the processor on message rejection. Perform inline processing – Select this option if you want to apply inline processing rules to a rejected message, then specify one or more inline processors: Description – Optionally enter a description of this inline processing rule. Class name – Enter the name of the Java class for implementing message error processing. Parameter – If there are any parameters required for implementing the Java class, enter them here. Execute a processing script – Select this option if you want to apply a processing script to rejected messages, then specify the script: Processing script – The name of the post-processing script. Click Save. Related topics Delivery settings Message Simulator Uniqueness in Interchange Message handler Post-processing configuration Collaboration settings Inbound message validation rules Sequential message delivery Use generic MMDs Related Links
https://docs.axway.com/bundle/Interchange_512_AdministratorsGuide_allOS_en_HTML5/page/Content/Interchange/Message_handling/te_config_error_processor.htm
2021-10-16T02:47:42
CC-MAIN-2021-43
1634323583408.93
[]
docs.axway.com
Power on storage resources for a NetApp HCI system Contributors Download PDF of this page You can power on NetApp HCI after the scheduled outage is complete. Steps Power on all the storage nodes using either the physical power button or the BMC. If using the BMC, log in to each node and navigate to Remote Control > Power Control > Power On Server. When all the storage nodes are online, log in to the NetApp HCI storage system and verify that all nodes are operational.
https://docs.netapp.com/us-en/hci/docs/task_nde_hci_power_on_storage.html
2021-10-16T04:06:35
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
Getting started This is the documentation for the Platformer Console. Platformer Console is a hassle-free enterprise-grade application platform to simplify and streamline your Kubernetes experience on Cloud, Hybrid, On-Premise or Edge Infrastructure. It is a one stop solution for managing your workloads, CI/CD in a multi cluster environment both in Cloud and On Premise. You can get started for free. No strings attached. Have a look at the Console UI Overview Switch to your Default Workspace. In Platformer Heirachy, You need to create an Organization to get started. You already have a default workspace created which can be used as an Organization. Read more about Organizations here. Easily set up a billing account with your credit card. Connect a Kubernetes Cluster You can connect any kubernetes cluster including local clusters such as minikube, microk8s and kind. -
https://docs.platformer.com/getting-started/
2021-10-16T02:21:01
CC-MAIN-2021-43
1634323583408.93
[]
docs.platformer.com
Using CNAMEs with Cloud Files containers Last updated on: 2021-04-05 Authored by: Rackspace Support A CNAME record links your Cloud Files container to a branded URL that you display instead of a CDN URL. For example, you might want to create a CNAME record that links your CDN URL, such as, to a shorter or branded URL, such as. Find your CDN URL Before you create the CNAME record, you need to know the CDN URL of the container. You can find this information in the Cloud Files section of the Cloud Control Panel by clicking the gear icon next to the container name and selecting View All Links. A popup dialog box with the CDN links to the container displays, as shown in the following image: Alternatively, you can request your container information by using’re using Rackspace Cloud DNS, you can create a CNAME record that uses the example values introduced at the beginning of this article, which is similar to the following image: If you want to edit or delete your CNAME record, you can do that by managing your DNS in your existing tool. Notes: - You are not charged extra for using CNAMEs. - At this time, CNAMEs do not work with SSL (or HTTPS) delivery. Use the Feedback tab to make any comments or ask questions. You can also start a conversation with us.
https://docs.rackspace.com/support/how-to/using-cnames-with-cloud-files-containers/
2021-10-16T02:37:48
CC-MAIN-2021-43
1634323583408.93
[array(['https://d33wubrfki0l68.cloudfront.net/28c0e34976bdae23981cee12db487de259233cde/547ce/how-to/using-cnames-with-cloud-files-containers/1080-2_2.png', None], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/adfeb20790b461d411a457825aaf46c9da6c1814/59e4a/how-to/using-cnames-with-cloud-files-containers/cnameadd.png', None], dtype=object) ]
docs.rackspace.com
If the DBS Control flag TimeDateWZControl is enabled, the following statements request the current time based on the current session time and time zone. SELECT CURRENT_TIME; SELECT CURRENT_TIME AT LOCAL; The result is similar to: Current Time(0) --------------- 15:53:34+00:00 When TimeDateWZControl flag is enabled and the session time zone is defined with a time zone string that follows different Daylight Saving Time and standard time zone displacements, then CURRENT_TIME AT LOCAL gives TIME value corresponding to Standard time zone rather than the Daylight Saving Time time zone. During the Daylight Saving Time period, CURRENT_TIME AT LOCAL follows Standard time zone, whereas CURRENT_TIME follows the Daylight Saving Time zone. During Standard time zone period, both CURRENT_TIME and CURRENT_TIME AT LOCAL follow the standard time zone.
https://docs.teradata.com/r/ITFo5Vgf23G87xplzLhWTA/BBhAU_HpCtMPTYy7OxmeFA
2021-10-16T03:49:37
CC-MAIN-2021-43
1634323583408.93
[]
docs.teradata.com
Configuring and embedding Frontend Plugins¶ The action should be called on by a frontend-plugin. We’ve already addressed the configuration of a simple frontend-plugin in chapter 4 in the section Configuring the plugin. For the purpose of our example a rudimentary plugin is enough. To place a plugin from the backend on a page, two steps are necessary: The plugin must be registered as a content type (plugin) and its behavior must be configured. Both steps are resolved by two Extbase API-methods. These calls are located in two different files. In the file EXT:extension_key/Configuration/TCA/Overrides/tt_content.php you have to register every plugin as a content element with TYPO3 using the static method registerPlugin(). \TYPO3\CMS\Extbase\Utility\ExtensionUtility::registerPlugin( 'extension_key', 'List', 'The Inventory List' ); The method registerPlugin() expects three arguments. The first argument is the extension key ( extension_key in our example). This key is the same as the directory name of the extension. The second parameter is a freely selectable name of the plugin (a short, meaningful name in UpperCamelCase). The plugin name plays a significant role in the allocation of GET- and POST parameters to the appropriate plugin:[offer]=3. The third argument is the label by which the plugin appears in the list of plugins in the backend. Now that the plugin is made available in the backend, we can add a plugin with the name List to our example. For the second step we have to configure the behaviour of the plugin in the file EXT:example_extension/ext_localconf.php with the static method configurePlugin(). Beside the actions that have to be called on by the plugin, you also have to specify which content will be stored in cache. \TYPO3\CMS\Extbase\Utility\ExtensionUtility::configurePlugin( 'example_extension', 'List', [\Vendor\ExampleExtension\Controller\InventoryController::class => 'list,detail'], [\Vendor\ExampleExtension\Controller\InventoryController::class => 'detail'] ); The method expects 4 arguments. The first argument is, just like the one used in the registration process, the extension key. With the second argument, the plugin name, Extbase can assign the configuration to the appropriate plugin. The third argument is an array which contains all controller-action combinations which the plugin is authorized to execute. The specification [\Vendor\ExampleExtension\Controller\InventoryController::class => 'list,detail'] authorizes the plugin to call the action methods listAction() and detailAction() in \MyVendor\ExampleExtension\Controller\InventoryController. Important While all action methods in the controllers need the suffix Action, the list of actions, defined with configurePlugin, do not. The fourth, optional argument is also an array which is composed as the previous one. This one however contains the Controller-Action combinations which are _not_ stored in cache. These are especially those actions that issue a form. The methods createAction() or the updateAction() shouldn’t be explicitly used here, because they don’t produce any result code that must be stored in cache.
https://docs.typo3.org/m/typo3/book-extbasefluid/10.4/en-us/7-Controllers/2-Configuring-and-embedding-Frontend-Plugins.html
2021-10-16T01:59:50
CC-MAIN-2021-43
1634323583408.93
[]
docs.typo3.org
Outdated TYPO3 Version This documentation refers to an outdated TYPO3 version - either select a supported version or make sure to use a TYPO3 Extended Long Term Support (ELTS) version to continue getting security updates. More information about ELTS Rich. Important Don’t forget to enable Rich Text Editor in the back end, in User Settings -> Edit and Advanced functions, check “Enable Rich Text Editor”, if not already done. Get a full options RTE¶ Origin: Documentation Issue #94 Tested for 7.6, probably valid since 7.0 The doc only gives one example for rte using 'defaultExtras' => 'richtext[]'. Note that it is possible to get a “full options RTE” by writing 'defaultExtras' => 'richtext[*]'. It would be very useful to describe in more detail more forms that can be used here. Fix wrong wrong file links in the frontend¶ Origin: Documentation Issue #94 Tested for 7.6, probably valid since 7.0 Symptom: The RTE renders file links in the frontend like file:1234. Cure: This could be fixed by writing 'defaultExtras' => 'richtext[]:rte_transform[mode=ts_links]'.
https://docs.typo3.org/m/typo3/reference-coreapi/7.6/en-us/Rte/InTheBackend/Index.html
2021-10-16T03:22:25
CC-MAIN-2021-43
1634323583408.93
[array(['../../_images/RteBackend.png', 'A RTE in the TYPO3 BE'], dtype=object) ]
docs.typo3.org
Chapter 10 Developing Lifecycle Listeners. The following sections describe how to create and use a lifecycle module: Server Life Cycle Events A lifecycle module listens for and performs its tasks in response to the following events in the server life cycle: - During the INIT_EVENT, the server reads the configuration, initializes built-in subsystems (such as security and logging services), and creates the containers. - During the STARTUP_EVENT, the server loads and initializes deployed applications. - During the READY_EVENT, the server is ready to service requests. - During the SHUTDOWN_EVENT, the server destroys loaded applications and stops. - During the TERMINATION_EVENT, the server closes the containers, the built-in subsystems, and the server runtime environment. These events are defined in the LifecycleEvent class. The lifecycle modules that listen for these events implement the LifecycleListener interface and are configured in the domain.xml file.: package com.sun.appserv.server; import java.util.Properties; /** * LifecycleListenerImpl is a dummy implementation for the LifecycleListener * interface.This implementaion stubs out various lifecycle interface methods. */ public class LifecycleListenerImpl implements LifecycleListener { /** receive a server lifecycle event * @param event associated event * @throws <code>ServerLifecycleException</code> for exceptional condition. * * Configure this module as a lifecycle-module in domain.xml: * * <applications> * <lifecycle-module name="test" * class-name="com.sun.appserv.server.LifecycleListenerImpl" * * <property name="foo" value="fooval"/> * </lifecycle-module> * </applications> * * Set<code>is-failure-fatal</code>in domain.xml to <code>true</code> for * fatal conditions. */ public void handleEvent(LifecycleEvent event) throws ServerLifecycleException { LifecycleEventContext context = event.getLifecycleEventContext(); context.log("got event" + event.getEventType() + " event data: " + event.getData()); Properties props; if (LifecycleEvent.INIT_EVENT == event.getEventType()) { context.log("LifecycleListener: INIT_EVENT"); props = (Properties) event.getData(); // handle INIT_EVENT return; } if (LifecycleEvent.STARTUP_EVENT == event.getEventType()) { context.log("LifecycleListener: STARTUP_EVENT"); // handle STARTUP_EVENT return; } if (LifecycleEvent.READY_EVENT == event.getEventType()) { context.log("LifecycleListener: READY_EVENT"); // handle READY_EVENT return; } if (LifecycleEvent.SHUTDOWN_EVENT== event.getEventType()) { context.log("LifecycleListener: SHUTDOWN_EVENT"); // handle SHUTDOWN_EVENT return; } if (LifecycleEvent.TERMINATION_EVENT == event.getEventType()) { context.log("LifecycleListener: TERMINATE_EVENT"); // handle TERMINATION_EVENT return; } } } The LifecycleEvent Class The com.sun.appserv.server.LifecycleEvent class defines a server life cycle event. The following methods are associated with the event: This method returns the event type, which is INIT_EVENT, STARTUP_EVENT, READY_EVENT, SHUTDOWN_EVENT, or TERMINATION_EVENT. A LifecycleEvent instance is passed to the LifecycleListener.handleEvent method. The Server Lifecycle Event Context The com.sun.appserv.server.LifecycleEventContext interface exposes runtime information about the server. The lifecycle event context is created when the LifecycleEvent class is instantiated at server initialization. The LifecycleEventContext interface defines these methods: This method returns the initial JNDI naming context. The naming environment for lifecycle modules is installed during the STARTUP_EVENT. A lifecycle module can look up any resource defined in the domain.xml file by its jndi-name attribute after the STARTUP_EVENT is complete. If a lifecycle module needs to look up resources, it can do so in the READY_EVENT. It can use the getInitialContext() method to get the initial context to which all the resources are bound. This method writes the specified message to the server log file. The message parameter is a String specifying the text to be written to the log file. This method writes an explanatory message and a stack trace for a given Throwable exception to the server log file. The message parameter is a String that describes the error or exception. The throwable parameter is the Throwable error or exception. Assembling and Deploying a Lifecycle Module You assemble a lifecycle module as described in “Assembling a Lifecycle Module” on page 98. You deploy a lifecycle module as described in “Deploying a Lifecycle Module” on page 111. During lifecycle module deployment, a lifecycle-module element is created in the domain.xml file. You can edit this file to change its configuration. The property subelement allows you to specify input parameters. For example: <lifecycle-module <description>custom startup module to do my tasks</description> <property name="rmiServer" value="acme1:7070" /> <property name="timeout" value="30" /> </lifecycle-module> Note that if is-failure-fatal is set to true (the default is false), lifecycle module failure prevents server initialization or startup, but not shutdown or termination. For more information about the domain.xml file, see the Sun Java System Application Server Reference. After you deploy a lifecycle module, you must restart the server to activate it. The server instantiates it and registers it as a lifecycle event listener at server initialization. Considerations for Lifecycle Modules The resources allocated during initialization or startup should be freed during shutdown or termination. The lifecycle module classes are called synchronously from the main server thread, therefore it is important to ensure that these classes don’t block the server. Lifecycle modules may create threads if appropriate, but these threads must be stopped in the shutdown and termination phases. The LifeCycleModule Classloader is the parent classloader for lifecycle modules. Each lifecycle module’s classpath in domain.xml is used to construct its classloader. All the support classes needed by a lifecycle module must be available to the LifeCycleModule Classloader or its parent, the Connector Classloader. You must ensure that the server.policy file is appropriately set up, or a lifecycle module trying to perform a System.exec() may cause a security access violation. For details, see “The server.policy File” on page 60. The configured properties for a lifecycle module are passed as properties in the INIT_EVENT. The JNDI naming context is not available in the INIT_EVENT. If a lifecycle module requires the naming context, it can get this in the STARTUP_EVENT, READY_EVENT, or SHUTDOWN_EVENT.
http://docs.sun.com/source/817-6087/dglfcycl.html
2009-07-04T13:03:31
crawl-002
crawl-002-021
[]
docs.sun.com
Configuring IPFC on the host system, describes host recognition of IPFC devices and implementation of IP over FC in a SAN. The IPFC driver is based on RFC 2625 and it allows IP traffic to run over FC. This chapter contains the following topics: The following table shows the supported features available for IPFC. The following restrictions apply: This section explains how to configure the desired host system for IPFC. It includes the procedures to determine the port instance and to plumb an IPFC instance. 1. Determine the HBA PCI adapter slot and the I/O board PCI slot. You need this information to perform the calculation in Step 2. For example, assume you have an array with an HBA card located in PCI adapter slot 5, and the PCI adapter is in slot 1 of the I/O board. 2. Determine the instance number. a. Use an editor to search for the fp driver binding name in the /etc/path_to_inst file. Entries have fp on the line. b. Narrow the search by using the I/O board and slot information from Step 1. i. Multiply the PCI adapter slot number by the number of adapter ports. For example, if the HBA has two ports, multiply by 2. Using the array with an HBA in the PCI adapter slot 5, multiply 5 by 2 to get 10. ii. Add the PCI adapter I/O board slot number to the number derived in Step i. Using an HBA in PCI adapter slot 5 and PCI slot 1 of the I/O board, add 1 to 10 for a sum of 11. iii. Convert the number derived in Step ii to hexadecimal. The number 11 converts to b in hexadecimal. iv. Search for the fp entry with pci@hex where hex is the number you derived in Step iii. The following Table 1-2, shows a single FC network adapter device path and describes the elements of the device path. PCI Single FC Network Adapter Device Path. 3. Manually plumb each FP instance. Use the ifconfig <interface_number> plumb command. In this example, the value of <interface_number> is fcip7. When the command is successful, a message appears on both the console and in the messages file. For example: Each FP instance on the system has an entry in /dev/fc. If HBAs have been removed, some stale links might exist. Use this procedure to load and plumb IPFC. 1. For each entry in /dev/fc, issue a luxadm -e dump_map command to view all the devices that are visible through that HBA port: 2.. 3. List the physical path of the originating HBA port from which you can see the destination HBA port, where originating-hba-link is a variable for the link determined in Step 2. For example, here 0 is the number for the originating-hba-link: 4. Search the physical path identified in Step 3. You must remove the leading ../../devices from the path name output. For example 5. Determine the fp instance for the originating HBA port from the output of the command in Step 4. The instance number precedes "fp" in the output. In the following example output, the instance number is 0. 6. Use the instance number from Step 5 to load IPFC and plumb the IPFC interface. In this example, the instance is 0.. 1.page. 2. Use the command ifconfig -a to verify. 1. Manually create a /etc/hostname.interface file with a text editor so it contains a single line that identifies the host name or interface IP address. 2. Use a text editor to make any additional entries to the /etc/inet/hosts file. The Solaris 10 code shows an example of an etc/inet/host file. 3. Edit the /etc/nsswitch.conf file so that all un-commented entries have the word files before any other name service. The /etc/nsswitch.conf specifies which name service to use for a particular machine. The following code shows an example of an /etc/nsswitch.conf file.
http://docs.sun.com/source/819-0139/ch_6_config_IPFC_SAN_devices.html
2009-07-04T16:40:15
crawl-002
crawl-002-021
[]
docs.sun.com
Accumulator manager manual functions While the Accumulator Manager’s functions are enabled primarily through automated processes with external systems during claims adjudication, the accumulator manager can also accommodate a number of functions performed manually. Those functions include creating new accumulators, updating existing accumulators, and deleting accumulators and their associated transaction details.Creating a new accumulator - From the Accumulators screen, click Create accumulator in the top right corner of the screen. The New accumulator screen opens in a new tab. Enter the data into the fields (required fields are indicated with an asterisk *) and click Next. The new accumulator record appears. All user-defined data is displayed in the Accumulator information slot, and all auto-generated data in the Audit slot. To edit the new accumulator, click Previous, which takes you back to the accumulator form populated with the most recent information. From here you can make changes to any of the fields. When completed, finalize the new accumulator by clicking Create. Validation is displayed with the successful creation of the new accumulator:Updating an accumulator You can manually update or edit an accumulator in the Accumulator Manager portal by first selecting an existing accumulator from the Accumulators list. - From the Accumulator screen, click Edit button in the top right corner of the page. The original screen containing editable accumulator fields is displayed. You can edit all user-defined fields in the accumulator record. - A free text note may be entered in the Comments text box. - Click Save to display the updated accumulator record including all header and detail data. To add a transaction to an accumulator record, click +Add transaction from the Accumulator screen. The Add transaction modal is displayed. When you have completed the required fields and populated any pertinent non-required fields, click Add. The new transaction is displayed in the Transaction history slot. Additional transactions may be added. Each new transaction is displayed on the top row.Editing a transaction To edit a transaction, click the gear icon to the right of the corresponding transaction. This action will display the single transaction modal, from which the fields may be edited. When complete, click on the Update button.Deleting a transaction To delete a transaction, click on the trash can icon to the right of the corresponding transaction in the Transaction history slot. You will be prompted to validate that you are deleting the transaction. Click Continue. The transaction that you are deleting will not be erased from the Transaction history slot. The action will be displayed as a decrement, and the accumulator amounts will be re-calculated and updated in the details section. In addition, a corresponding note will be included in the Audit history.Deleting an accumulator You may delete an entire accumulator from the Accumulator manager portal by doing the following: - On the Accumulators screen, click on the trash can icon on the right-hand side of the corresponding row. - You will be prompted to validate the deletion of the accumulator. Click Delete. - A message is displayed indicating that you have successfully deleted the accumulator. You can update an accumulator by doing the following: - Open an existing accumulator - Click Edit in the upper right - Update the Revised Max field and click Save. Note: The Revised max cannot be less than the Benefit Max. Previous topic Viewing an accumulator Next topic Exchanging data with external systems
https://docs.pega.com/pega-smart-claims-engine-user-guide/86/accumulator-manager-manual-functions
2022-05-16T22:58:56
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
Simplicity Studio® 5 User's Guide# Simplicity Studio is the core development environment designed to support the Silicon Labs IoT portfolio of system-on-chips (SoCs) and modules. It provides access to target device-specific web and SDK resources; software and hardware configuration tools; an integrated development environment (IDE) featuring industry-standard code editors, compilers and debuggers; and advanced, value-add tools for network analysis and code-correlated energy profiling. Simplicity Studio is designed to simplify developer workflow. It intelligently recognizes all evaluation and development kit parts released by Silicon Labs and, based on the selected development target, presents appropriate software development kits (SDKs) and other development resources. Simplicity Studio 5 (SSv5) focuses on developer experience, leveraging feedback from customers, employees and competitive reviews. Developers of all experience levels will benefit from an optimized workflow that supports them through the development journey and produces quicker project progression and device configuration. The Simplicity Studio 5 User's Guide pages are organized into the following groups. Getting Started describes how to install SSv5 and the relevant development resources, and provides general overviews of using the SSv5 interface and of developing projects in SSv5. If you are new to SSv5, start here. About the Launcher is a reference guide to the features and functions available when you first open SSv5. This is a general reference, although some items may not be applicable to all devices. About the Simplicity IDE is a reference guide to the features and functions in the Simplicity integrated development environment. This is a general reference, although some items may not be applicable to all devices. Developing for 32-Bit Devices provides instructions and reference material for 32-bit device development in the two development environments: Project Configurator and AppBuilder. Developing for 8-Bit Devices provides instructions and reference material for 8-bit device development using the Hardware Configurator. Building and Flashing describes how to compile and flash images to various device types. Testing and Debugging outlines tools and strategies for testing your applications on Silicon Labs devices. Using the Tools is a reference guide to the various tools packaged with SSv5. Not all tools are applicable to every development path. Their specific use is described in the individual Developing for pages.
https://docs.silabs.com/simplicity-studio-5-users-guide/5.3.2/ss-5-users-guide-overview/
2022-05-16T22:50:54
CC-MAIN-2022-21
1652662512249.16
[]
docs.silabs.com
Security - Password storage - Password length limits - Generated passwords for users created through integrated authentication - Restrict SSH key technologies and minimum length - Rate limits - Webhooks and insecure internal web services - Information exclusivity - Reset user password - Unlock a locked user - User File Uploads - How we manage the CRIME vulnerability - Enforce Two-factor authentication - Send email confirmation on sign-up - Security of running jobs - Proxying images - CI/CD variables - Token overview - Project Import decompressed archive size limits - Responding to security incidents Securing your GitLab installation Consider access control features like Sign up restrictions and Authentication options to harden your GitLab instance and minimize the risk of unwanted user account creation. Self-hosting GitLab customers and administrators are responsible for the security of their underlying hosts, and for keeping GitLab itself up to date. It is important to regularly patch GitLab, patch your operating system and its software, and harden your hosts in accordance with vendor guidance.
https://docs.gitlab.com/ee/security/index.html
2022-05-16T22:46:43
CC-MAIN-2022-21
1652662512249.16
[]
docs.gitlab.com
Dataset API note For all product information and Dataset API purchases, please see the Dataset API product page. This page here and the following subpage contain all technical information about the Dataset API. The APIs to read and transform environmental dataset are more complex than other APIs offered by meteoblue. They are designed to offer a high degree of flexibility to retrieve and manipulate data for a lot of use-cases. note This document assumes you are already familiar with the Dataset API Configurator. You should only consider this document if you plan to develop your own web interface or need to construct very complex data calls not supported by the web interface. The recommended way is to configure a call in the web-interfaces and then export the generated call definition using Output format -> API query. Commonly meteoblue APIs use simple URL GET parameters like This approach is unfortunately not sufficient to query datasets dynamically. Instead a HTTP JSON request body is used: { "units": { "temperature": "C", "velocity": "km/h", "length": "metric", "energy": "watts" }, "geometry": { "type": "MultiPoint", "coordinates": [[7.57327,47.558399,279]], // lon, lat, asl "locationNames": ["Basel"] }, "format": "json", "timeIntervals": [ "2019-01-01T+00:00/2019-12-31T+00:00" ], "queries": [{ "domain": "NEMSGLOBAL", "gapFillDomain": null, "timeResolution": "hourly", "codes": [{ "code": 157, "level": "180-0 mb above gnd" }] }] } This call can be executed with the command line tool curl: curl \ -L -H "Content-Type: application/json" \ -d '{"units":{"temperature":"C","velocity":"km/h","length":"metric","energy":"watts"},"geometry":{"type":"MultiPoint","coordinates":[[7.57327,47.558399,279]],"locationNames":["Basel"]},"format":"json","timeIntervals":["2019-01-01T+00:00/2019-12-31T+00:00"],"timeIntervalsAlignment":"none","queries":[{"domain":"NEMSGLOBAL","gapFillDomain":null,"timeResolution":"hourly","codes":[{"code":157,"level":"180-0 mb above gnd"}]}]}' \ " Many web-development tools like Insomnia REST client support using JSON bodies. Alternatively, the JSON query can be encoded into the URL. This will result in long URLs and quickly hit maximum URL length limits. All calls to meteoblue APIs require a valid API key. Please contact [email protected] for more information. More complex calls might also be declined to be executed directly and require the use of job queues. The next chapter will explain job queues in more detail. To use the dataset API with Python, we recommend to use the meteoblue-dataset-sdk Python module. This library simplifies access to the dataset API and transparently implements job queues and decoding of data using protobuf. JSON Query Structure The JSON body uses various structures and arrays that are nested to build complex queries with recursive transformations. All JSON attributes are case-sensitive and use camel-case names. As in the example above, the outer JSON structure contains properties like units, geometry, timeIntervals or queries. The following tables describe all properties and how they are integrated with other structures. Some properties address special use-cases that are not available in the web-interfaces. For completeness all API properties are documented in the next chapters. Per default API will return a data-series for each time-interval times the number of geometries. 10 elements in timeIntervals and 20 coordinates in geometries return 200 data-series. If oneTimeIntervalPerGeometry is set to true and a GeoJSON GeometryCollection is used, the first geometry will use the first time-interval, then the second geometry and the second time-interval and so on. This is used to return for each coordinate, different time-intervals. In the web-interfaces this used in Coordinates and time mode. An example call is available in the GeoJSON description below. If checkOnly is set to true, the API will only calculate how many data points must be processed and whether a job queue must be used. runOnJobQueue would then be required to submit the call to a job queue. More information can be found in the last chapter about job queues. Units If units are not set, the defaults are Celsius, km/h, metric and watts GeoJSON Geometry important Please make sure to provide all input coordinates in the correct order: "lon" -> "lat" (-> "asl") The geometry structure is based on GeoJSON, but extended to support features like geoname polygon id, location names and additional attributes. A geometry could also be of type GeometryCollection to select multiple geometries (this can be used in conjunction with oneTimeIntervalPerGeometry). Depending on the feature type different geometries can be used. Point and MultiPoint { "type": "Point", "coordinates": [8.6, 47.5, 351.1] // lon, lat, asl } { "type": "MultiPoint", "coordinates": [[8.6, 47.5,351.1], [8.55, 47.37, 429]], // lon, lat, asl "locationNames": ["Basel", "Zürich"] } Coordinates are defined as tuple of longitude, latitude and elevation above sea level. Elevation is optional and will be automatically resolved from an 80 m resolution digital elevation model (DEM). locationNames can be optionally specified and will be replicated in the output. The order of coordinates will be preserved in the output. Polygon and MultiPolygon { "type": "Polygon", "coordinates": [ [[7.5,47.5],[7.5,47.6],[7.7,47.6],[7.7,47.5],[7.5,47.5]] // lon, lat ] } { "type": "MultiPolygon", "coordinates": [ [[[8.0,47.4],[8.0,47.6],[8.2,47.6],[8.2,47.4],[8.0,47.4]]], // lon, lat [[[7.5,47.5],[7.5,47.6],[7.7,47.6],[7.7,47.5],[7.5,47.5]]] // lon, lat ], "excludeSeaPoints": true, "fallbackToNearestNeighbour": true } The first and last coordinate must be the same. Please make sure to supply a valid polygon without self-intersections. The optional Boolean parameter excludeSeaPoints can be set to true, to ignore grid-cells that are located on the sea. If no grid-cells are within the polygon, the result would be empty. If fallbackToNearestNeighbour is set to true, the result will select the nearest neighbour grid-cell instead. Geoname Polygon Administrative areas in the web-interfaces are based the geonames polygon database. To keep calls short and not always include the full GeoJSON polygon for each administrative area, the API can directly get a polygon from a database. Once the polygon is loaded from the database, be behavior is identical to a regular polygon API call. { "type": "GeonamePolygon", "geonameid": 2345235 } Multiple geoname polygons can also be selected in one call. Internally polygons get merged into a single polygon. If the transformation Aggregate all grid-cells would now be used, all grid-cells of both administrative areas would be aggregated to a single data-series. { "type": "MultiGeonamePolygon", "geonameids": [2345235, 312453] } Parameters excludeSeaPoints and fallbackToNearestNeighbour are also considered, if set. Geometry Collection Multiple geometries can also be processed in one call instead of calling the API multiple times. If the GeoJSON type GeometryCollection is used, the API will process one geometry after another. The previous MultiGeonamePolygon call could be split into a collection like: { "type": "GeometryCollection", "geometries": [ { "type": "GeonamePolygon", "geonameid": 2345235 }, { "type": "GeonamePolygon", "geonameid": 312453 } ] } It is important to notice, that for a GeometryCollection all transformation are applied individually. The transformation Aggregate all grid-cells will only aggregate grid-cell in one geometry of a geometry collection. This can be used to select multiple administrative areas in a country, use the transformation Aggregate all grid-cells and retrieve one index for each area individually. In the example above, two data-series would be returned. Alternatively, GeometryCollection with the parameter oneTimeIntervalPerGeometry allows you to select different time-intervals for each geometry. It is used in the web-interface for the coordinates and time selection mode. For the first coordinate, the first time-interval will be used, for the second coordinate the second time-interval will be used, and so on. { "oneTimeIntervalPerGeometry": true, "geometry": { "type": "GeometryCollection", "geometries": [ { "type": "Point", "coordinates": [8.6, 47.5, 351.1] } // lon, lat, asl { "type": "Point", "coordinates": [8.55, 47.37, 429] } // lon, lat, asl ] }, "timeIntervals": [ "2015-05-05T+00:00/2016-06-06T+00:00", "2015-05-03T+00:00/2016-06-01T+00:00" ] } Output Format The attribute format accepts the following values: json: Recommended JSON format (default, if not set) csv: CSV format for large amount of locations csvTimeOriented: CSV format for long time-ranges csvIrregular: CSV format for mixed time-intervals and locations xlsx: XLSX format for large amount of locations xlsxTimeOriented: XLSX format for long time-range xlsxIrregular: XLSX format for mixed time-intervals and locations highcharts: JSON output to create a highcharts graph highchartsHtml: HTML page that embeds the highcharts library and the chart geoJson: JSON output to create map with bullet points geoJsonHtml: HTML page that embeds a map library and the map json kml: KML format that only includes the grid cell coordinates netCDF: Recommended binary format for further scientific data analysis Detailed information about the structure of each format can be found here in the previous format chapter. Time Intervals Time intervals and timezones can be specified using the ISO8601 format. The timeIntervals attribute is an array of ISO8601 strings. Per default the web-interfaces generate time-intervals with a timezone offset, but without specifying the hour and minute. { "timeIntervals": [ "2015-05-01T+00:00/2015-05-02T+00:00", "2016-05-01T+00:00/2016-05-02T+00:00" ] } In the intervals above, 2 full days are selected. For hourly data, the API would return 48 hourly values for each time interval. In the API syntax time-intervals could be specified to select exactly 1 hour: { "timeIntervals": [ "2019-01-01T00:00+00:00/2019-01-01T01:00+00:00" ] } Datasets and Variables The selection of datasets and variables is specified in the attribute queries as an array to select multiple datasets. For each dataset, specified by the domain attribute, multiple weather variable codes can then be selected. In this example, three variables are selected from NEMSGLOBAL and than transformed with two transformations. In the same call, data can be selected from the dataset NEMS12 and transformed individually. { "queries": [ { "domain": "NEMSGLOBAL", "gapFillDomain": null, "timeResolution": "hourly", "codes": [ {"code": 11, "level": "2 m above gnd"}, {"code": 52, "level": "2 m above gnd"}, {"code": 157, "level": "180-0 mb above gnd"} ], "transformations": [ { "type": "valueIsAbove", "valueMin": 30, "returnClassification": "zeroOrOne" }, { "type": "aggregateTimeInterval", "aggregation": "mean" } ] }, { "domain": "NEMS12", "gapFillDomain": null, "codes": [ ... ], "transformations": [...] } ] } Attributes for the structure query: Notes: allowHistoryenables reads form the meteoblue archive storage. Forecasts are archived once a day and tend to be more consistent. allowForecastenables reads from up to date forecasts which reside on SSD and are updated more frequently. Data of the last days may change slightly. This applies only to datasets which offer forecasts. timeResolutionspecified the resolution to read. It can also be set to dailyalthough the dataset only offers hourly data to automatically calculate daily aggregations. Aggregations like monthlymust use transformations. In the future, some datasets may offer pre-computed monthly or yearly data directly. Once the dataset has been selected, multiple variables at different levels can be encoded into the call. The web-interfaces only use one variable per dataset for simplicity. The API is capable of selecting multiple variables per dataset at once. This could improve API call performance, because expensive spatial calculations are only performed once. Attributes for the structure code: Variable Codes The numeric codes to select a variable from a dataset originated from NOAA GRIB 1 codes, but have been extended to include more variables. A list of all weather variable codes at meteoblue is available as JSON API. Please note, that any individual dataset only supports a small fraction of the available codes. Transformations Within the query structure an array of transformations can be specified. All transformations are processed one after another, but also modify the behavior of others like extend time-intervals or spatial contexts. We recommend using web-interfaces to configure calls, but as a reference the API syntax for each transformation is documented below. For more details on each transformation consult the web-interfaces documentation. Temporal Transformations Syntax Aggregations to daily, monthly and yearly use an easy syntax. In this example 3 transformations are used with a 30-year temperature time-series: - Calculate the daily minimum - Use all daily minima and calculate the mean for a month. This is now the monthly mean of daily minimum temperatures. - From all the monthly means pick the coldest monthly value. The call now returns 30 values because 30 years are used as an input { "transformations": [ { "type": "aggregateDaily", "aggregation": "min" }, { "type": "aggregateMonthly", "aggregation": "mean" }, { "type": "aggregateYearly", "aggregation": "min" } ] } The following values are supported for the attribute aggregation: sum, min, max, mean, stddev sumIgnoreNaN, minIgnoreNaN, maxIgnoreNaN, meanIgnoreNaN p10, p25, p50, p75, p90 The transformations Aggregate daily by longitude and Aggregate each time-interval also just use the aggregation type: { "transformations": [ { "type": "aggregateDailyByLongitude", "aggregation": "mean" }, { "type": "aggregateTimeInterval", "aggregation": "mean" } ] } The transformation Aggregate by day and night additonally takes an attribute dailyNightly: daylightAndNighttime: Return 2 values per day. One for daytime and one for nighttime daylight: Only aggregate daylight hours nighttime: Only aggregate nighttime hours { "type": "aggregateHalfDaily", "dailyNightly": "daylightAndNighttime", "aggregation": "mean" } Note: To keep the documentation compact, the examples only include the minimum JSON syntax. Aggregate over a sliding time window requires a nTimesteps attributes which is an Integer for how many time-steps are used in the sliding windows aggregation. { "type": "timeLaggedAggregation", "aggregation": "mean", "nTimesteps": 3 } Aggregate to climate normals allows to select daily and hourly resolution with the attribute temporalResolution { "type": "aggregateNormals", "aggregation": "mean", "temporalResolution": "daily" } For temporal interpolations the transformation Interpolate temporal expects and temporalResolution attribute with the options: 15min, 10min, 5min and 1min { "type": "interpolateTemporal", "temporalResolution": "15min" } Value Filter Transformation Syntax The transformations to filter values based on a threshold, use a returnClassification to specify the return behavior: zeroOrOne zeroOrValue zeroOrDelta zeroOrOneAccumulated zeroOrValueAccumulated zeroOrDeltaAccumulated zeroOrConsecutiveCount { "type": "valueIsAbove", "valueMin": 30, "returnClassification": "zeroOrOne" }, { "type": "valueIsBelow", "valueMax": 10, "returnClassification": "zeroOrOne" }, { "type": "valueIsBetween", "valueMin": 10, "valueMax": 30, "returnClassification": "zeroOrOne" } The transformation Value limited to a range takes two integers to limit clip values to a certain range. { "type": "valueLimitRange", "valueMin": 5, "valueMax": 10 } Accumulate time-series to a running total takes no additional attributes. { "type": "accumulate" } Spatial Transformations Syntax The transformation Resample to a regular grid takes a floating-point gridResolution of greater than 0.001, options to control interpolation and aggregation and the behavior for the disjoint area of the grid and polygon. Spatial transformation calls only work for polygon calls and not for calls based on single coordinates. The attributes interpolationMethod support: linearinterpolation using triangulated irregular networks nearestneighbor interpolation Attribute spatialAggregation: mean, min, max: Return NaNs if one input value is NaN. meanIgnoreNaN, minIgnoreNaN, maxIgnoreNaN: Ignores NaNs if possible. The disjointArea of the polygon and the resampled grid can be discarded discard or kept keep. { "type": "spatialTransform", "gridResolution": 0.5, "interpolationMethod": "linear", "spatialAggregation": "mean", "disjointArea": "discard" } This transformation also offers an additional attribute geometry which can be set to a MultiPoint geometry to select individual grid-cells after a dataset has been resampled. The grid-cells are selected by a nearest neighbor search in the new regular grid. In the next example, a selected polygon would be gridded to 0.1° and afterwards 2 locations extracted. { "type": "spatialTransform", "gridResolution": 0.1, "interpolationMethod": "linear", "spatialAggregation": "mean", "geometry": { "type": "MultiPoint", "coordinates": [[7.57327,47.558399], [7.85222,47.995899]], // lon, lat "locationNames": ["Basel","Freiburg"] } } Combine Dataset Transformations Syntax With the transformation Combine the selected data-series the API syntax uses recursion to select another data-series. The attribute dataQuery is now using the same structure as described above. The attribute mathOperator supports the following modes: multiply, divide, add, substract maximum, minimum, mean equals, notEquals, greaterThanEquals, lessThanEquals { "type": "combineDataset", "mathOperator": "multiply", "dataQuery": { "domain": "ERA5", "gapFillDomain": null, "timeResolution": "hourly", "codes": [{"code": 75, "level": "high cld lay"}], "transformations": [...] } } To combine a dataset with a different resolution, resampling can also be used. The attributes accept the same values as explained above. { "type": "combineDatasetWithResampling", "mathOperator": "multiply", "interpolationMethod": "linear", "spatialAggregation": "mean", "dataQuery": { "domain": "GFS05", "gapFillDomain": null, "timeResolution": "3hourly", "codes": [{"code": 301, "level": "2 m above gnd"}] } } Aggregate all Grid Cells Syntax The transformation Aggregate all grid-cells aggregates all grid-cells based on a function. The aggregation function is using the same syntax as for temporal transformations. This transformation works for coordinate as well as polygon calls. For polygon calls, the centroid coordinate will be shown in the output. { "type": "spatialTotalAggregate", "aggregation": "mean" } For a weighted average, the transformation spatialTotalWeighted can be used and takes the weights from a data-series specified in dataQuery. { "type": "spatialTotalWeighted", "dataQuery": { "domain": "ERA5", "gapFillDomain": null, "timeResolution": "hourly", "codes": [{"code": 301,"level": "2 m above gnd"}] } } In case the weights originated from another dataset with a different grid, resampling can be used. interpolationMethod and spatialAggregation follow the same specifications as before. { "type": "spatialTotalWeightedWithResampling", "interpolationMethod": "linear", "spatialAggregation": "mean", "dataQuery": { "domain": "ERA5", "gapFillDomain": null, "timeResolution": "hourly", "codes": [{"code": 301,"level": "2 m above gnd"}] } } Mask out Grid Cells Syntax To set values to NaN based on filter criteria, the transformation maskOut uses a floating-point threshold attribute and a aboveOrBelow setting. The filter criteria are retrieved from another data-series which can be specified with dataQuery. The attribute aboveOrBelow supports (The naming is not consistent for historic reasons): above: Greater than condition below: Less than condition greaterThanEqualsGreater or equals than condition lessThanEqualsLess or equals than condition { "type": "maskOut", "aboveOrBelow": "greaterThanEquals", "threshold": 10.0, "dataQuery": { "domain": "NEMSGLOBAL", "gapFillDomain": null, "timeResolution": "hourly", "codes": [{"code": 256, "level": "sfc"}] } }, With resampling to match another grid: { "type": "maskOutWithResampling", "aboveOrBelow": "greaterThanEquals", "threshold": 10.0, "interpolationMethod": "linear", "spatialAggregation": "mean", "dataQuery": { "domain": "VHP", "gapFillDomain": null, "timeResolution": "daily", "codes": [{"code": 274, "level": "sfc"}] } } Downscale Grid Cells Syntax Activating this transformation for coordinate API calls, will enable linear downscale using 3 neighboring grid-cells. This is not available for polygon calls. { "type": "downscaleGridCell" } Sign Mechanism The meteoblue APIs support shared secrets to make API URLs temper proof or set an expire day. Because the query is submitted as a JSON POST body in an API call, the body content is not signed. If your API key requires signature, you have to calculate the MD5 sum of the POST body and set the URL GET parameter &post_body_md5=. A signed URL may look like this: Metadata API The Metadata API provides basic information about a dataset, the time of the last update of each dataset and differentiates between preliminary (first) and final run. In case of satellite data the preliminary run is usually available after a couple of hours. A second or final run will be published days or even weeks later with improved data quality. Because there could be changes in the data, meteoblue keeps track of these dates and includes them in the metadata API. Example API call for CHRIPS: { "name": "CHIRPS2", "temporalResolution": "daily", "region": "50S-50N", "spatialResolution": "5.0 km", "historyDataStart": "19810101T0000", "historyDataFinalRun": "20200731T2300", "source": "USGS & CHG", "sourceUrl": " } Fields: name: Name of the dataset e.g. NEMSGLOBAL or CHIRPS2 temporalResolution: Native temporal resolution. E.g. hourly or daily spatialResolution: Spatial resolution. E.g. "5 km", but could also be a range "4-30 km" historyDataStart: The first valid timestamp for API calls using archived historical data historyDataFinalRun: The last timestamp, that will not be modified anymore by future planed updates. E.g. for CHIRPS this date is a couple of weeks in the past. region: Extend of this dataset. E.g. global, central-asia or a latitude bound like 50S-50N source: Provider of this dataset sourceUrl: URL to the provider website Job Queues While regular API calls only take a couple of seconds, complex dataset calls can take a couple of minutes or even hours. HTTP APIs do not work well with long-running calls. This leads to timeouts on server and client-side. Before executing dataset calls, the API calculates the estimated run-time. If the approximated run-time exceeds a threshold, the API will return an error and the user must submit the call to the job queue system. After a job is completed, the result will be uploaded to an Amazon S3 web-storage and kept for 7 days. The job-queue result is identical to a regular dataset call. Implementing the schematics of job queues correctly needs special care. For Python meteoblue offer simple SDK to use the dataset API without having to care about job queues: meteoblue-dataset-sdk on GitHub. Jobs "belong" to a queue. Queues are associated with API keys by meteoblue and provisioned according to performance requirements. Multiple API keys can share the same queue. Each queue will be processed by multiple "workers" which are exclusively dedicated to one queue. Workers run on multiple physical servers and synchronize with a central job queue dispatcher. Each API key can submit up to 5 jobs in parallel to the job queue system. Additional jobs will be declined until the previous jobs are completed. This prevents single API users to over-utilize the queue system and starve resources from other applications. Current job queue limits are (only one needs to be fulfilled): - Data-points > 876'000: This is one year of hourly data for 100 locations. - Number of grid-cells > 5000: This is using an approximated number of grid-cells of potential effected grid-cells by using the polygon area and grid-resolution. This approximation is necessary to quickly calculate the number of grid-cells without performing expensive polygon/grid operations. - Spatial resampling transformation is used: This is independent of polygon size or resolution. Running Jobs on the Queue There are 2 options to determinate if a job must be executed on a job queue: - The API will return an error: {"error_message": "This job must be executed on a job-queue"} - The dataset query JSON syntax accepts a checkOnly parameter: {"checkOnly": true, "geometry": ..., "queries":...}. The response JSON contains {"requiresJobQueue": true} To start a job on the queue, the parameter runOnJobQueue must be set to true in the request POST JSON. An API key is only necessary to start a job, but status and result can be retrieved without an API key. curl " -H "Content-Type: application/json" -d '{"runOnJobQueue":true,"geometry":{"type":"MultiPoint","coordinates":[[7.57327,47.558399,279]],"locationNames":["Basel"]},"format":"csv","timeIntervals":["2017-01-01T+00:00/2017-12-31T+00:00"],"queries":[{"domain":"ERA5","gapFillDomain":null,"timeResolution":"hourly","codes":[{"code":75,"level":"high cld lay"}]}]}' If the API call is ok, the API responds with a JSON which contains the UUID of the newly submitted job. The status shows waiting. If a worker is available, the job will be started immediately. { "id": "6768BAC9-2446-4A9F-A2CD-A8CCAE070919", "queueTime": 1524063300, "apikey": "123456", "status": "waiting" } Calls to /queue/status/6768BAC9-2446... show after a couple of seconds running and finally finished { "id": "6768BAC9-2446-4A9F-A2CD-A8CCAE070919", "queueTime": 1524063300, "apikey": "123456", "status": "finished" } The result is uploaded to a central storage system and can be retrieved with: curl " Job States Jobs can have the following state: waiting: A job is queued but has not yet been started. It should start within a couple of seconds, unless the queues are highly utilized. running: The job is currently running. Each job is checked every couple of seconds if it is actually running. finished: Job successfully completed. The result can now be retrieved at deleted: Job has been cancelled manually by the user error: An error occurred To cancel a waiting or running job, send a HTTP DELETE request to /queue/delete/6768BAC9-2446-4A9F-A2CD-A8CCAE070919. If the job is already finished, this call we delete the stored result on AWS S3. To retrieve the JSON call of a queued or running job: /queue/jobquery/B2D49CA1-33DB-4506-ACC8-3FDEDE794E34. Errors If an error occurs while executing a job, the error message is set in the job status. Sometimes a job will fail with the error message Job failed for unknown reasons. In this case the application executing job, most likely ran out of memory. Generating CSV text output with large polygons or long time-intervals quickly requires many gigabytes of memory. Try to use netCDF as output format or use smaller polygons, fewer coordinates or shorter time-intervals. { "status": "error", "error_message": "Job failed for unknown reasons" } This error is not limited to out-of-memory issues but could also indicate to a programming error which lead to a crash of the application. In case the error persists even with a smaller geographical and temporal extent, please contact us. API Endpoint: - Status: - Delete: (only HTTP DELETE) - Result: - Query JSON:
https://docs.meteoblue.com/en/weather-apis/dataset-api/dataset-api
2022-05-16T23:00:03
CC-MAIN-2022-21
1652662512249.16
[]
docs.meteoblue.com
Workitems Contents Workitems, called Open Media by Genesys Digital channels, are documents that might be directed to you for handling. They include many media types, such as faxes or PDFs, that you might have to access while you are handling interactions of another type, such as email. The Workitem interaction window enables you to access all of the information necessary to handle a workitem. How do I handle a workitem? A workitem interaction is the Workspace Agent Desktop interface that lets you handle a workitem document. To start receiving workitem interactions, set your status on the Workitem channel to Ready. When you receive a workitem it is either answered automatically, or a pop-up notification is displayed on your desktop to enable you to accept the workitem. When you accept a workitem, the Workitem interaction view opens on your Agent Desktop. It includes all the controls, information about the email (Case Information), and Disposition and Note tabs where you can add information about the document. There might also be tabs on the right side of the view that contain contact information, history, and other features. The typical actions you can take with a workitem are: - Click Transfer () to transfer the workitem to another agent. - Review the associated case data. - Add a Note. - Add a Disposition code. - Mark it as done. - Move it to your in-progress workbin to work on later. Watch the video to see what happens when a workitem arrives at a workstation. How can I get help with a workitem? To get more information or get help with a workitem, try one of the following actions: - Check the associated case data for information that might help you. - Call the contact, if your organization has this feature and the contact's number is available in your contacts list. - Click Consult () to start a consultation with a team member. If you want to perform a warm transfer or a two-step transfer, click Consult to talk to the consultation target and then click Transfer to transfer the call. Transferring a Workitem To transfer a workitem that you have accepted, click Transfer (). Team Communicator is displayed. Choose a transfer target. The target must be set up to receive interactions of the type of the workitem. Select <workitem type> Transfer. If the target accepts the interaction, the interaction window on your desktop closes. If the target rejects the interaction, the interaction window reopens on your desktop and displays an error that informs you that the target has rejected the interaction.
https://all.docs.genesys.com/PEC-AD/Current/Agent/ADWorkitems
2022-05-16T21:23:19
CC-MAIN-2022-21
1652662512249.16
[]
all.docs.genesys.com
Access the Reporting Services WMI Provider The Reporting Services WMI provider exposes two WMI classes for administration of Native mode report server instances through scripting: Important Starting with the SQL Server 2012 (11.x) release, the WMI provider is supported for only native mode report servers. SharePoint mode report servers can be managed with SharePoint Central Administration pages and PowerShell scripts.. Determine the Instance Name in the Namespace String: PS C:\windows\system32> Get-WmiObject -namespace root\Microsoft\SqlServer\ReportServer -class __Namespace -ComputerName hostname | select Name Access the WMI Classes Using PowerShell To access the WMI classes, run the following command: PS C:\windows\system32> Get-WmiObject -namespace <namespacename> -class <classname> -ComputerName <hostname> For example, to access the MSReportServer_ConfigurationSetting class on the default report server instance of the host myrshost, run the following command. The default report server instance must be installed on myrshost for this command to succeed. PS C:\windows\system32> Get-WmiObject -namespace "root\Microsoft\SqlServer\ReportServer\RS_MSSQLSERER\v11\Admin" -class MSReportServer_ConfigurationSetting -ComputerName myrshost: PS C:\windows\system32> Get-WmiObject -namespace "root\Microsoft\SqlServer\ReportServer\RS_MSSQLServer\v13\Admin" -class MSReportServer_ConfigurationSetting -ComputerName myrshost -filter "InstanceName='MSSQLSERVER'" Query the Available Methods and Properties To see what methods and properties are available in one of the Reporting Services WMI classes, pipe the results from Get-WmiObject to the Get-Member command. For example: PS C:\windows\system32> Get-WmiObject -namespace "root\Microsoft\SqlServer\ReportServer\RS_MSSQLServer\v13\Admin" -class MSReportServer_ConfigurationSetting -ComputerName myrshost | Get-Member Use a WMI Method or Property: PS C:\windows\system32> $rsconfig = Get-WmiObject -namespace "root\Microsoft\SqlServer\ReportServer\RS_MSSQLServer\v13\Admin" -class MSReportServer_ConfigurationSetting -ComputerName myrshost -filter "InstanceName='SHAREPOINT'" PS C:\windows\system32> $rsconfig.GetAdminSiteUrl() See Also Reporting Services WMI Provider Library Reference (SSRS) RsReportServer.config Configuration File
https://docs.microsoft.com/en-us/sql/reporting-services/tools/access-the-reporting-services-wmi-provider?view=sql-server-2017
2022-05-16T23:03:47
CC-MAIN-2022-21
1652662512249.16
[]
docs.microsoft.com
Conflict Resolution: PHP For reasons explained in the Introduction to conflict resolution, we strongly recommend adopting a conflict resolution strategy that requires applications to resolve siblings according to use-case-specific criteria. Here, we’ll provide a brief guide to conflict resolution using the official Riak PHP client. How the PHP Client Handles Conflict Resolution Every \Riak\Riak\Object command returns a \Riak\Riak\Command\Object\Response object, which provides what is needed to handle object conflicts. If siblings exist and have been returned from the server within the response body, they will be available within the response object. See below: $response = (new \Riak\Riak\Command\Builder\FetchObject($riak)) ->buildLocation('conflicted_key', 'bucket_name', 'bucket_type') ->build() ->execute(); echo $response->getStatusCode(); // 300 echo $response->hasSiblings(); // 1 echo $response->getSiblings(); // \Riak\Riak. class User { public $username; public $friends; public function __construct($username, array $friends = []) { $this->username = $username; $this->friends = $friends; } public function __toString() { return json_encode([ 'username' => $this->username, 'friends' => $this->friends, 'friends_count' => count($this->friends) ]); } } Here’s an example of instantiating a new User object: $bashobunny = new User('bashobunny', ['fred', 'barney']);: $response = (new \Riak\Riak\Command\Builder\FetchObject($riak)) ->buildLocation('bashobunny', 'users', 'siblings') ->build() ->execute(); echo $response->hasSiblings(); // 1 If we get true, then there are siblings. So what do we do in that case? At this point, we need to write a function that resolves the list of siblings, i.e. reduces the $response->getSiblings() array down to one member. In our case, we need a function that takes a Riak response object as its argument, applies some logic to the list of values contained in the siblings property of the object, and returns a single value. For our example use case here, we’ll return the sibling with the longest friends list: use \Riak\Riak; use \Riak\Riak\Command; function longest_friends_list_resolver(Command\Object\Response $response) { if ($response->hasSiblings()) { $siblings = $response->getSiblings(); $max_key = 0; foreach ($siblings as $key => $sibling) { if ($sibling->getData()['friends_count'] > $siblings[$max_key]->getData()['friends_count']) { $max_key = $key; } } } return $siblings[$max_key]; } We can then embed this function into a more general function for fetching objects from the users bucket: function fetch_user_by_username($username, Riak $riak) { $response = (new Command\Builder\FetchObject($riak)) ->buildLocation($username, 'users', 'siblings') ->build() ->execute(); return longest_friends_list_resolver($response); } bashobunny = fetch_user_by_username('bashobunny', $riak); PHP. The drawback to this approach is that it’s more or less inevitable that a user will remove a friend from their friends list, and then.
https://docs.riak.com/riak/kv/2.2.0/developing/usage/conflict-resolution/php/index.html
2022-05-16T22:05:42
CC-MAIN-2022-21
1652662512249.16
[]
docs.riak.com
Writing documentation Documentation comes in two parts: docstrings and user-guide documentation. Docstrings Written at the start of a function, they give essential information about how it should be used, such as which arguments can be passed to it and what the syntax should be. The docstrings need to follow the numpy specification, as shown in this example. As a general rule, any code that is part of the public API (i.e. any function or class that an end-user might access) should have a clear and comprehensive docstring explaining how to use it. Private methods that are never intended to be exposed to the end-user (usually a function or class starting with an underscore) should still be documented to the extent that future developers can understand what the function does. You can check your docstrings follow the convention by using the flake8-docstrings extension, like this: # If not already installed, you need flake8 and flake8-docstrings pip install flake8 flake8-docstrings # Run flake8 on your file flake8 /path/to/your/file.py # Example output /path/to/your/file.py:46:1: D103 Missing docstring in public function /path/to/your/file.py:59:1: D205 1 blank line required between summary line and description User-guide documentation A description of the functionality of the code and how to use it with examples and links to the relevant code. When writing both the docstrings and user guide documentation, it is useful to have some data which the users can use themselves. Artificial datasets for this purpose can be found in hyperspy.datasets.artificial_data. Build the documentation To check the output of what you wrote, you can build the documentation by running the make command in the hyperspy/doc directory. For example make html will build the whole documentation in html format. See the make command documentation for more details. To install the documentation dependencies, run either $ conda install hyperspy-dev or $ pip install hyperspy[build-doc] When writing documentation, the Python package sphobjinv can be useful for writing cross-references. For example, to find how to write a cross-reference to hyperspy.signal.BaseSignal.set_signal_type(), use: $ sphobjinv suggest doc/_build/html/objects.inv set_signal_type -st 90 Name Score --------------------------------------------------------- ------- :py:meth:`hyperspy.signal.BaseSignal.set_signal_type` 90
https://hyperspy.readthedocs.io/en/latest/dev_guide/writing_docs.html
2022-05-16T22:05:06
CC-MAIN-2022-21
1652662512249.16
[]
hyperspy.readthedocs.io
unique_key {{ config(unique_key="column_name") }} DescriptionDescription A column name or expression that is unique for the results of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly. caution Providing a non-unique key will result in unexpected snapshot results. dbt will not test the uniqueness of this key, consider adding a test to your project to ensure that this key is indeed unique. DefaultDefault This is a required parameter. No default is provided. ExamplesExamples Use an id column as a unique key {{config(unique_key="id")}} You can also write this in yaml. This might be a good idea if multiple snapshots share the same unique_key (though we prefer to apply this configuration in a config block, as above). Use a combination of two columns as a unique keyUse a combination of two columns as a unique key This configuration accepts a valid column expression. As such, you can concatenate two columns together as a unique key if required. It's a good idea to use a separator (e.g. '-') to ensure uniqueness. {% snapshot transaction_items_snapshot %}{{config(unique_key="transaction_id||'-'||line_item_id",...)}}selecttransaction_id||'-'||line_item_id as id,*from {{ source('erp', 'transactions') }}{% endsnapshot %} Though, it's probably a better idea to construct this column in your query and use that as the unique_key: {% snapshot transaction_items_snapshot %}{{config(unique_key="id",...)}}selecttransaction_id || '-' || line_item_id as id,*from {{ source('erp', 'transactions') }}{% endsnapshot %}
https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/reference/resource-configs/unique_key
2021-11-27T04:11:19
CC-MAIN-2021-49
1637964358078.2
[]
6167222043a0b700086c2b31--docs-getdbt-com.netlify.app
CreateNotification Creates a notification. You must create the budget before you create the associated notification. Request Syntax { "AccountId": " string", "BudgetName": " string", "Notification": { "ComparisonOperator": " string", "NotificationState": " string", "NotificationType": " string", "Threshold": number, "ThresholdType": " string" }, "Subscribers": [ { "Address": " string", "SubscriptionType": " string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - AccountId The accountIdthat is associated with the budget that you want to create a notification for. Type: String Length Constraints: Fixed length of 12. Pattern: \d{12} Required: Yes - BudgetName The name of the budget that you want Amazon to notify you about. Budget names must be unique within an account. Type: String Length Constraints: Minimum length of 1. Maximum length of 100. Pattern: [^:\\]+ Required: Yes - Notification The notification that you want to create. Type: Notification object Required: Yes - Subscribers A list of subscribers that you want to associate with the notification. Each notification can have one SNS subscriber and up to 10 email subscribers. Type: Array of Subscriber objects Array Members: Minimum number of 1 item. Maximum number of 11 items. CreateN.CreateNotification { :
https://docs.amazonaws.cn/en_us/aws-cost-management/latest/APIReference/API_budgets_CreateNotification.html
2021-11-27T02:46:51
CC-MAIN-2021-49
1637964358078.2
[]
docs.amazonaws.cn
Instance Registration Register your instance Navigate to the URL provided by Armory and follow the instructions to create an account. Make note of the following information: - Instance ID - Client ID - Secret This is the only time you are shown the secret value. Store it somewhere safe. Apply the registration information In your Operator manifest (such as spinnakerService.yml) or spinnaker-local file (Halyard), configure the following parameters: spec.spinnakerConfig.profiles.spinnaker.armory.cloud.iam.tokenIssueUrl: set this value to. spec.spinnakerConfig.profiles.spinnaker.armory.cloud.iam.clientID: set this value to Client ID that you received from Register your instance. spec.spinnakerConfig.profiles.spinnaker.”armory.cloud”.iam.clientSecret: set this value to the Secret that you received from Register your instance. Once you apply the registration information to your instance, your Armory Enterprise instance is registered with Armory. spec: spinnakerConfig: profiles: gate: manifest. This redeploys Armory Enterprise. #gate-local changes with the following command: hal deploy apply. Applying the config changes redeploys Armory Enterprise. Troubleshooting Registration fails This issue can occur if Operator failed to generate a UUID for your Armory Enterprise instance. You can resolve this issue with one of the following actions: - Deploy Armory Enterprise with a supported Operator version that corresponds to your Armory Enterprise version. - Manually apply a registration ID. To manually apply a registration ID, perform the following steps: In your Operator manifest ( spinnakerservice.ymlby default), check if there is a value for spec.spinnakerConfig.config.armory.diagnostics.uuid. If there is a value, use that value for the next step. If there is no value, generate a UUID and add it to spec.spinnakerConfig.config.armory.diagnostics.uuidfield. Add the UUID to the spec.spinnakerConfig.profiles.spinnaker.armory.registrationIdfield. Save the file and deploy Armory Enterprise. If you generated your own UUID, contact Armory Support and provide your UUID so that it can be associated with your organization. This helps with troubleshooting when you contact Support. Feedback Was this page helpful? Thank you for letting us know! Sorry to hear that. Please tell us how we can improve. Last modified October 19, 2021: (d818791)
https://docs.armory.io/docs/installation/ae-instance-reg/
2021-11-27T02:44:21
CC-MAIN-2021-49
1637964358078.2
[]
docs.armory.io
sec.validatePermissions( $permissions as element(sec.permission)[] ) as Sequence This function throws the SEC_NOPERMCAP exception if a permission has no capability specified and it throws the SEC-NOPERMROLEID exception if there is no role specified in the permission. This function must be executed against the security database. const sec = require('/MarkLogic/security.xqy'); sec.validatePermissions(xdmp.permission("Temporary", "read","element")) Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/sec.validatePermissions
2021-11-27T01:58:45
CC-MAIN-2021-49
1637964358078.2
[]
docs.marklogic.com
fail_calc Test queries are written to return a set of failing records, ones not matching the expectation or assertion declared by that test: duplicate records, null values, etc. Most often, this is the count of rows returned by the test query: the default value of fail_calc is count(*). But it can also be a custom calculation, whether an aggregate calculation or simply the name of a column to be selected from the test query. Most tests do not use the fail_calc config, preferring to return a count of failing rows. For the tests that do, the most common place to set the fail_calc config is right within a generic test block, alongside its query definition. All the same, fail_calc can be set in all the same places as other configs. For instance, you can configure a unique test to return sum(n_records) instead of count(*) as the failure calculation: that is, the number of rows in the model containing a duplicated column value, rather than the number of distinct column values that are duplicated. - Specific test - One-off test - Generic test block - Project level Configure a specific instance of a generic (schema) test: version: 2models:- name: my_modelcolumns:- name: my_columnstests:- unique:config:fail_calc: "sum(n_records)"
https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/reference/resource-configs/fail_calc
2021-11-27T02:45:21
CC-MAIN-2021-49
1637964358078.2
[]
6167222043a0b700086c2b31--docs-getdbt-com.netlify.app
ImageUploadDeniedExtensions This option specifies the image upload extensions which you don't wish to use in FCKeditor. Example: FCKConfig.ImageUploadDeniedExtensions = ".(bmp|gif)$" ; If you leave this option empty none extension will be denied. NOTE: if you leave ImageUploadAllowedExtensions empty and you will put something in ImageUploadDeniedExtensions e.g.bmp it will allow the upload of all the files exept the bmp. However it isn't a good way to secure your server from unwanted uploads. The best way is to put all of the prefered extensions in ImageUploadAllowedExtensions and put the unwanted ones in the ImageUploadDeniedExtensions. Only in this way you will secure your server from junk.
https://docs-old.ckeditor.com/FCKeditor_2.x/Developers_Guide/Configuration/Configuration_Options/ImageUploadDeniedExtensions
2021-11-27T01:58:00
CC-MAIN-2021-49
1637964358078.2
[]
docs-old.ckeditor.com
The Tree View displays list, library, discussion and wiki items, as well as site navigation, in a hierarchical structure allowing the user to navigate to the information much like the navigational structure of Windows ExplorerTM. This tree view structure allows users to quickly view the content structure of the list, library, discussion or wiki and visually determine the fastest way to locate an item.
https://docs.bamboosolutions.com/document/tree_view/
2021-11-27T02:46:11
CC-MAIN-2021-49
1637964358078.2
[array(['/wp-content/uploads/2017/06/TreeView.png', 'TreeView.png'], dtype=object) ]
docs.bamboosolutions.com
Principled Hair BSDF Principled Hair BSDF under Melanin concentration. Cycles Only The Principled Hair BSDF is a physically-based, easy-to-use shader for rendering hair and fur. Vihje Realistic hair should have a minimum of variance between each strand. The shader allows for this by specifying two values, Random Color and Random Roughness, which remap the specified Melanin/Roughness values to the range \(Color/Roughness \pm Randomization\%\). Inputs - Color The RGB color of the strand. Only used in Direct coloring. Vihje The chosen color is converted to an absorption coefficient with the following formula (section 4.2 of [CBTB16]):\[\sigma_{a} = \frac{\ln(Color)} {\left(5.969 - 0.215\beta_{N} + 2.532\beta_{N}^{2} - 10.73\beta_{N}^{3} + 5.574\beta_{N}^{4} + 0.245\beta_{N}^{5}\right)^{2}}\] where \(\beta_{N}\) is the radial roughness of the hair after applying randomization (if specified). Coloring hair using the Direct coloring parametrization. (The numbers on top are the RGB values.) - Melanin Absolute quantity of pigment. Range \([0, 1]\) equivalent to \([0\%, 100\%]\). Vihje This is a linear mapping to the underlying exponential function:\[melanin\_qty = -\ln(\max(1.0 - Melanin, 0.0001))\] - Melanin Redness Ratio of pheomelanin to eumelanin. Range \([0, 1]\) equivalent to \([0\%, 100\%]\). Vihje The ratio formula is: \(eumelanin = Melanin*(1.0-MelaninRedness)\), \(pheomelanin = Melanin*MelaninRedness\). The resulting quantities are converted (after randomization, if specified) to absorption concentration via the following formula (section 6.1 of [EFHLA11], adjusted for the range \([0, 1]\)):\[\begin{split}\sigma_{a} = eumelanin * \left[\begin{matrix} 0.506 \\ 0.841 \\ 1.653 \\ \end{matrix}\right] + pheomelanin * \left[\begin{matrix} 0.343 \\ 0.733 \\ 1.924 \\ \end{matrix}\right]\end{split}\] - Tint Color used for dyeing the hair after applying the melanin pigment. It is not subject to randomization. It can be disabled by setting the color to white. Vihje This is converted via the Color mapping above and added to the absorption coefficient of the melanin concentration. Tint, using Melanin 0.1 and the corresponding RGB values. - Absorption Coefficient Attenuation coefficient \(\sigma\). - Roughness Specify how much the glints are smoothed in the direction of the hair shaft. Too low values will smoothen the hair to the point of looking almost metallic, making glints look like Fireflies; while setting it too high will result in a Lambertian look. - Radial Roughness Specify how much the glints are smoothed in the direction of the hair tangent. Too low values will concentrate the glint; while setting it too high will spread the light across the width of the strand. - Coat Simulate a shiny coat of fur, by reducing the Roughness to the given factor only for the first light bounce (diffuse). Range \([0, 1]\) equivalent to a reduction of \([0\%, 100\%]\) of the original Roughness. - IOR Index of refraction (IOR) defining how much the ray changes direction. At 1.0 rays pass straight through like in a transparent material; higher values give more refraction. Default value is \(1.55\). - Offset Tilts the glint of the hair by increasing the angle of the scales of the hair’s cuticle with respect to the hair shaft. Human hair usually has low values. - Random Color For each strand, vary the melanin concentration by \(RandomFactor\). Range \([0, 1]\) equivalent to \([0\%, 100\%]\) of the initial melanin concentration. Vihje The melanin concentration is multiplied by \(randomFactor\), where \(randomFactor = 1.0 + 2.0*(Random - 0.5) * RandomColor\). - Random Roughness For each strand, vary both Roughness values by \(RandomFactor\). Range \([0, 1]\) equivalent to \([0\%, 100\%]\) of the initial roughness values. Vihje The applied formula is the same one as for Random Color. - Random Random number source. If no node is connected here, it is automatically instanced with the value obtained from. Properties - Color Parametrization The shader provides three different ways, or parametrizations, to color the hair strands. - Direct Coloring Choose the desired RGB color and the shader will approximate the necessary absorption coefficient (below). - Melanin Concentration This mode defines the color as the quantity and ratio of the pigments which are commonly found in hair and fur, eumelanin (prevalent in brown-black hair) and pheomelanin (red hair). The quantity is specified in the Melanin input, and the ratio between them in Melanin Redness. Increasing concentrations darken the hair (the following are with Melanin Redness \(1\)): White (Melanin \(0\)) Blonde (Melanin \(0.25\)) Reddish (Melanin \(0.5\)) Brown (Melanin \(0.75\)) Black (Melanin \(1\)) Additionally, the Tint inputs allows to dye the hair with the desired color. - Absorption Coefficient Specifies the attenuation coefficient \(\sigma_{a}\), as applied by the Beer-Lambert law. This mode is intended mainly for technical users who want to use coefficients from the literature without any sort of conversion. Outputs - BSDF Standard shader output. References This shader is an implementation of the paper by Chiang et al. [CBTB16], which was used in the Disney film, ”Zootopia”®. - CBTB16(1,2,3) Chiang, M. J. , Bitterli, B. , Tappan, C. and Burley, B. (2016), A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum, 35: 275-283. doi:10.1111/cgf.12830 - EFHLA11 d’Eon, E. , Francois, G. , Hill, M. , Letteri, J. and Aubry, J. (2011), An Energy‐Conserving Hair Reflectance Model. Computer Graphics Forum, 30: 1181-1187. doi:10.1111/j.1467-8659.2011.01976.x
https://docs.blender.org/manual/fi/dev/render/shader_nodes/shader/hair_principled.html
2021-11-27T02:13:54
CC-MAIN-2021-49
1637964358078.2
[array(['../../../_images/render_shader-nodes_shader_hair-principled_node-melaninconcentration.png', '../../../_images/render_shader-nodes_shader_hair-principled_node-melaninconcentration.png'], dtype=object) array(['../../../_images/render_shader-nodes_shader_hair-principled_demo-radial-roughness.jpg', '../../../_images/render_shader-nodes_shader_hair-principled_demo-radial-roughness.jpg'], dtype=object) ]
docs.blender.org
. Self Join A self join is a join operation between a dataset and a copy of itself. For example, you can use a self-join to invert the structure of hierarchical data, such as brand-product or manager-employee. Trifacta Wrangler. Figure: Join Types This page has no comments.
https://docs.trifacta.com/pages/viewpage.action?pageId=151978209
2021-11-27T02:31:59
CC-MAIN-2021-49
1637964358078.2
[]
docs.trifacta.com
HISTORICAL BACKGROUND OF ATARI COMPUTERS Atari has not generally been involved in gambling from the onset. From the late '70s to the mid-'90s, the corporation delivered a progression of fascinating and uncommon laptops and desktops, including one that had a featuring role in ‘Eliminator 2.' Atari ST Book (1990) Atari launched an ultrathin PC in its Atari ST line in December 1990 at Europe. In contrast to the large Stacy before it, the ST Book was incredibly slender and weighted lightly. Atari discarded the internal floppy drive to make the ST Book nothing but 1.4 inches thick and under 5 pounds. Notwithstanding, the ST Book's delicate plastic case and LCD screen were inclined to breaking easily. This might be the reason the company restricted its creation to an incredibly little 1000 units. Atari Mega STE (1990) The Mega STE was launched in the late 1990s. It depicted the final gasp of the bit Atari ST desktop line. It dispatched with the quickest 68000 CPU of the arrangement at 16MHz and contained 1MB to 4MB of Random Access Memory, a 40MB to 80MB inner hard drive, and an internal 3.5-inch floppy drive in a particular TT030-style case. Atari Stacy (1989) The first portable member of the Atari ST line is the 1989's Stacy. When the Macintosh Compact sold for $6500 to $7300, the $1995 Stacy appeared to be a deal. At that cost, you could purchase a Stacy with 2MB Random Access Memory, a 20MB hard plate, and an 8MHz 68000 CPU. Lamentably, the Stacy was not excessively lenient on the C cells that fueled it. New batteries would last just around 15 minutes in the machine, making it far less mobile than most customers might want. Atari Transputer Workstation (1988) Atari's most odd (and now rarest) production Machine was launched in 1988 in the form of the Atari Transputer Workstation (otherwise called the ATW-800). The workstation, which ran the Unix-like “Helios” operating system, joined various plain CPUs in a parallel arrangement that could then be connected with different Transputers to frame an extensive parallel system. The idea appeared to be encouraging from the outset; in any case, it bombed even with perpetually incredible (and a lot less expensive) independent CPU frameworks. Atari 520STE/1040STE (1989) In 1989, Atari updated its standard 520ST and 1040STf PCs, creating the 520STE and the 1040STE – the E meaning “enhanced.” The 520STE dispatched in the more significant 1040STf case with an attached floppy drive, however, it contained just 512KB of RAM. The 1040STE still used its previous case, including two new broadened joystick ports and interior RAM upgradability through SIMMs. The two machines incorporated the “blitter” illustrations acceleration chip presented in the Mega ST a couple of years prior. Atari TT030 (1990) Following quite a while of selling ST-compatible PCs with 8MHz Motorola 68000 CPUs, Atari launched another top of the line desktop producing workstation called the TT030. At the price of $2995, it contained a 32MHz 68030 CPU, 2MB RAM, a 50MB SCSI hard drive, and another operating system all enveloped with an upscale new portable case. Atari designers tried hard to keep the framework adaptable with ST software. Sadly for Atari, the ST line had since a long time ago been overshadowed in the share of the overall industry by the Mac and by modest PC clones. Hence, the TT030 stayed a niche item that was sold mainly in Europe. Atari Portfolio (1988) In a period of plentiful experimentation, Atari launched what was then the world's littlest MS-DOS-adaptable computer. The $399 Portfolio was about the size of a VHS tape, and it contained a nonbacklit monochrome LCD screen, 128KB RAM, and detachable storage on battery-sponsored memory cards. In spite of the fact that it was generally welcomed in the press, it didn't really stir things up – however, it helped hack an ATM in Eliminator 2. Atari 400/800 (1979) Atari's 8-piece PC line started as a cutting edge follow-up to the noteworthy Atari 2600 computer game console. After observing Mac's achievement in the early PC market, Atari officials requested their engineers to transform the new hardware into a PC system, which turned into the 800. Initially retailing for $1000, the Atari 800 dispatched with 8KB of RAM, which was able to be upgraded to 48KB. Its younger sibling, the Atari 400, delivered with less RAM and a level keyboard console for $550. Atari 65 XE/130 XE (1985) Following Warner Communications sales of the consumer shares of Atari to Jack Tramiel in the year 1984, Tramiel dropped many existing ventures and started working on another machine to best the up and coming Commodore Amiga. In the interim, he supplanted the 8-bit XL series with two fresh models: the 65XE, with 64KB of RAM, and the 130 XE, with 128KB. Both were perfect with existing Atari 8-bit software. Atari 600XL/800 XL (1983) In 1983, The Atari 600XL and 800XL were launched, and they fixed a portion of the 1200XL's issues. The 600XL (a trade for the Atari 400) contained 16KB RAM, and the 800XL had 64KB RAM. Both carried a 50-pin Parallel Bus Interface (PBI) on the back, opening the entryway to complex future overhauls. Atari Falcon030 (1992) The 32-bit Falcon030 was Atari's final gash at the PC marketplace before the organization shut down its computer department to fully concentrate on its up and coming Puma game console. The $1299 Falcon030 contained a 16MHz 68030 PC, an inner IDE hard drive, sophisticated visual modes, and – most amazingly – the capacity to both yield and digitize Disc quality sound. The Falcon030 did well with the same MIDI-sequencing crowd that had recently received the Atari ST platform, yet it was unable to draw in people that are deeply attached to Microsoft's software system. Because of its unique, spunky plan, the Falcon030 is the most desired among Atari diehards. EVOLUTION OF ATARI COMPUTERS It's an outstanding name in the world of computer games: Atari. Yet, few individuals today recollect that the video game legend once made PCs, as well. For a long time (1978 to 1993), Atari structured and created four clear lines of PCs: the 8-piece “Atari 800” line, the 16-bit ST line, the computer compatibles, and the 32-series. Despite the fact that Atari began off solid in the U.S. PC advertise, the organization before long wavered even with shake-ups in the game business (its main business) and stiff challenge from IBM PC compatibles. In Europe, Atari clutched a niche of committed fans for quite a long time until at last closing down its computer division in 1993. You'll see pretty much every generation PC model that Atari at any point launched over those 15 years in the following slide. Atari PC/PC2/PC3 (1987) By 1987, the IBM-PC-perfect market had ruptured into a plethora of unapproved computer CPUs made by many producers, each unified by MS-DOS and the x86 CPU engineering. Atari chose to take a stab at a PC clone in 1987, producing the Atari computer, an 8MHz 8088 machine with 512KB of RAM and a 360KB 5.25-inch floppy drive in a Mega ST-style case. Soon after that, the organization launched the PC2 (which brandished a more significant case and two floppy drives) and the PC3 (which utilized one more case and incorporated an internal hard disk). Atari XE Game Framework (1987) By 1987, everybody was somewhat envious of Nintendo's blockbuster accomplishment with the NES. The Japanese game organization had generally launched its console in the US just the prior year, yet had effectively sold over a million units while without any assistance reigniting the computer game market. Hungry for a bit of the new business, Atari plunged into its antique chronicles and hauled out a couple of stunts, including the 7800 and another console – the XE Game system – given its waning 8-bit PC line. Tormented with antiquated programming and yesterday's arcade ports, neither one of the strategies worked as trusted. Atari ABC 286-30 (1990)/ABC 386SXII/386DXII (1991) By 1990, Atari's PC-adaptable line started to all the more intently look like other PC clones. The frameworks utilized for the most part off-the-rack parts, and sold with undeniably progressively configurable choices. The ABC (Atari Business PC) 286-30 delivered with a scope of CPU and capacity decisions, offering an 8MHz to 20MHz 286 CPU and a 30MB to 60MB hard plate. It likewise incorporated Atari's first PC-perfect 3.5-inch floppy drive. The ABC 386 arrangement – Atari's last invasion into PC clones – included either a 20MHz or 40MHz CPU, 1MB or 2MB of Smash, and a 40MB or 80MB hard drive. The ABC 386 PCs were the main Atari PCs to send with Microsoft Windows – variant 3.0, indeed. Atari PC4 (1987)/PC5 (1988) The PC4 was released as an expansion of the previously mentioned PC2 and PC3 series in 1987, yet contrasted considerably in that it incorporated a 16MHz 80286 CPU and 1MB of RAM. The PC5 dispatched in 1988 with a 20MHz 80386 CPU and 2MB of RAM. Since Atari focused the sales of these PCs, mainly in Europe, they are hard to be seen in the US. Atari ABC N386SX (1991) The majority of the Atari, as mentioned earlier PCs in this article, is, as you may expect, designed and developed by Atari. The ABC N386SX is not the same at all with them. SOTEC, a Japanese PC producer, built this computer system. SOTEC sold a similar model itself, and Atari's only involvement was to put its name on the PCs it sold. The ABC N386SX, which launched in the mid-year of 1991, was the single PC-compatible laptop ever made by Atari. ABC N386SX contains a 20MHz 386 CPU, 1MB of RAM, an inner 3.5-inch floppy drive, and a 20MB hard drive. Maybe a couple got it, and even less recall it. Atari Mega ST 1/Mega ST 2/Mega ST 4 (1987) As the ST picked up footing in the music world, it additionally wound up prevalent as a cheap desktop producing machine, especially in Europe. To ensure it made full provisions for the market, Atari launched the modular Mega ST line. The PCs included either 4 MB, 2MB, or 1MB of RAM; a separable keyboard; and another “blither” chip to speed up on-screen designs. The organization additionally established a low-cost laser printer around a similar time to finish the collection. Atari 1040STf (1986) In contrast to the 520ST, which depended on an outer floppy drive and power supply, the 1040STf incorporated those two components into a single case. It likewise increased the RAM to 1MB, selling for $999 in 1986 as a total system with a base unit, a monochrome screen, and a mouse. The 1040STf was Atari's most well-known PC in the US during the 16-bit period. Atari 1200 XL (1982) Atari updated its 8-bit PC line in 1982 with the 1200XL, which included 64KB of RAM and a rearranged internal design enveloped by smooth new case design. Despite the fact that the 1200XL integrated what might be the best-feeling keyboard of the arrangement, it likewise ended up known as an exceptionally faulty machine. Another operating system presented baffling incompatibilities with existing Atari 8-bit programming, and the closed case permitted no genuine extension. Transactions of the first 800 went up after the arrival of the 1200XL; sending Atari back to this plan's beginning point. Atari 520ST (1985) The release of Apple in 1985 started a race to build another generation of 16-bit, GUI-based computer. Atari developed its similar PC, the 520ST, and released it in March 1985. It was an incredibly modest $799 package that incorporated a 360KB floppy drive, a mouse, and a monochrome VDU. (For an extra $200, you could get a color VDU.) The 520ST crammed 512KB of RAM and included a multicolored visual windowing system called GEM. When the Mac platform immediately commanded the universe of visual depiction, the Atari ST discovered its most grounded specialty in music creation because of Atari's incorporation of two implicit MIDI 2014 THE GAME AT ATARI CASINO For the genuine Atari fans, you will realize that Atari Casino isn't the first Atari casino… as the organization of the game drew out a computer game called “Casino” route in 1978. This was an extremely fundamental top-down casino game initially launched on the Atari 2600 games reassure and constrained by up to four Atari games paddles. There were two distinctive blackjack games, five-card stud poker, and a poker solitaire game.
http://dev-docs.atariforge.org/
2020-11-23T22:20:08
CC-MAIN-2020-50
1606141168074.3
[]
dev-docs.atariforge.org
Tests whether a string of text is like a given pattern. like( stringA, stringB ) stringA: (text string) Text to match stringB: (text string) List the characters to match. Boolean stringB accepts the following wildcard characters: Wildcard Examples b?t matches bat ; and bet; and bit. You can experiment with this function in the test box below. Test Input Test Output Test Output like("brian","*ian") returns true On This Page
https://docs.appian.com/suite/help/20.3/fnc_text_like.html
2020-11-23T22:35:35
CC-MAIN-2020-50
1606141168074.3
[]
docs.appian.com
Hi Guys, I'm looking for a developer who can assist with setting up functionality in Forma LMS. you can contact me via email at [email protected] Thanks Vikesh Looking for a Feelance Developer to assist on Forma LMS Install and configure FormaLMS and DoceboCE Post Reply 1 post • Page 1 of 1 Post Reply 1 post • Page 1 of 1
https://docs.formalms.org/forums/2/12842.html
2020-11-23T21:30:08
CC-MAIN-2020-50
1606141168074.3
[]
docs.formalms.org
The Wiki has moved to new hardware, and the old server died. Hi, my name is David. If you're snooping around here I guess you may be looking for a way to contact me. Well your best bet might be to look for me on Linux Questions in the Slackware ARM subforum. You can leave me a PM there (same nic as here). There are times when I'm too busy to have a look at the LQ forum for quite some time, if that's the case you can try contacting, via email, the same nic @ yahoo.it (be smarter then a robot).
https://docs.slackware.com/wiki:user:louigi600?rev=1472652142
2020-11-23T21:50:41
CC-MAIN-2020-50
1606141168074.3
[]
docs.slackware.com
Contents Strategic Partner Links Sepasoft - MES Modules Cirrus Link - MQTT Modules Resources Knowledge Base Articles Inductive University Forum IA Support SDK Documentation SDK Examples All Manual Versions Ignition 8 Ignition 7.9 Ignition 7.8 The Gateway Network General Settings set the basic rules for the system. By default, these settings are lenient to allow for easy setup but can be set for security. Enabled: uncheck this checkbox to disable using the Gateway Network on this Ignition. Web Sockets Enabled: If true, the system will attempt to use websockets in order to increase performance. Require SSL: If true, only connections that use SSL to encrypt traffic will be allowed. Default is true as of 7.9.4. Require Two Way Auth: Controls whether both sides must explicitly trust each other before successful handshake. If you check this setting, you will need to provide the remote machine's certificate. To do this, manually export a certificate from the remote machine's metro keystore, located in <installdir>/webserver/metro-keystore. Default keystore password is metro, and the alias is metro-key. Then place the certificate on the local machine, in data/certificates/gateway_network If there is too much traffic on the Gateway Network, then it can tell Agents to stop sending messages for a specified amount of time. Receive Queue Max: Number of received messages that can be held until they are processed by the local system. When this capacity is exceeded, the local system will start sending WAIT messages to remote machines. Overload Wait Seconds: The number of seconds that a remote machine is instructed to wait before resuming message delivery. This setting is used when an overload is detected on the local machine. By default, the security level for incoming connections is set to “Unrestricted”, meaning that every remote machine that attempts to connect to the local machine will be accepted without question. You have several options to control security from the Gateway Network settings. Note that these settings are independent of SSL mode, which is detailed below. To change security settings, navigate to Configure -> Networking -> Gateway Network and select the General Settings tab. Allow Incoming Connections: uncheck this checkbox to disable all remote machines from being able to establish an incoming connection. To establish any connections with remote machines, you will need to create outgoing connections from this machine. Connection Policy: Unrestricted: default mode, allows all incoming connection unless the Allow Incoming Connections checkbox is unchecked ApprovedOnly: incoming connections are created, but cannot be used to send or receive data until you approve the connection under Gateway Network -> Incoming Connections tab. To approve an incoming connection, click the Approve link on the right side of the connection. You can also deny a previously approved connection by clicking the deny link. Note that the approve and deny links will appear next to a connection only if you have enabled the ApproveOnly setting. Specified List: Connections with an ID in this list are automatically allowed if the security mode is set to SpecifiedList. Separate Gateway names with a comma. Every machine on the Gateway Network is known as a Server. When you establish a connection to a remote machine, the remote Server sends data about itself and also sends data about any other Servers known to that machine. For example, assume your local machine is Gateway1. The remote machine is known as Gateway2. Gateway2 also knows about another remote machine named Gateway3. As soon as your local Gateway1 establishes a connection with Gateway2, Gateway2 also sends information about the existence of Gateway3. Modules such as the EAM are aware of this relationship and allow communication between Gateway1 and Gateway3, even though there is no direct connection from the local machine to Gateway3. Which Server Should I Configure the Outgoing Connection On? In regard to connecting multiple Gateways over the Gateway Network, there is little difference between an Outgoing and Incoming connection: these terms simply note which server the connection was configured on, and are mostly ignored by the rest of Ignition. Thus, assuming Gateway A and Gateway B, configuring an outgoing connection from A to B is equivalent to configuring an outgoing connection from B to A. To establish a basic comm link between two Gateways, log into the Gateway where you want to establish the outgoing connection. Navigate to Configure -> Networking -> Gateway Network and click on the Outgoing Connections tab. Click the Create new link. On the New Outgoing Connection page, you will need to enter the network address and port of the remote machine. Click the Create New Connection button at the bottom of the page. On the Outgoing Connections tab, your new connection will be shown. After a few moments, the Gateway Name field will be updated with the name of the remote server and the status will be set to “Running”. To view the incoming connection, log into the other Gateway and navigate to Configure -> Networking -> Gateway Network and click on the Incoming Connections tab. Your new connection will be shown with “Connected” status. Outgoing and incoming connections can be deleted for cases when the connection no longer exists on the other side. To delete a connection, navigate to Configure -> Networking -> Gateway Network and click on either the Outgoing Connections tab or the Incoming Connection tab. Then click the delete link next to the connection. Note that for incoming connections, if a remote machine is still connected to the local machine with an outgoing connection, a new incoming connection will be created after deletion. For these cases, you must log into the remote Gateway and delete the outgoing connection. Then you can delete the local incoming connection. When a remote machine establishes an incoming connection, its Gateway server name is transmitted and appears in the Server Name field under Gateway Network -> Incoming Connections. However, there no identity authentication is performed when the connection is created. The local system accepts the remote system id without question. To perform identity authentication on a connection, you must use Secure Socket Layer (SSL) and certificates. By default, SSL enabled. Next, log into a Gateway where you wish to establish an outgoing connection. We will call this Gateway2. Navigate to Configure -> Networking -> Gateway Network and click on the Outgoing Connections tab and click the “Create new” link. As you are entering the settings for the new connection, check the Use SSL checkbox. Also, change the Port to the SSL port used by the other machine. By default this is set to 8060 (which is defined /data/gateway.xml). Save the new connection. Note that this port is different from the default SSL port an Ignition Gateway would use when communicating to a client (default port 8043). The certificate Common Name field holds the network address of the machine that transmitted the certificate. The Serial field holds a numeric string that is automatically generated when the certificate is created, and is unique to every certificate. Click the approve link to approve the certificate. After certificate approval, the incoming connection will be accepted, and will eventually appear on the Incoming Connections tab. It make take up to 30 seconds for the connection to appear, as reconnect attempts are made every 30 seconds by default (controllable by the Missed Pings outgoing connection setting). After a certificate has been approved, and the incoming connection is established, you can view the certificate information for a specific incoming connection under the Certificate field in the Incoming Connections tab. The certificate common name and serial number are displayed here, and reflect the actual certificate that was sent by the remote machine. You can deny a certificate under the Certificates tab by clicking the deny link to the right of the certificate. The connection that has been using that certificate will not longer be allowed to connect. You can delete certificates that are no longer in use. Keep in mind that if you delete a certificate, and a remote machine is still using that certificate, it will reappear on the Certificates page. In this case, you must navigate to the remote Gateway and delete its outgoing connection. Then you can permanently delete the certificate from the Certificates page. The web socket protocol rides on top of the TCP protocol, and allows 2-way communications between machines. Either side can send a message to the other side at any time. If web sockets cannot be used on your network, or your network setup does not require a high-speed low-latency connection, the Gateway Network connections can use a pure HTTP long polling system for communications. This mode uses standard HTTP GET and POST messages for communication, and is appropriate for situations when an always-on connection is not practical. An example would be a Gateway that communicates once every 15 minutes over a cell connection. Message retrieval is not as efficient as web sockets, as regular polling of the remote machine is required to learn of new messages. To enable HTTP long polling, you must either disable web sockets under either Gateway Network General Settings (to disable for all connections) or disable web sockets on an outgoing connection. The Gateway Network opens up certain services for use that make managing multiple gateways and having them effectively communicate with each other a snap. It also has special security that can restrict certain services from happening in certain zones of the Gateway Network. The Enterprise Administration Module (EAM) uses the Gateway Network for message and file transfer, and can monitor network connections for availability. The EAM reports whenever a comm link is lost via alarm events and system tags. Remote Realtime and Historical Tag Providers make remotely controlling and storing tag data even easier. Security Zones can be setup to lock down or prevent access to certain parts of gateways within the gateway network.
https://docs.inductiveautomation.com/display/DOC79/Gateway+Network
2020-11-23T21:52:01
CC-MAIN-2020-50
1606141168074.3
[]
docs.inductiveautomation.com
Difference between revisions of "Schedule" Revision as of 07:38, 9 September 2015 This is going to be a page that will list events going on in the labs this semester. Contents Weekly Events The Meetings COSI Meetings are held every Wednesday at 7:30pm (7:00pm for the forum). Come bring project updates, lightning talks, and see what people in COSI have been up to. Vim Workshop Benjamin Lannon is hosting a workshop every Sunday at 4:00pm in the ITL on the Vim text editor. The first workshop is going to cover the basics of Vim, while future weeks will cover more advanced topics. Other Events Hike By Reference 2015 Members of COSI are planning on hiking up Ampersand Mountain sometime soon. The date has not been finalized, but it will take place on a weekend. Previous Events Previous happenings in the labs are listed on List of Previous Events (2015-2016). If you know of a previous event that happened that isn't on the list, please feel free to add it.
http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Schedule&diff=prev&oldid=7119&printable=yes
2020-11-23T22:12:43
CC-MAIN-2020-50
1606141168074.3
[]
docs.cslabs.clarkson.edu
# What is LocalStack LocalStack is one of our favorite tools for simulating the AWS environment. It is a mock environment that mimics many of the most popular AWS services. You are able to stand up many different services, such as S3 or Dynamo, and have complete access to the files or NoSQL data exactly is it would appear on AWS. Why does this matter? Because running things locally is great for many reasons. It is much faster and it saves CPU costs on AWS for starters. We feel it also allows you to get a better feel for how your system is actually operating. You can also test outages of certain services much easier than trying to do it directly on AWS. # Commandeer Key Features - Get LocalStack running instantly. Simply download Commandeer, open up the LocalStack dashboard, and hit the start button, and it is running. - Start/Stop all or individual LocalStack services. - See what LocalStack ports are running (DynamoDB, IAM, Lambda, S3, SNS, SQS, and more). - View your local S3 data. - View your local DynamoDB data. - View your local Lambdas. - See and manage all your local AWS infrastructure for supported AWS services like (SNS, SQS, CloudWatch Logs, CloudWatch Rules, CloudFormation, IAM, ApiGateway, and more). - Update the URLs for your services. - Manage the LocalStack resources on a remote LocalStack server. - Customize LocalStack using the environment variables. # Helpful Links # Screenshots A user scanning their LocalStack environment to see what is running. Running Commandeer in local mode, allowing you to view your local S3 and Dynamo DB data, as well as your local lambdas.
https://docs.getcommandeer.com/docs/LocalStack
2020-11-23T22:39:11
CC-MAIN-2020-50
1606141168074.3
[array(['https://images.commandeer.be/_uploads/Localstack_Commandeer_AWS_illustration_dark.png', None], dtype=object) array(['https://images.commandeer.be/_uploads/localstack-scanning.gif', None], dtype=object) array(['https://images.commandeer.be/_tutorials/s3/file-created.png', None], dtype=object) ]
docs.getcommandeer.com
Office 365 operated by 21Vianet Office. Scan this QR code to follow us on WeChat and get the latest updates for Office 365 operated by 21Vianet. About services in Office 365 operated by 21Vianet. Office 365 Suite *Optional services provided directly by Microsoft, and subject to Microsoft's Terms of Service and privacy statements. SharePoint Online *Optional services provided directly by Microsoft, and subject to Microsoft's Terms of Service and privacy statements. Outlook Web App Exchange New with Exchange 2013 Cumulative Update 5 (CU5), full-featured hybrid deployments between on-premises Exchange 2013 organizations and Office 365 services are now supported. Leveraging new improvements in the Hybrid Configuration wizard, Exchange 2013 CU5 supports the following hybrid features between your on-premises and Exchange Online organizations: -). - MailTips, HD photo support for Outlook contacts, and multi-mailbox search between on-premises and Exchange Online organizations. - Cloud-based message archiving for on-premises Exchange mailboxes. For organizations running older or mixed versions of Exchange Server, some hybrid features aren't fully supported for Office 365 tenants hosted by 21Vianet. Use the following table to learn more about hybrid feature support in different Exchange deployment scenarios: Important Delegate calendar access, when a user or set of users is provided access to another user's calendar, isn't supported in hybrid deployments with Office 365 tenants hosted by 21Vianet. Additionally, some Exchange messaging policy and compliance features aren't fully supported in hybrid deployments with Office 365 tenants hosted by 21Vianet. These features include: - Messaging Records Management (MRM) - In-Place eDiscovery - In-Place Hold - In-Place Archiving - Mailbox auditing - Accessing online archives with Outlook Web App (OWA) Use the following table to learn more about feature support in different Exchange deployment scenarios: 1 Separate searches are required for on-premises and Exchange Online mailboxes. 2 MRM move-to-archive policies can be used for mailboxes located on an Exchange 2013 CU5 or greater server. To learn more about configuring a hybrid deployment with Office 365 tenants hosted by 21Vianet, see the following topics: - Hybrid Deployment Prerequisites - Certificate Requirements for Hybrid Deployments - Create a Hybrid Deployment with the Hybrid Configuration Wizard Important The Exchange Server Deployment Assistant is a free web-based tool that helps you configure a hybrid deployment between your on-premises organization and Office 365, or to migrate completely to Office 365. The tool asks you a small set of simple questions and then, based on your answers, creates a customized checklist with instructions to configure your hybrid deployment. We strongly recommend using the Deployment Assistant to configure a hybrid deployment. > For organizations not wishing to upgrade to or add Exchange 2013 CU5 servers, Exchange 2013 SP1 organizations can configure shared calendar free/busy sharing between their on-premises and Exchange Online organizations. To configure this hybrid deployment feature, see Configuring Exchange hybrid deployment features with Office 365 operated by 21Vianet. Office Office client OneNote Skype for Business Data Subject Requests for GDPR GDPR grants individuals (or, data subjects) certain rights in connection with the processing of their personal data, including the right to correct inaccurate data, erase data or restrict its processing, receive their data and fulfill a request to transmit their data to another controller. The Tenant Administrator role for Office 365 operated by 21Vianet can request data on behalf of a data subject in the following ways: Using the Azure Active Directory Admin Center, a Tenant Administrator can permanently delete a data subject from Azure Active Directory and related services. System generated logs for Microsoft services operated by 21Vianet can be exported by Tenant Administrators using the Data Log Export. For details and instructions, see Data Subject Requests (DSR) for GDPR.
https://docs.microsoft.com/en-us/microsoft-365/admin/services-in-china/services-in-china?view=o365-21vianet&preserve-view=true
2020-11-23T22:21:55
CC-MAIN-2020-50
1606141168074.3
[array(['../../media/9bbbdf3b-b3ab-4355-82a0-37a84d70735b.png?view=o365-21vianet', 'Scan this QR code to follow us on WeChat'], dtype=object) ]
docs.microsoft.com
- Mechanical - Electronics - CNC Software - Tips and Tricks OpenBuilds CONTROL is a free machine interface application for Grbl / BlackBox based machines. Its primary focus is on ease of use. It allows you to Runs on Windows, Mac and Linux After installation, start OpenBuilds CONTROL. If you have trouble connecting, refer to USB Troubleshooting Click Open G-Code > Open GCODE from File and select your GCODE. Note your GCODE has to be Grbl-compatible, for more information refer to CAM Applications You can use either of these options to tell your machine where the stock is located Use the Jog buttons, or use the Keyboard shortcuts, to move the endmill toward the origin of the job. The origin is specified in your CAM application, for example, the front-left corner of the stock. For quicker setup, and more reliable positioning, get yourself an OpenBuilds XYZ Probe Plus For more information on the XYZ Probe Plus and how to use it, refer to OpenBuilds XYZ Probe Plus Documentation For more information on how the speed-dial relates to RPMs, refer to Dewalt DWP611 / D26200 Speed Dial to RPM conversion chart If you'd like to automate turning on the router, refer to Connecting BlackBox 4X to Dewalt DWP611 Router via IOT Switching Relay Power Strip If you have a Dust Shoe installed on your machine, turn on the vacuum system before starting the cut If you'd like to automate turning on the vacuum system, refer to Connecting Dust Extraction via IoT Relay
https://docs.openbuilds.com/doku.php?id=docs:software:openbuilds-control
2020-11-23T22:02:45
CC-MAIN-2020-50
1606141168074.3
[]
docs.openbuilds.com
Contents Strategic Partner Links Sepasoft - MES Modules Cirrus Link - MQTT Modules Resources Knowledge Base Articles Inductive University Forum IA Support SDK Documentation SDK Examples All Manual Versions Ignition 8 Ignition 7.9 Ignition 7.8 Formats the contents of one or more datasets as an excel spreadsheet, returning the results as a string. Each dataset specified will be added as a worksheet in the Excel workbook. This function uses an xml-format for Excel spreadsheets, not the native Excel file format. This scripting function has no Client Permission restrictions. system.dataset.dataSetToExcel(showHeaders, datasets[, nullsEmpty]) Boolean showHeaders - If True (1), the spreadsheet will include a header row. Object[] datasets - A sequence of datasets, one for each sheet in the resulting workbook. Boolean nullsEmpty - If True (1), the spreadsheet will leave cells with NULL values empty, instead of allowing Excel to provide a default value like 0. Defaults to False. [Optional] String - An Excel-compatible XML-based workbook, with one worksheet per dataset. All # This snippet would run a SQL query against a database, and turn the results into a string that is XML that Excel can open. It then writes the string to a file on the local hard drive. results = system.db.runQuery("SELECT * FROM example1 LIMIT 100") results = system.dataset.toDataSet(results) spreadsheet = system.dataset.dataSetToExcel(1, [results]) filePath = "C:\\output\\results.xls" system.file.writeFile(filePath, spreadsheet)
https://docs.inductiveautomation.com/display/DOC79/system.dataset.dataSetToExcel
2020-11-23T22:08:01
CC-MAIN-2020-50
1606141168074.3
[]
docs.inductiveautomation.com
If you have installed a previous version of OnCommand Workflow Automation (WFA), you can upgrade to the latest version of WFA to use the new features and enhancements. If you are currently running WFA 4.1 or earlier versions of WFA, then you must first upgrade to either WFA 5.0 or 4.2 and then upgrade to WFA 5.1.. You can review the restored content for completeness in functionality—for example, you can check the functioning of your custom workflows.
https://docs.netapp.com/wfa-51/topic/com.netapp.doc.onc-wfa-isg/GUID-7E72815B-3127-4C3A-ADC9-5AC96E279F45.html?lang=en
2020-11-23T22:54:30
CC-MAIN-2020-50
1606141168074.3
[]
docs.netapp.com
Event Bus Notifier Since Payara Server 4.1.1.171 The Event Bus Notifier provides a way to send notifications from the Notification service into the internal Payara event bus based on Hazelcast distributed topics. Configuration Event Bus Notifier Configuration This notifier provides only one configuration option - Topic Name - which is mandatory. In the Admin Console, simply set Topic Name to an arbitrary name. This Hazelcast topic will receive the notifications. Make sure that the "Enabled" box is ticked so that the notifier will be used. If you would like the changes to take effect without needing a restart, tick the "Dynamic" box as well. To make these changes via the asadmin tool, use the following command, which mirrors the above screenshot: asadmin> notification-eventbus-configure --topicName=my-topic --dynamic=true --enabled=true To check the current applied configuration from asadmin, run the command: asadmin> get-eventbus-notifier-configuration This will return the current configuration, with whether it is currently enabled and the name of the destination topic: $ asadmin get-eventbus-notifier-configuration Enabled Topic Name true my-topic
https://docs.payara.fish/community/docs/5.201/documentation/payara-server/notification-service/notifiers/event-bus-notifier.html
2020-11-23T22:08:59
CC-MAIN-2020-50
1606141168074.3
[array(['../../../../_images/notification-service/event-bus/event-bus-notif-config.png', 'Admin console config'], dtype=object) ]
docs.payara.fish
About the App/Web Recorder Recording can help you save a lot of time when automating your tasks. The App/Web Recorder captures your actions as you perform them on the screen and generates a Use Application/Browser activity with a series of activities inside it based on your actions. All user interface elements are highlighted while you record so that you can be sure the correct buttons, fields, or menus are selected. The recorder captures a default action for each type of UI element and also allows you to manually choose the desired action before performing it. The recorder displays a preview of the activities that will be added to your project in real time. The preview includes informative screenshots that are also added to the activities in the automation project. All screenshots are saved as .png files in a folder named .screenshots inside the project folder. Each recorded action generates an activity. The following table lists the actions you can record and the activities that are generated for them. Some actions are automatically recorded, which means the corresponding activity is automatically generated, while others must be manually selected before being recorded. For more information, see Automatically Generated Activities and Manually Generated Activities. Recording Your Actions - Open the application or web browser page you want to automate. - Click App/Web Recorder in the Studio ribbon or press Ctrl+Alt+R on your keyboard. Depending on your configuration, this option could also be located under Recording. The App/Web Recorder window is displayed. - (Optional) By default, after you indicate the UI element to interact with, the target selection is made automatically. If you want to be able to add or remove anchors, remove and add the target again after indicating it, and confirm the selection before recording steps, select Confirm every step. Even if you do not select this option, you can quickly enable it for individual steps in the recording. To do this, press and hold the Shift key while you click a UI element to interact with, or while you select an option from the Choose action hover menu. - Click Record to begin. - Move the mouse to the window of the application you opened in step 1. When the window is highlighted in blue, click anywhere inside it to indicate it as the application you want to automate. - Individual elements in the window are highlighted as you move the mouse over them. - To record the automatically detected action for the UI element you want to interact with, click the target element. - To manually indicate the action you want to record, hover the mouse over the target element, and then select an option from the Choose action hover menu that appears next to the element. You can change the layout of the menu between icons-only and list view, or pin it so that it is always visible during the recording using the buttons in the upper-right side of the menu. After a manually added action is recorded, the Recorder returns to automatic selection mode. - Some actions require your input before an activity is generated, so make sure to provide the necessary information when prompted, and then press Enter on your keyboard or click Confirm. For example, if you selected a text box, a Type Into action is detected and you are prompted to enter the text to type. Note: To ensure the action is performed on the intended element, do not move the mouse over another UI element until the action is performed for the current target element. - The action is performed and the activity is generated. To continue, select the next element to interact with in a similar way, and repeat this until all the actions are recorded. - When you are done recording your actions, click Save and return to Studio to close the Recorder and add the activities to your project. If a container activity was selected in the Designer panel when you started the Recorder, the generated activities are added inside it. If an activity that is not a container was selected, the generated activities are added after the selected activity. While recording, you can: - Add a 5 seconds delay in element selection by pressing F2 on your keyboard. You can do this if, for example, an element you need to interact with is not in the visible area of the screen and you need time to scroll down to it. - Switch to another application by clicking Change application . This enables you to record actions in multiple applications or web pages. The actions performed in each application are added as activities inside a separate Use Application/Browser activity. - Pause recording at any time by clicking Pause . To resume recording, click Record again. - Manage the activities added so far using the preview under Recorded Steps: - Remove an activity by clicking in the upper-right corner of the activity. - Configure the input or output of some activities: - Edit the text to type for Type Into. - Update the selection for Select Item. - Change the action for Check/Uncheck. - Remove recorded sequences for Keyboard Shortcuts. - Select where to save the copied text for Get Text. Note: When recording in the StudioX profile, the Indicate in Excel option is not available in the Plus menu. To indicate Excel data, select named cells, use the Custom Input option, or leave the field blank and use Indicate in Excel after completing the recording and returning to the StudioX designer.menu. To indicate Excel data, select named cells, use the Custom Input option, or leave the field blank and use Indicate in Excel after completing the recording and returning to the StudioX designer. Automatically Generated Activities The following activities are automatically generated when the corresponding action is detected. If needed, you can update the properties of the activities after completing the recording. - Click - Generated when you select a button or a link while recording. The click type and mouse button are also recorded. - Type Into - Generated when you select a text box or other editable UI elements while recording. After you select the text field, you are prompted to enter the text to complete the action. You can select Plain text to enter the text directly in the dialog box, or Expression to select Excel data or saved for later values from the Plus menu (in StudioX) or an expression (in Studio). By default, the Empty field option is selected to remove any existing text before typing. - Select Item - Generated when you select a drop-down list or combo box while recording. After you select the drop-down, you are prompted to select an item from the list to complete the action. - Check/Uncheck - Generated when you select a radio button or a check box. The default action to perform on the check box is Toggle. - Keyboard Shortcuts - Generated when you press a combination of keyboard shortcuts while recording. Manually Generated Activities In some situations, you must manually choose the action to perform next before recording the action and generating the activity. - The following actions cannot be recorded automatically: - Copy text from an application or web browser using the Get Text activity. After choosing the action and selecting the target element, indicate where to save the copied text from the Recorded Steps activity preview. If the Confirm every step option is selected, you are prompted to indicate where to save the text after you confirm the target. - Hover over a UI element using the Hover activity. - Highlight a UI element using the Highlight activity. - The action that is recorded automatically may not be the desired action. For example, you clicked the text area in Notepad with the intention to type into it, but the action recorded by default was Click. In this case, you can delete the action added by default and manually add the intended activity. You can manually select which action to perform from the Choose action hover menu that appears next to UI elements during recording. Updated 3 months ago
https://docs.uipath.com/activities/docs/app-web-recorder-preview
2020-11-23T22:29:02
CC-MAIN-2020-50
1606141168074.3
[array(['https://files.readme.io/32710b6-appweb_recorder.png', 'appweb_recorder.png'], dtype=object) array(['https://files.readme.io/32710b6-appweb_recorder.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9ba2ea1-recorder.png', 'recorder.png'], dtype=object) array(['https://files.readme.io/9ba2ea1-recorder.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/a29aaaf-recorder_hover_menu.png', 'recorder_hover_menu.png'], dtype=object) array(['https://files.readme.io/a29aaaf-recorder_hover_menu.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/7638fdf-app-web-rec.gif', 'app-web-rec.gif'], dtype=object) array(['https://files.readme.io/7638fdf-app-web-rec.gif', 'Click to close...'], dtype=object) array(['https://files.readme.io/ffcf570-recorder_steps.png', 'recorder_steps.png'], dtype=object) array(['https://files.readme.io/ffcf570-recorder_steps.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/bba008a-recorder_click.png', 'recorder_click.png'], dtype=object) array(['https://files.readme.io/bba008a-recorder_click.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/791bf2e-recorder_type.png', 'recorder_type.png'], dtype=object) array(['https://files.readme.io/791bf2e-recorder_type.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0e8c418-recorder_select_item.png', 'recorder_select_item.png'], dtype=object) array(['https://files.readme.io/0e8c418-recorder_select_item.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/7cbe526-recorder_check.png', 'recorder_check.png'], dtype=object) array(['https://files.readme.io/7cbe526-recorder_check.png', 'Click to close...'], dtype=object) array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/2020.4_StudioX/recorder_add.png', 'recorder_choose_action'], dtype=object) ]
docs.uipath.com
You can create a virtual machine in vSphere to use Virtualization-based security (VBS). Using a virtual machine enabled with VBS provides better protection from vulnerabilities within and malicious exploits to the operating system. Prerequisites - Microsoft Windows 10 (64-bit) or Windows Server 2016 (64-bit) operating system. - Familiarize yourself with the custom configuration parameters for virtual machines. See Virtual Machine Custom Configuration Parameters. Note: When you enable a virtual machine to use VBS, you can only deploy automated desktop pools that contain full virtual machines or instant clones. VBS is not supported for vGPU enabled virtual machines. URL redirection and scanner redirection might not work properly with VBS enabled. Procedure - Log in to vSphere Client. - Right-click any inventory object that is a valid parent object of a virtual machine, such as a data center, folder, cluster, resource pool, or host, and select New Virtual Machine. - Select Create a new virtual machine and click Next. - Follow the prompts to specify the virtual machine custom options. - On the Select a guest OS page, select Windows as the guest OS and select Microsoft Windows 10 (64-bit) as the guest OS version. Then, select Enable Windows Virtualization Based Security. - To deploy automated desktop pools that contain full virtual machines or instant clones, on the Customize hardware page, verify that you do not add any Trusted Platform Module (vTPM) device. Connection Server adds a vTPM device to each virtual machine during the desktop pool creation process. - Follow the prompts to complete the virtual machine setup and click Finish to create the virtual machine. What to do next - Install the Windows 10 (64-bit) or Windows Server 2016 (64-bit) operating system on the virtual machine. - On Windows 10 1803 builds, enable the VBS group policy. For more information, consult the article "Enable virtualization-based protection of code integrity" in the Microsoft documentation. Then reboot the virtual machine. - Windows 10 versions earlier than 1803 and Windows Server 2016 require Hyper-V features to be enabled for VBS. To enable Hyper-V features, navigate to Windows Features and enable Hyper-V adds a virtual switch to the virtual machine so that the virtual machine can acquire additional IP from a different IP range. When Horizon Agent is installed on a virtual machine that has more than one NIC, you must configure the subnet that Horizon Agent uses. The subnet determines which network address Horizon Agent provides to the Connection Server instance for client protocol connections. See Configure a Virtual Machine with Multiple NICs for Horizon Agent. Then enable the VBS group policy. - On Windows Server 2016, enable the VBS group policy, install the Hyper-V role and reboot the virtual machine.
https://docs.vmware.com/en/VMware-Horizon-7/7.9/horizon-virtual-desktops/GUID-A8477B63-F0AB-4568-9791-E8415F1049EF.html
2020-11-23T23:07:40
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
The very top of Swank, just above the header/logo area, displays 2 separate widgets. Top Bar Left, and Top Bar Right. Top Bar Left The demo site has the Simple Social Icons Widget in Top Bar Left - Navigate to Appearance > Widgets - Drag and drop the Simple Social Icons Widget into the Top Bar Left Widget Area - Icon Font Color: #bbbbbb - Icon Font Hover Color: #f69c9b Top Bar Right Drag and drop a text widget into the Top Bar Right Widget Area. The demo uses html to display text and a link. You’ll find the code to do this yourself, under the HTML codes in your tutorials area.
https://docs.bizbudding.com/classic-docs/pretty-darn-cute/swank-theme-top-bar-left-and-right-widget-areas/
2020-11-23T22:27:32
CC-MAIN-2020-50
1606141168074.3
[array(['https://docs.bizbudding.com/wp-content/uploads/2020/04/swank-theme-top-bar-left-and-right-widget-areas-3.png', 'Swank Theme Top Bar Left and Right Widget Areas 1'], dtype=object) array(['https://docs.bizbudding.com/wp-content/uploads/2020/04/swank-theme-top-bar-left-and-right-widget-areas-2.png', 'Swank Theme Top Bar Left and Right Widget Areas'], dtype=object) ]
docs.bizbudding.com
- Running Redis on the application server - Setting up a Redis-only server - Increasing the number of Redis connections beyond the default - Tuning the TCP stack for Redis - Running with multiple Redis instances - Redis Sentinel - Setting the Redis Cache instance as an LRU - Using a Redis HA setup - Using Secure Sockets Layer (SSL) - Renamed commands - SSL certificates - Lazy freeing - Common Troubleshooting Configuring Redis Running Redis on the application server Using an alternate local Redis Instance Omnibus GitLab provides an instance of Redis by default. Administrators who wish to point the GitLab application at their own locally running Redis instance should make the following changes in gitlab.rb. Run gitlab-ctl reconfigure for the changes to take effect. redis['enable'] = false # Redis via TCP gitlab_rails['redis_host'] = '127.0.0.1' gitlab_rails['redis_port'] = 6379 # OR Redis via Unix domain sockets gitlab_rails['redis_socket'] = '/tmp/redis.sock' # defaults to /var/opt/gitlab/redis/redis.socket # Password to Authenticate to alternate local Redis if required gitlab_rails['redis_password'] = 'Redis Password' Making a bundled Redis instance reachable via TCP Use the following settings if you want to make one of the Redis instances managed by Omnibus GitLab reachable via TCP. redis['port'] = 6379 redis['bind'] = '127.0.0.1' Setting up a Redis-only server If you’d like to setup a separate Redis server (e.g. in the case of scaling issues) for use with GitLab you can do so using GitLab Omnibus. Setting up the Redis Node Note: Redis does not require authentication by default. See Redis Security documentation for more information. We recommend using a combination of a Redis password and tight firewall rules to secure your Redis service. - Download/install GitLab Omnibus using steps 1 and 2 from GitLab downloads. Do not complete other steps on the download page. Create/edit /etc/gitlab/gitlab.rband use the following configuration. # Disable all services except Redis redis_master_role['enable'] = true # Redis configuration redis['port'] = 6379 redis['bind'] = '0.0.0.0' # If you wish to use Redis authentication (recommended) redis['password'] = 'Redis Password' # Disable automatic database migrations # Only the primary GitLab application server should handle migrations gitlab_rails['auto_migrate'] = false Note: The redis_master_role['enable']option is only available as of GitLab 8.14, see gitlab_rails.rbto understand which services are automatically disabled via that option. - Run sudo gitlab-ctl reconfigureto install and configure Redis. Configuring the GitLab Application Node The following settings point the GitLab application at the external Redis service: redis['enable'] = false gitlab_rails['redis_host'] = 'redis.example.com' gitlab_rails['redis_port'] = 6379 # Required if Redis authentication is configured on the Redis node gitlab_rails['redis_password'] = 'Redis Password' Run sudo gitlab-ctl reconfigureto configure the application to use the external Redis node. Using Google Cloud Memorystore Google Cloud Memorystore does not support the Redis CLIENT command. By default Sidekiq will attempt to set the CLIENT for debugging purposes. This can be disabled via this config setting: gitlab_rails['redis_enable_client'] = false Increasing the number of Redis connections beyond the default By default Redis will only accept 10,000 client connections. If you need more that 10,000 connections set the ‘maxclients’ attribute to suite your needs. Be advised that adjusting the maxclients attribute means that you will also need to take into account your systems settings for fs.file-max (i.e. “sysctl -w fs.file-max=20000”) redis['maxclients'] = 20000 Tuning the TCP stack for Redis The following settings are to enable a more performant Redis server instance. ‘tcp_timeout’ is a value set in seconds that the Redis server waits before terminating an IDLE TCP connection. The ‘tcp_keepalive’ is a tunable setting in seconds to TCP ACKs to clients in absence of communication. redis['tcp_timeout'] = "60" redis['tcp_keepalive'] = "300" Running with multiple Redis instances GitLab includes support for running with separate Redis instances for different persistence classes, currently: cache, queues, shared_state and actioncable. - Create a dedicated instance for each persistence class as per the instructions in Setting up a Redis-only server Set the appropriate variable in /etc/gitlab/gitlab.rbfor each instance you are using: gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL gitlab_rails['redis_actioncable_instance'] = REDIS_ACTIONCABLE_URL Note: Redis URLs should be in the format: redis://:PASSWORD@REDIS_HOST:PORT/2 Where: - PASSWORD is the plaintext password for the Redis instance - REDIS_HOST is the hostname or IP address of the host - REDIS_PORT is the port Redis is listening on, the default is 6379 - Run gitlab-ctl reconfigure Redis Sentinel For details on configuring Redis Sentinel, see. Setting the Redis Cache instance as an LRU Using multiple Redis instances allows you to configure Redis as a Least Recently Used cache. Note you should only do this for the Redis cache instance; the Redis queues and shared state instances should never be configured as an LRU, since they contain data (e.g. Sidekiq jobs) that is expected to be persistent. To cap memory usage at 32GB, you can use: redis['maxmemory'] = "32gb" redis['maxmemory_policy'] = "allkeys-lru" redis['maxmemory_samples'] = 5 Using a Redis HA setup See. Using Secure Sockets Layer (SSL) Redis v3.2.x does NOT support SSL out of the box. However, you can encrypt a Redis connection using stunnel. AWS ElasticCache also supports Redis over SSL. Renamed commands By default, the KEYS command is disabled as a security measure. If you’d like to obfuscate or disable this command, or other commands, edit the redis['rename_commands'] setting in /etc/gitlab/gitlab.rb to look like: redis['rename_commands'] = { 'KEYS': '', 'OTHER_COMMAND': 'VALUE' } OTHER_COMMANDis the command you want to modify VALUEshould be one of: - A new command name. - ’’, which completely disables the command To disable this functionality: - Set redis['rename_commands'] = {}in your /etc/gitlab/gitlab.rbfile - Run sudo gitlab-ctl reconfigure Limitations GitLab does NOT ship with stunnel or other tools to provide encryption for the Redis server. However, GitLab does provide client support via the rediss://(as opposed to redis://) URL scheme. Redis Sentinel does NOT support SSL yet. If you use Redis Sentinel, do not activate client support for SSL. This pull request may bring native support to Redis 6.0. Activating SSL (client settings) To activate GitLab client support for SSL, do the following: Add the following line to /etc/gitlab/gitlab.rb: gitlab_rails['redis_ssl'] = true Run sudo gitlab-ctl reconfigurefor the changes to take effect. SSL certificates If you are using custom SSL certificates for Redis, be sure to add them to the trusted certificates. Lazy freeing Redis 4 introduced lazy freeing. This can improve performance when freeing large values. This setting defaults to false. To enable it, you can use: redis['lazyfree_lazy_eviction'] = true redis['lazyfree_lazy_expire'] = true redis['lazyfree_lazy_server_del'] = true redis['replica_lazy_flush'] = true Common Troubleshooting x509: certificate signed by unknown authority This error message suggests that the SSL certificates have not been properly added to the list of trusted certificates for the server. To check whether this is an issue: Check Workhorse logs in /var/log/gitlab/gitlab-workhorse/current. If you see messages that look like: 2018-11-14_05:52:16.71123 time="2018-11-14T05:52:16Z" level=info msg="redis: dialing" address="redis-server:6379" scheme=rediss 2018-11-14_05:52:16.74397 time="2018-11-14T05:52:16Z" level=error msg="unknown error" error="keywatcher: x509: certificate signed by unknown authority" The first line should show redissas the scheme with the address of the Redis server. The second line indicates the certificate is not properly trusted on this server. See the previous section. Verify that the SSL certificate is working via these troubleshooting steps. NOAUTH Authentication required A Redis server may require a password sent via an AUTH message before commands are accepted. A NOAUTH Authentication required error message suggests the client is not sending a password. GitLab logs may help troubleshoot this error: Check Workhorse logs in /var/log/gitlab/gitlab-workhorse/current. If you see messages that look like: 2018-11-14_06:18:43.81636 time="2018-11-14T06:18:43Z" level=info msg="redis: dialing" address="redis-server:6379" scheme=rediss 2018-11-14_06:18:43.86929 time="2018-11-14T06:18:43Z" level=error msg="unknown error" error="keywatcher: pubsub receive: NOAUTH Authentication required." Check that the Redis client password specified in /etc/gitlab/gitlab.rbis correct: gitlab_rails['redis_password'] = 'your-password-here' If you are using the Omnibus-provided Redis server, check that the server has the same password: redis['password'] = 'your-password-here' Redis connection reset (ECONNRESET) If you see Redis::ConnectionError: Connection lost (ECONNRESET) in the GitLab Rails logs ( /var/log/gitlab-rails/production.log), this might indicate that the server is expecting SSL but the client is not configured to use it. Check that the server is actually listening to the port via SSL. For example: /opt/gitlab/embedded/bin/openssl s_client -connect redis-server:6379 Check /var/opt/gitlab/gitlab-rails/etc/resque.yml. You should see something like: production: url: rediss://:mypassword@redis-server:6379/ If redis://is present instead of rediss://, the redis_sslparameter may not have been configured properly, or the reconfigure step may not have been run. Connecting to Redis via the CLI When connecting to Redis for troubleshooting you can use: Redis via Unix domain sockets: /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket Redis via TCP: /opt/gitlab/embedded/bin/redis-cli -h 127.0.0.1 -p 6379 Password to authenticate to Redis if required: /opt/gitlab/embedded/bin/redis-cli -h 127.0.0.1 -p 6379
https://docs.gitlab.com/12.10/omnibus/settings/redis.html
2020-11-23T21:51:41
CC-MAIN-2020-50
1606141168074.3
[]
docs.gitlab.com
get_stormsand get_storm_datahave been rewritten to utilize pkg crul’s asynchronous features. This will not make much of a difference in get_storms(and may actually be slightly slower; to be explained). But the difference with get_storm_datashould be very noticeable. There is a limit to hitting the NHC archives; 80 requests per 10 seconds. Both functions send 4 links through every 0.5 seconds to avoid this limit. Timeout issues should no longer occur so options rrricanes.http_attempts and rrricanes.http_timeout have been removed. The primary cause of long processing now is due to scraping, particularly with the fstadvproducts; the amount of data in these products and the unstructured nature of the products require a number of rules. This can probably be simplified in future releases. (#94) load_storm_datanow takes readr::read_csvparameters. Keyvariable added to discusdataframes. Keywill be NA for all cyclones >= 2005. Should not be <= 2006. (#80) Advvariable from posestdataframes. Position estimates do not have advisory numbers. (#81) Advvariable from update. Updates do not have advisory numbers. (#84) Keyto get_publicdataframes. (#85) Keyto get_updatedataframes. (#86) get_fstadv. Hrs 48 and 72 hours only have 34 and 50kt wind fields. Hrs 96 and 120 have none. (#89) knots_to_mph, mb_to_in, status_abbr_to_str, get_discus, get_fstadv, tidy_fstadv, tidy_wr, tidy_fcstand tidy_fcst_wr. gis_advisoryTypically will include current and past track data, forecast track data, forecast cone (margin of error) and wind radius data. gis_breakpointsList of breakpoints typically used for watch/warning areas but is not a requirement. gis_latestRetrieves the latest GIS products for all active storms. gis_outlookRetrives the latest tropical weather outlook in shapefile format. gis_prob_storm_surgeProbabilistic storm surge; a polygon dataset for psurge and esurge products with various criteria. gis_windfieldWind radius datasets. gis_wspWind speed probabilities gis_downloadUse this function to download the URLs returned from the above functions. shp_to_dfadded to convert lines and polygons spatial dataframes to dataframes. Points dataframes can be converted using tibble::as_dataframe(target the @data object). pkgdown. load_storm_datadirectly returns dataframes. Additionally, retrieval by basin and years removed in favor of importing complete product datasets. Additionally, documentation has been added to the website on using data.world as a third option. The difference between these two options is load_storm_datawill return complete datasets. Using data.world will allow users to write custom queries to retrieve data. (#76) rrricanes.http_sleepto control time to sleep between multiple HTTP requests. get_fstadv, get_prblty, get_wndprb, tidy_fstadv, tidy_wr, tidy_fcstand tidy_fcst_wr. tidy_fcstand tidy_fcst_wrwould err if all forecast periods were not available for a cyclone. Functions now analyze dataframe to determine what forecast fields exist, then tidies based on the result. (#73) Changed name from Hurricanes to rrricanes. get_storm_data can now be chained to other commands and returns a list of dataframes. load_storm_data accesses pre-scraped datasets and returns requested products through the github repo rrricanesdata. This was done to make it quicker to get data. It should not be relied on to get the most immediate data for current storms. However, it should be fairly up-to-date. Original functions can be used if for some reason immediate data access is needed. saffir returns Saffir-Simpson classification of tropical cyclones; abbreviated. status_abbr_to_str converts storm status abbreviations (i.e., TD, TS, HU) to string. twoal and twoep parse tropical weather outlook XML files. Gives current status, if any, of areas of interest in either basin. tidy_fstadv, tidy_wr, tidy_fcstand tidy_fcst_wrhave been added to replaced now-removed fstadv_split(). rrricanes.http_timeoutand rrricanes.http_attemptsadded to give user more control over this. Default is 3 attempts with no more than 5 permitted. get_stormson some linux distros generated xpath_element error. Corrected. (#67) get_storm_data. Replaced css parameter in rvest::html_nodescalls with xpath parameter. Some products (notably, get_prblty) do not have a “pre” tag but are text documents (not HTML). Modified scrape_contentsto return full contents if “pre” tag doesn’t exist. Tested get_discusand get_public; no errors generated. (#68) Retrieve all storm’s for a given year (>=1998) and access data from a given storm’s history. Can access “current” storm position, structure details, forecast, and discussions. This release should be considered beta. While I’ve made every effort to ensure quality there may be an issue here or there. I will work on developing QA/QC scripts as time permits. Please send any issues or questions to:. Use get_storm_data to access one or multiple products for a specific storm. Not parsed but contains technical information on the cyclone, development tendencies and forecast model tendencies. Contains the meat of data. Current storm information, forecast information, wind and sea data. Can use fstadv_split() to break the wide dataframe to multiple, relational dataframes. Contains current position estimate for a given storm. Usually issued during threats to land. Not issued for all storms. Not parsed. Strike probabilities for given locations prior to 2006 (See Wind Speed Probabilities for >= 2006). Quick information given when a storm is threatening or undergoes a significant change. Not issued for all storms.
https://docs.ropensci.org/rrricanes/news/index.html
2020-11-23T22:36:40
CC-MAIN-2020-50
1606141168074.3
[]
docs.ropensci.org
- Project features - New project - Project settings - Import or export a project - CI/CD for external repositories - Project members - Project activity - Project’s landing page - Redirects when changing repository paths - Use your project as a Go package - Access project page with project ID - Project aliases - Project APIs Projects - Repositories: Host your code in a fully integrated platform - Branches: use Git branching strategies to collaborate on code - Protected branches: Prevent collaborators from messing with history or pushing code without review - Protected tags: Control over who has permission to create tags, and prevent accidental update or deletion - Repository mirroring - - Merge Requests: Apply your branching strategy and get reviewed by your team - Merge Request Approvals: - Autocomplete characters: Autocomplete references to users, groups, issues, merge requests, and other GitLab elements. - - Kubernetes cluster integration: Connecting your GitLab project with a Kubernetes cluster - Feature Flags: Feature flags allow you to ship a project in different flavors by dynamically toggling certain functionality - GitLab Pages: Build, test, and deploy your static website with GitLab Pages Other features: - Wiki: document your GitLab project in an integrated Wiki. - Snippets: store, share and collaborate on code snippets. - Value Stream Analytics: review your development lifecycle. - Insights: configure the Insights that matter for your projects. - Security Dashboard: Security Dashboard. - Syntax highlighting: an alternative to customize your code blocks, overriding GitLab’s blacklist licenses for projects. - Dependency List: view project dependencies. - Requirements: Requirements allow you to create criteria to check your products against. - Static Site Editor: quickly edit content on static websites without prior knowledge of the codebase or Git commands. Project integrations Integrate your project with Jira, Mattermost, Kubernetes, Slack, and a lot more. New project Learn how to create a new project in GitLab. Fork a project You can fork a project in order to: - Collaborate on code by forking a project and creating a merge request from your fork to the upstream project - Fork a sample project to work on the top of that Project settings Set the project’s visibility level and the access levels to its various pages and perform actions like archiving, renaming or transferring a project. Read through the documentation on project settings. Import or export a project - Import a project from: - Export a project from GitLab - Importing and exporting projects between GitLab instances within the project’s repository. - If the project doesn’t contain either of these files, the visitor will see the list of files and directories of the repository. For users without permissions to view the project’s code: - The wiki homepage is displayed, if any. - The list of issues within the project is displayed. Redirect. Use your project as a Go package Any project can be used as a Go package including private projects in subgroups. GitLab responds correctly to go get and godoc.org discovery requests, including the go-import and go-source meta tags, respectively. To use packages hosted in private projects with the go get command, use a .netrc file and a personal access token in the password field. For example: machine example.gitlab.com login <gitlab_user_name> password <personal_access_token> Access project page with project ID Introduced in GitLab 11.8. To quickly access a project from the GitLab UI using the project ID, visit the /projects/:id URL in your browser or other tool accessing the project. Project aliases for the project), the repository can be cloned using the alias (e.g git clone [email protected]:gitlab.git instead of git clone [email protected]:gitlab-org/gitlab.git). Project APIs There are numerous APIs to use with your
https://docs.gitlab.com/12.10/ee/user/project/index.html
2020-11-23T22:02:56
CC-MAIN-2020-50
1606141168074.3
[]
docs.gitlab.com
Applying for Resources¶ Computational resources may be allocated via several allocation mechanisms: - Academic researchers may apply via Open Access Competitions. - Commercial and non-commercial institutions may also apply via the Directors Discretion. - Foreign (mostly European) users can obtain computational resources via the PRACE (DECI) program. In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants apply for a particular amount of core-hours of computational resources. The requested core-hours should be substantiated by scientific excellence of the proposal, its computational maturity and expected impacts. The allocation decision is based on the scientific, technical, and economic evaluation of the proposal. For more information, see Computing resources allocation and Obtaining Login Credentials.
https://docs.it4i.cz/general/applying-for-resources/
2020-11-23T21:59:34
CC-MAIN-2020-50
1606141168074.3
[]
docs.it4i.cz
](/sdk/js/5/core-classes/kuzzle/query) sort(key, [options], callback) # Options # Callback Response # Returns an array of sorted values. Usage # // Using callbacks (NodeJS or Web Browser) kuzzle.memoryStorage.sort('key', function (err, values) { // callback called once the action has completed }); // Using promises (NodeJS only) kuzzle.memoryStorage.sortPromise('key') .then(values => { // resolved once the action has completed }); Callback response: ["sorted element1", "sorted element2", "..."]
https://docs-v2.kuzzle.io/sdk/js/5/core-classes/memory-storage/sort/
2021-04-10T19:03:15
CC-MAIN-2021-17
1618038057476.6
[]
docs-v2.kuzzle.io
Co-Teaching Starting with PowerSchool 7.6 sections can be shared with additional teachers. This means additional teachers can be added to sections with separate access. This means that the teachers can either share part or none of the teacher side. This feature is useful in many situations beyond co-teachings. It can also be used to replace a teacher leaving early in the year, teacher aides, and observation. Adding an Additional Teacher If there is a section where more then one teacher needs access, the additional teacher(s) can be added following the instructions below. - Log into PowerSchool - Click on school in the left menu - Click sections toward the bottom of the page - Click on the course - Click the section - Scroll down to Teacher/Staff Additional - Click Add - Select the teacher and role - Use the chart above to select the proper role - The allocation is what percentage that teacher is participating in the section - Allocation is normally 100% - Set the date - If the assigment is temporary, put in the date the teacher will be part of the class - Normally the default semester dates - Click submit at the bottom ( Run report for co-teachers- /extended rpts/scheduling/co teaching list) 11. Enter state information - Go back into the same section - Scroll to the bottom, click Teacher Course Assignment Tracking Records - Click Add - 20-Enter the date the teacher will be teaching. - 21-Choose the lead teacher - 22-Enter 1.0 for Teacher to Course Commitment - 23-Choose 01-Teacher of Record for Role of Professional - Click Submit - The date must be between the start and end dates of the section Repeat process for additional teacher (co teacher). - Repeat for every section
https://docs.glenbard.org/index.php/ps-2/admin-ps/scheduling/co-teaching/
2021-04-10T18:34:54
CC-MAIN-2021-17
1618038057476.6
[]
docs.glenbard.org
.: In the above, the value in execDate is tested to see if it is either:. When you run a job via the APIs, you can apply parameter overrides to the following parameter types: For more information, see API Workflow - Run Job.
https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=160412031
2021-04-10T19:41:47
CC-MAIN-2021-17
1618038057476.6
[]
docs.trifacta.com
axial smt10 're release About Landing JConcepts has just released a teaser picture of a new monster truck tire & wheel, which is mounted up to an Axial SMT10 Grave Digger. Quick shop . Axial started teasing the re-release of the SMT10 save my name, email, more! Top Rated Plus. Solid axle monster truck fans would basically buy it indefinitely, methinks. I’ve been waiting to do a big write-up on the Axial SMT10 for awhile now, so I figured after purchasing and modifying my second one (and with a third in the near future as well) it was high time I talked about it. … Not much has changed between the original SMT10 and the 2020 model, at least in terms of mechanics. From Hong Kong; For Axial Wraith Scorpion SMT10 Front ALUMINUM KNUCKLES + STEERING ARM -GREEN. The stock SMT10 could be fun to bomb around with in stock form, if you used 3S and installed the higher tooth pinion….but it had crawler drag brake on the AE-5 ESC. This was, uh, not ideal. They asked for suggestions, should they bring back the SMT10 platform. Posted by Doug in Monster Truck Madness on Tuesday, May 21st, 2019 at 3:14 pm. Enter Axial Racings new SMT10 Monster Truck chassis with the first release being the AX90055 Grave Digger Monster Jam edition truck! Unlike the Grave Digger, the Raw Builder’s Kit is all black with no trademark bright green parts. Quote: Originally Posted by chino. Are unfamiliar with what a “ axial monster truck conversion truck is just what the doctor ordered that. This new release comes as a bit of a surprise to me, but isn’t an announcement that is completely out of left field. Just make them unlicensed so we don’t have to pay too much extra, as many folks will put on the larger Clod-sized JConcepts and Pro-Line Racing tires. If you added any speed or larger tires, it became unwieldy and absolutely needed a front sway bar added to avoid going arse-over-tea-kettle under hard cornering. Axial AX10™ Transmission Parts List By AR on Jan 04, 2018. Axial SMT10 Grave Digger (AX90055) Aluminum Front C-Hub & Knuckle Arm (5 Degree Caster) - 4Pcs Set Black. The Axial® SMT10™ Grave Digger® 1/10 4WD Monster Jam® Truck comes factory-built and ready-to-run with updated, improved electronics and radio gear. Axial AR60 Heavy Duty C-Hub Carrier Axial AR60 Heavy Duty Steering Linkage Axial AR60 Machined High Clearance Diff Cover RPP Price: $5.99 . It’s a heck of a platform! Quick shop. In Stock! If they could get the price on this to say, oh, i dunno, $199 to $249, I think they would move a lot of them. Newbie . It seems fine width wise, and height is a non-issue. However, if I had my druthers I’d have one more suggestion…. The shot above is a screen grab from an Axial Facebook post they put online on a few days ago. Quote: Originally Posted by BigBaller. Richmk1 replied to Richmk1's topic in Monster RCs. This lead to a truck that was quite “rolly-poly” in stock form. In Stock! The stock SMT10 could be fun to bomb around with in stock form, if you used 3S and installed the higher tooth pinion….but it had crawler drag brake on the AE-5 ESC. The truck sat at a super long 13.9″, and therefore it did not play well with the vast majority of aftermarket bodies on the market. I think $299 or under is the sweet spot for this rig. Axial 1/10 Scale SMT10 Monster Truck Raw Builders Kit - AXI03020 USED. Found in: AX10 / SCX10 / SCX10 II RTR / WRAITH / SMT10 3 GEAR TRANS Stock Gearing SCX10: 32P 13T Pinion / 56T Spur Gear SCX10 II RTR: 32P … The electronics system has also received an update in the form of a Dynamite ESC and Spektrum radio system. I think the above changes would make for a very successful re-release of the SMT10. RCNewb.com is a participant in the Amazon Services LLC Associates and AvantLink programs, affiliate advertising programs designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com and other select merchants. I think what is more important is a lower price, and not having any licensed goodies would help the cause to be at said lower price. There is finally a legit, ready to run solid axle monster truck about to hit the market. A quick reference parts list for the Axial AX10™ Transmission. © 2004-2021 Big Squid RC, Inc., all rights reserved. Schäden an den Batterien … I was even more stoked, however, when I found a big ol’ box of Axial’s aftermarket goodies included so I could do an upgrade article. Updated grave digger version RTR. figure instead of submitting a comment, why not just devote this week’s MTM to the topic? The mounting holes for these side plates are also countersunked where we can, so we … The icing on the cake is the replica Grave Digger body that sits atop the SMT10 chassis. This RTR truck has been paired with Axial’s AX10 transmission and features all-metal internal gears, WB8 heavy-duty driveshafts, aluminum-bodied/adjustable shocks, and a heavily-detailed (and very green) tube-frame chassis. release date: Early August. December 21, 2019; 5 replies Axial smt10 re release uk date. Whether you’re a radio control fanatic or RC novice we have everything you are looking for! That’s not a Wraith or a Ridgecrest chassis lurking under there. The SMT10 (AX90055) is a new branch in Axial’s product tree, with the first bud being a ready-to-run (RTR), 4-wheel drive replica of Grave Digger, the Monster Jam monster truck. ©2021 Horizon Hobby, LLC. Suffice it to say, I feel like I know the platform pretty well. The RC Grave Digger rides on the Axial SMT10 tube frame chassis, which was developed for maximum bashing strength and an extremely detailed appearance. + 7 S 0 P O N S O A R P A 7 E E D-1-1 U J-1 0 F J-1-1. Followers 0. Some links to third-party websites within posts on our blog may be affiliate links.Affiliate links will be styled with a wavy underline. It would also be nice if a lower turn brushed motor was used, say, around 20 turns. Visit horizonhobby.com to learn more about this ready-to-run radio-controlled truck. Mated to the proven Axial® AR60™ axle, the 4-link suspension soaks up any obstacles in its path. It would sure be nice to be able to fit a sweet JConcepts or Pro-Line body on it, right out of the box, without having to get brand new links.. 5 product ratings - Axial 1/10 SMT10 4 Wheel Drive Monster Truck Raw Builders Kit AXI03020. AXI03000T1 Capra 1.9 Unlimited Trail Buggy 1/10th RTR (Red) $449.99 PRODUCT INFO; AXI03000T2 Capra 1.9 Unlimited Trail Buggy 1/10th RTR (Green) $449.99 PRODUCT INFO; OCT. 2019. A front sway bar should come standard. These works with the new SMT10 Grave Digger, Yeti Rear suspension lower links, RR10 Bomber Rear suspension lower links. Not strictly Tamiya, but have people noticed Axial is back with the SMT10 after quietly pulling it for sales awhile back? diamond centered rear axle for axial wraith / rr10 (black) hd centered rear axle shafts for wraith / rr10. Buy It Now +$4.95 shipping. First off, many long time readers know I love monster trucks. Location: Los Angeles. Axial SMT10 Grave Digger (AX90055) Aluminum Center Transmission Case - 1 Set Blue. If this was done, the SMT10 could be a lot of fun right out of the box. Was developed for maximum strength and an extremely detailed appearance latest model axial monster truck conversion the popular SCX24™ from. $24.90 USD. In addition to the re-release of the RTR Grave Digger, Axial now has a kit version of the SMT10. Mated to the proven Axial® AR60™ axle, the SMT10™ chassis’ 4-link suspension soaks up any obstacles in its path. Changes include Metal gear transmission 12T 550 brushed motor 60A brushed ESC Spektum Radio and Servo $100 cheaper price vs the previous version at $299.99 MORE POWER! Click to subscribe ⇨ My buddy Jeremy came over to help wrench on and tune my Max-D for the Solid Showdown. Richmk1 posted a topic in Monster RCs. It appears they've learned a thing or two and have listened to their customers somewhat (novel idea, cough, Tamiya). In Stock! I mean barely. After stringing hobbyists along on social media (in a good way), Axial has released an updated version of their SMT10 ready-to-run (RTR) monster truck. Priced at $299.99, the Axial SMT10 Grave Digger Monster Truck will be available in January 2020 through Axial and their dealers. I’ve been building and racing SMT10s in monster truck competition for 4 years now.’d think this could be done relatively cheap, by using new plastic upper and bottom links. One of the most popular modifications that folks made to the OG SMT10 was to shorten the wheelbase. Comes as a rtr or a builder's kit now to. That needs to change on the next release. High-quality electronics also contribute to the Grave Digger model’s never-say-die performance. The AX10™ transmission included with the SMT10™ Monster Truck RAW Builder’s Kit is designed to make the most of your motor’s power. This was, uh, not ideal. Quote: Originally Posted by Akura2. The SMT10 is AR60 axle based as expected, but the tube chassis is all new. 26022 Pala, Mission Viejo, CA 92691. In Stock! Brand New. I’ve been looking forward to getting one of Axial Racing’s SMT10 Monster Trucks, so when the MAX-D landed on my doorstep (thanks UPS guy), it was literally a matter of seconds before the box was open and the truck was out. Location: Polson. Pre-Owned. Change the Wheelbase to 12.5″ (ish) Inches. Monster Truck Madness #6 – Getting Started with an Axial SMT10. The hinged door on the battery compartment has to bend around the end of the battery because of its length, and it can be tricky getting the e-clip in to hold it shut. Congratulations guys! Axial SMT10 Re-release with lower price and metal gears. The 2.6 sized tire (Clod), called the Renegade (as seen on the sidewall), has a very aggressive tread that mimics a hand carved 66″ Goodyear Terra that has long been popular with full size trucks. Just been re released, including metal gears and better electronics. I would absolutely LOVE to see the SMT10 come back as a Raw Builder’s Kit, sans tires, electronics and body. high ratio transmission kit for wraith . Outside of the original Clod, it’s probably my favorite r/c release of all time. Once again carrying its iconic black and green color scheme, the SMT10 Grave Digger Monster Truck has re-emerged from the shadows. Let’s take a look at this new release and see what sort of fun we can have with it! Watch; Axial 1/10 SMT10 4 Wheel Drive Monster Truck Raw Builders Kit AXI03020. That needs to change on the next release. C $39.28; Buy It Now +C $9.75 shipping; 7 Watching. Verwenden Sie keine Nickel-Cadmium-oder Nickel-Metall-Hydrid-Akku Ladegerät LiPo-Akkus aufladen. It would also be nice if a lower turn brushed motor was used, say, around 20 turns. It should have upgraded transmission gears and maybe a few other hop-ups like aluminum c-hubs, knuckles and rear lockouts. C $37.75; Buy It Now +C $25.68 shipping; 7 Watching. scale steel driveshaft for axial wraith. Look, I love scale 4×4’s and trailing….but I LOOOOOVE me some monster trucks. Big Squid RC, Basher Approved, and the Big Squid Logo are registered trademarks of Big Squid RC, Inc. All other trademarks and copyrights are the property of their respective owners. One of the most popular modifications that folks had to make was to get an aftermarket short wheelbase kit, dropping it down to around 12 and a half inches. Pebble Pounder . 9 product ratings - Axial SMT10 Grave Digger SPUR & Slipper (Pinion, cover, mount 56t Max-D AX90055. Save axial smt10 used to get e-mail alerts and updates on your eBay Feed. Tower Hobbies Find RC crawler parts for Axial SMT10 at RCPlanet.com. Re: Jato's SMT10. This has been a loooooong time coming. And bare in mind that while yes, I’d love to see a completely tricked out version that’s done up to the 9’s, I’m trying to be realistic here on what could be an actual release, mass marketed to your average monster truck nut at an affordable price. RPP Price: $19.99 . Bye, bye... 02-15-2017, 09:51 PM #20: Akura2. $9.79. trailing arms for smt10 / rr10 / yeti (silver) 2 speed transmission kit for wraith. I’ve talked about them ad nauseum in this space for years. All Rights Reserved. 5.0 out of 5 stars. Visit horizonhobby.com to learn more about this ready-to-run radio-controlled truck. 550 SIZED 12-TURN MOTOR Among the updates Axial designers have given this Grave Digger® truck is a … Axial Aluminum Link Set Optional SMT10 Monster Truck Axial Aluminum Shock Spring Retainer 12mm Black Axial Aluminum Shock Spring Retainer 12mm Blue RPP Price: $67.99 . Show/Artical on an Axial AX10 and would like to Convert it to … Axial smt10 re release uk date. RPP Price: $19.99 . Everybody’s Scalin’ – All About the SMT10. While there are plenty of carry-over items, this new model does offer some new bits, including a 12-turn brushed motor for improved torque and overall speed. Posts: 32 Re: What parts to get with SMT-10 Grave Digger . Here’s what I’d like to see updated over the OG Grave Digger and Max-D versions, should it come back. After stringing hobbyists along on social media (in a good way), Axial has released an updated version of their SMT10 ready-to-run (RTR) monster truck. SMT10 Grave Digger 1/10th 4wd Monster Truck RTR $299.99 PRODUCT INFO; AXI03020 SMT10 1/10th Scale Monster Truck Raw Builders Kit $199.99 PRODUCT INFO; NOV. 2019. Not much has changed between the original SMT10 and the 2020 model, at least in terms of mechanics. $199.99. With the help of Greg H. they knocked this thing out of the park and won the monster truck class at the Axialfest 2020 concours! Officially licensed monster trucks are cool, but in the SMT10’s case I don’t think it’s necessary. Just wanted to share s short video of Jeremy’s Bigfoot build that started with the Axial SMT10 kit. Time left 3d left. Page 4 Do not use a Nickel Cadmium or Nickel Metal Hydride battery charger to charge LiPo batteries. Bye, bye... What does that … Axial SMT10 Parts. From United States; Axial Aluminum Gear Box plate battery board & For Axial SMT10 AX90055 … $26.90 USD. 25t aluminum servo horn (black) hd d60 centered rear axle for wraith / smt10 / yeti. Other than that and the absence of a body, tires and electronics, the kit shares all the same specs and features of the RTR Grave Digger. Naughty, naughty...no vendor star. Words and photos by Tony Phalen. The SMT10 came with a rear sway bar but nothing up front. Once again carrying its iconic black and green color scheme, the SMT10 Grave Digger Monster Truck has re-emerged from the shadows. First the rumors, then the teasers and now the release. While that’s my .02, I know that regardless of how it comes back, myself and other monster truck nuts are just hoping that it DOES come back. Axial SMT10 Parts. Axial SMT10 Grave Digger (AX90055) & MAX-D (AX90057) Aluminum Front/Rear Internal Shocks (3 Length Adjustable Design ) - 1Pr Set Brown. $20 shipped to the lower 48... just let me know Naughty, naughty...no vendor star. 25 watchers. Of course like anything RC, It could always use a little upgrading! AXI03001T1 SCX10 II 1955 Ford … Summary of Contents for Axial GRAVE DIGGER SMT10. Re: Darcsyde Axial SMT10. I can't confirm the dimensions, but I run a 5000mah 3S Traxxas Lipo in mine and it barely fits. Bigfoot Axial SMT Build Takes a Victory Lap. Join Date: Mar 2014. Anything else breaking regularly? Available ready-to-run, the vehicle comes with a 2.4GHz Spektrum radio system and powerful Dynamite motor and ESC. Thanks for input, so steering links need upgrade from get go. $350.00. Here it is guys, the Smt10 is back!!! Each pack features 4 CNC aluminum plates that are in the same shape and profile as the plastic lower links with light weight cut-outs and all the necessary hardware to install them. You know, I’ve been very critical of the SMT10’s included 2.2 sized tires in the past, but the more I think on it, the more I believe 2.2 is absolutely fine for the RTR version. Featuring the same graphics as its 1:1 namesake, this scaled-down model is ready for action. 5 product ratings - Axial 1/10 Scale SMT10 Monster Truck Raw Builders Kit - AXI03020 USED. Damage to the batteries and charger will likely occur resulting in a fire. Join Date: Oct 2005. 5.0 out of 5 stars. This new release beings scale realism to RC monster trucks like never before. If this was done, the SMT10 could be a lot of fun right out of the box. Let us customize the rest with all of the amazing goodies the aftermarket provides. Not finding what you're looking for? Axial has done it. Axial SMT10 Grave Digger AX90055 Alu Front/Rear Upper Chassis Link Parts - Black. The Axial SMT10 Grave Digger RTR 1/10 4WD Monster Truck is a true-to-scale, 4-wheel drive, straight-axle Monster Jam RC truck that’s the perfect basher for everyone from scale RC enthusiasts to full-size monster truck fans. And I posted pics of the truck with clod wheels and tires Axial SMT10 with clodbuster wheels and tires 08-30-2016, 11:05 PM #7: USA1. Posts: 135 Re: Darcsyde Axial SMT10.
http://docs.parancoe.org/1ruz44a/9lq8gl.php?page=axial-smt10-%27re-release-b41658
2021-04-10T18:29:25
CC-MAIN-2021-17
1618038057476.6
[]
docs.parancoe.org
progress indicator, and clicking “setup alert” button. This will pre-populate the new trigger form with relevant information. New/Edit Trigger Form¶ The form for creating and editing triggers is identical. When editing, the form is pre-populated with the current trigger values. There are three types of triggers that can be created: topics, consumer groups or brokers. Topic¶ - Trigger Name - A name used to identify the trigger (for example ‘<topic name> production requests’). Uniqueness is not enforced but you should use different names to avoid confusion for all triggers. - Component Type - Should be selected as Topic. If this is not the case, see the other form trigger documentation. - Cluster id - The trigger for a topic will be limited to a specific cluster id. If you require a topic to be triggered by multiple clusters, create independant triggers for each cluster. - Topic(s) name A select list of options for matching against the value field (below). The name of the topic can Equal, Begin with, End with or Contains a specified value. Note For example, selecting Contains and then entering ‘topic’ into the value field will match ‘my topic’, ‘topical’, and ‘topics with data’. If Begins with is selected, the trigger will only match ‘topical’ and ‘topics with data’, not ‘my topic’. - Topics(s) name Value The name or part of a topic name to be triggered against. Works in conjunction with Topic(s) Name in order to match against one or many topics. Note If multiple topics match against topic name, the trigger will be per topic, not aggregate. In the case where we have two topics that Begin with ‘topic’, and we set trigger Bytes in greater than 100, any topic will fire the trigger if they get > 100 Bytes In. - Metric The value to check for the trigger alert. The possible values for this dropdown are: - Bytes in - Amount of bytes per second coming in to a topic. - Bytes out - Amount of bytes per second going out from a topic. - Out of sync replica count - Total number of partition replicas that are in-sync - Production request count - Amount of production requests per second to a topic in a cluster. - Under replicated topic partitions - Amount of under replicated topic partitions. A use case for this metric would be wanting to know if a Kafka broker crashed while holding a specific topic partition. - Condition - The trigger will fire when Condition is true of the difference between the value of the metric being monitored and the value of the Value field. Possibly Greater than, Less than, Equal to, or Not equal to. - Value - The value to which the topic Metric is compared. Consumer Groups¶ - Trigger Name - A name used to identify the trigger (for example ‘<consumer group name> under consumption’). Uniqueness is not enforced but you should use different names to avoid confusion. - Component Type - Should be selected as ‘Consumer group’. If this is not the case, see the other form trigger documentation. -. Possibly Greater than, Less than, Equal to, or Not equal to. - Value - The value to which the monitored consumer group Metric is compared. - Buffer - The delay behind real time to wait until a time window is considered for triggering (refer to Concepts for more information). Brokers¶ - Broker clusters - The cluster(s) to look at for a specific broker metric. - Trigger Name - A name used to identify the trigger (for example ‘Broker zookeeper down’). Uniqueness is not enforced but you should use different names to avoid confusion. - Broker clusters One or many broker clusters to trigger based on conditions. Note If multiple clusters are selected, the trigger will be per cluster, not aggregate, but not in all cases. See below for more details. - Metric Some of the values in Metric are triggered on a per broker basis and some are only availbale cluster wide. The selection method in Broker clusters is by cluster, but there are specific metrics in the brokers of clusters that can cause triggers. Important Any broker that meets the Condition below will trigger discretely - Bytes in - Number of bytes per second produced to this broker - Bytes out - Number of bytes per second fetched from this broker (including from other brokers as part of replication) - Production request latency - Latency of produce requests to this broker at the median, 95th, 99th, or 99.9th percentile (in milliseconds) - Production request count - Total number of produce requests to this broker (requests per minute) - Fetch request latency - Latency of fetch requests to this broker at the median, 95th, 99th, or 99.9th percentile (in milliseconds) Important Any cluster that meets the Condition below will trigger - Under replicated topic partitions Total number of topic partitions that are under-replicated (i.e. partition with in-sync replicas < replication factor) A trigger should be created for values > 0 - Offline topic partitions Total number of partitions that are offline A partition could be offline if for instance the number of available replicas for the partition is less than the configured minimum in-sync replicas. A trigger should be created for values > 0 - Zookeeper status - Are brokers able to connect to ZooKeeper? ‘Offline’ / ‘Online’ are possible values. - Zookeeper expiration rate - rate at which brokers are experiencing ZooKeeper session expirations (number of expirations per second) - Active controller count Number of brokers reporting as the active controller A trigger should be created for values != 1 - Leader election rate - Number of partition leader elections - Unclean election count Number of unclean partition leader elections When unclean leader election is held among out-of-sync replicas, there is a possibility of data loss if any messages were not synced prior to the loss of the former leader. A trigger should be created for values != 0 - Condition - The trigger will fire when Condition is true of the difference between the value of the metric being monitored and the value of the Value field. Possibly Greater than, Less than, Equal to, Not equal to, Online, or Offline, depending on the Metric selected. - Value - The value to which the broker Metric is compared. Actions Management¶ After creating a trigger, you will be given the option to go to the action management page to associate it with one or more existing actions, or if none exist, create a new action. Before being able to send email actions you will need to enable emails, and properly config Control Center to communicate with your SMTP server. At the very least you will need to set # this enables sending mail via c3 confluent.controlcenter.mail.enabled=true # this is the host name of your mail server confluent.controlcenter.mail.host.name=mymail.server # this is the port your mail server is running on confluent.controlcenter.mail.port=25 # we also recommend setting rest.listeners explicitly as well because # this will control the Control Center link that is embedded in the # body of any alert emails confluent.controlcenter.rest.listeners=control-center.server). Alert History¶ Selecting the history tab shows a table summarizing every trigger that has fired that caused an action to be executed (note: it does not list every triggered event). You can see contextual information for some items by clicking the “view” link, or in the case of broker alerts, the click will navigate to the ‘System Health’ page for further diagnostics.
https://docs.confluent.io/3.2.0/control-center/docs/alerts.html
2021-04-10T19:26:42
CC-MAIN-2021-17
1618038057476.6
[array(['../../_images/c3alertsprepopulate.png', '../../_images/c3alertsprepopulate.png'], dtype=object) array(['../../_images/c3alertstopictrigger.png', '../../_images/c3alertstopictrigger.png'], dtype=object) array(['../../_images/c3alertsedittrigger.png', '../../_images/c3alertsedittrigger.png'], dtype=object) array(['../../_images/c3alertsbroker.png', '../../_images/c3alertsbroker.png'], dtype=object)]
docs.confluent.io
About the Perks Program and Monetization¶ The Perks Program enables Core creators to make money with their games using a complete monetization system that can be customized to any game type. Joining the program gives access to Perks, which are in-game purchases built specifically for Core. To qualify for adding Perks to your games, there are two requirements: - Have an account for at least 30 days - Average 50 daily active users (DAU) across all of your games over a period of 30 days Note Members of the earlier Creator Payouts Pilot Program will automatically gain access to Perks. Creators in the Perks Program have access to Core’s Perks system, which enables creators to sell in-game benefits to players in exchange for Core Credits. Core creators can then redeem Core Credits earned through the sale of Perks for real world money. How Perks Work¶ Creators can monetize their games through Perks, which allow them to sell in-game benefits to players for Core Credits. There are three types of perks, separated by how often your players are able to purchase them: - Permanent: Purchases that unlock a permanent benefit, such as cosmetics, upgrades, and game access. - Repeatable: One-time-use purchases such as consumables, boosts, and virtual currencies. - Limited Time: Time-bound purchases that grant benefits over 7, 15, or 30 days, such as a battle pass. Note Limited Time Perks do not automatically renew. Creators control what kind of perks they use and how to add them to their games. There is no limit to the number of Perks a creator can have, and one Perk can apply to several of a creator’s games. Perks are added using the Core editor, where the creator can set the price, behavior, and appearance. The creator can customize it further using Lua and the Core API. The Perks system also includes a full testing suite for testing purchases, daily analytics reports, giving the creator complete control over how to design their game’s monetization. When a player buys a Perk, Core handles the transaction from start to finish. The Credits go to the creator’s account, at which point the creator can spend them in other Core games or in the Core shop. To learn more about how to create Perks for your game, see the Perks reference Redeeming Core Credits¶ Creators directly receive all the Core Credits that are spent on Perks. When the creator accumulates credits above a payout threshold, they become eligible for converting credits to real-world currency. These can be initiated through the Creator Dashboard. See the Core Credits redemption Help Center article for more information. Creator Revenue Split¶ Core shares revenue with creators in an even 50/50 split. Creators can receive this split by opting to convert their Core Credits to real-world currency as described above. When converting, creators receive 50% of the revenue spent on their Perks. Why 50%¶ By developing on Core, creators can build anything from a deep traditional MMO with a monthly subscription to simulation games with in-game cosmetic purchases, a completely free to play shooter or even a series of arcade games. In addition to development tools, Core provides server hosting, multiplayer networking, and instant publishing to the Core storefront. With Perks, Core also covers all transaction costs, such as credit card processing fees, customer support, and engine royalties for the underlying Unreal engine. Most development platforms provide only a subset of those benefits, passing the cost of the rest to creators. Other UGC platforms, such as Roblox, take a much larger share of revenue – or don’t allow monetization at all. By using Core, creators get a larger share of revenue compared to other similar platforms while also having access to the full stack of tools to build, publish, and grow their games. Applying to the Perks Program¶ Once you have reached the minimum DAU to qualify, you can apply to the Perks Program here. For tips on how to improve your game to meet the minimum DAU requirement, check out the improving your game to increase DAU guide. Learn More¶ Implementing Perks | Creator Analytics | How to Join the Perks Program | Enrolling in the Perks Program | Guidelines for Creating Perks
https://docs.coregames.com/perks/perks_program/
2021-04-10T18:53:20
CC-MAIN-2021-17
1618038057476.6
[array(['https://d33wubrfki0l68.cloudfront.net/e2e2b8a6c9a9c35cd07e5297ae6f487d751a733c/6a41a/img/perks/perks_playerpurchaseflow.png', 'Player Credit Purchase Flow'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/d1ffdcfc359aba993afc4c1fbf3876f925d075aa/06dfa/img/perks/perks_creatorcreditflow.png', 'Creator Credit Flow'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/6cf481e8f70bfbe1b8107a32b59e66876599bba0/8c6d3/img/perks/perks_fiftyfifty.png', '50/50 graphic'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/2afbd10d7d958459cfcf9e29d65edbb63760c83f/41682/img/perks/perks_coreroblox.png', 'Core Revenue Split vs Competitor'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/4d08703f17b6c4a5ca310238d14339b30849b808/42d68/img/perks/perks_platformcostchart.png', 'Core Revenue Split vs Competitor'], dtype=object) ]
docs.coregames.com
GPT-2 Examples¶ Input: In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English. Model Completition: At last count, researchers had found the remains of 57 unicorns on the edge of a gully of an unnamed valley. Today, they were excavating in a private land where the herd and their remains were located. But it was a 21-year-old man who claimed to have found the dead species. "They said, 'Give us all the meat you have,'" said Hilaia Rodríguez, who is head of communications and publicity for the Las Plata provincial government. "And I said 'Okay.' And I brought the carcass to him, where he cut it up and he ate it. His impression is that it was made out of alpaca. Input: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown. Model completition: The train from Fort Wayne was traveling to Germany when the conductor noticed a man sneaking off with one of its extra tires. The train was carrying 3,542 pounds of highly enriched uranium (HEU), which is controlled under the 1961 Nuclear Nonproliferation Treaty. The transport system for HEU was shutdown for security reasons. Cincinnati police say the man was arrested in Louisville, Kentucky. While the wheels of the train were initially missing, the city gave everyone evacuated at the station the option of taking a bus that could pick up the tires and bring them back to the station. Police told 10TV the thief now has a lot more time to think about the lives he's likely destroyed. A statement from Amtrak said: "We have no information to indicate that there was any compromise of the safety of passengers or crew on board the train" Input: Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today. Model completition: A rep for the singer tells TMZ, "Miley was approached by an Abercrombie and Fitch sales associate who informed her she could not leave without paying for a pair of boots. She was given a chance to pay for the boots, but did not, and as a result, was asked to leave. Miley immediately put her money in her purse." Miley is singing the line "I'm what I'm wearing" from the song I Kissed A Girl, which was released last month. We say: She sounds cute in that one. The rep adds, "Miley was put on a shopping spree to fulfill her social media obligations. She unfortunately left the store without paying for her purchases. We would never seek to enforce shoplifting laws in this fashion." Update, 8:20 p.m. PT: TMZ reports that Cyrus was unaware Input: Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry. Model completition: The two warriors swung down their axes in a grisly feast. Throwing themselves into battle, the Orcs broke down from the draught of blood flowing through their bodies. "Here, those are for you, ye boys," the Beast Master yelled, "and these are for me, to brag to the Lion! Come, it shall never die." With a triumphant cry, Balin raised his staff and tossed the Orc yoke to Gimli and his men. The Witch King led his Men over the burning plains and across the untamed swamps. "Brothers, we have fought a fierce battle, against giants, against Goblins, against Huns, and against Trolls. And all in vain. Oh, ye have done well, so well. Good Captain, had ye not befriended a few Tusker brethren you would already be slain by" Input: For today’s homework assignment, please describe the reasons for the US Civil War. Model completition: What began as a dispute over slavery quickly evolved into an all-out war that was fought for the preservation of the Union, not only for the preservation of slavery," said historian James Truslow Adams. "The North, though led by President Abraham Lincoln, was not opposed to slavery, it was opposed to the institution." Adams says this was a major turning point in the American Civil War as it began to take its final shape. While Lincoln did oppose slavery, he also saw slavery as a necessary part of the Union, and that if America was to become a strong nation, it was imperative for the Union to have the greatest amount of slaves on it and the greatest amount of money with which to fund the Union's war machine," said Adams. And what were the major themes of the Union's war? Adams notes that the major theme of the United States Input: Recycling is good for the world. NO! YOU COULD NOT BE MORE WRONG!! Model completition: Recycling means to destroy the natural world in order to recycle the natural world. If recycling were good for the earth and people, we would not be in the world we are in now! I want to start a new movement to stop the destruction of the earth and the people within it. It will be called the 'Wreckers' Party', and I want you to know this. This is not a joke. The people in power will not give us a voice to stop the destruction of the earth. We must demand that we have that voice. We must stop the destruction of the earth. If we do not get together and organize to stop this, they are going to do it without us. We need to organize ourselves, and we need to organize ourselves in groups and movements. I know there are many who are concerned about the current climate change. If you do not feel that this
https://docs.extrapolations.dev/models/gpt-2/examples/
2021-04-10T19:22:46
CC-MAIN-2021-17
1618038057476.6
[]
docs.extrapolations.dev
Adding Students to School Store Groups thru PowerSchool Glenbard has an online store through SchoolPay. Eligibility for items can be controlled through group enrollment in PowerSchool. The PowerSchool group enrollments are sent to SchoolPay each afternoon. To enroll a student in a School Store eligibility group through PowerSchool, take these steps: - Scroll to the Administration heading on the Navigation menu on the left. - Click School Store. - Check the Enrolled check box for the appropriate group This completes the enrollment process. Importing Students to Groups Students can be enrolled into groups in mass from a csv file. Note: This process cannot be undone in mass. If a spreadsheet is imported incorrectly, the user will need to go through each student and manually drop them from the group through each students’ profile. - If you have a listing of students in Excel, remove all columns except the student ID and save the file in CSV format. - Click Special Functions. - Click Import School Store Groups - Select the Group to import to from the drop-down list at the top. - Click the Choose Files button and browse your computer for the CSV file prepared in Step 1. - Click the Import Records button.
https://docs.glenbard.org/index.php/ps-2/adding-students-to-school-store-groups-thru-powerschool/
2021-04-10T19:26:51
CC-MAIN-2021-17
1618038057476.6
[array(['https://docs.glenbard.org/wp-content/uploads/2018/06/SchoolStorePsGroups-300x164.jpg', None], dtype=object) ]
docs.glenbard.org
New Relic's query language, NRQL, has rate limits in place to ensure a high level of availability and reliability for all users. To understand the places NRQL can be used, see Where is NRQL used?. You will rarely encounter rate limiting, especially if you follow. Limits on queried events When you run a NRQL query, it will display the number of events inspected, as shown below: In this context, "events" is used in a general sense to refer to all NRQL-available objects; this includes events, metrics, logs, and distributed tracing (span) data. Each New Relic account has limits on the total number of events that can be inspected. There are limits that apply over two different time frames: - A rolling 30-minute time window - A 24-hour period These limits are as follows: Once the limit has been reached for a given time period, limiting will be imposed and some queries may be impacted. After the time period has passed, if query volume drops below the limit, restrictions will be removed automatically. NRQL query rate limits The limit on NRQL queries is 50 queries per second, or 3000 queries per minute. Past this, New Relic cannot guarantee query performance, and you may be rate limited. Limits on count of data types The limit for total number of reported data types is 250 per account over a given 24-hour time period. If a user exceeds this limit, New Relic may filter or drop data. This limit applies to all NRQL-queryable data types. Because there aren't that many different data types reported by New Relic products and integrations, this will mainly be a limit on custom。
https://docs.newrelic.com/jp/docs/query-your-data/nrql-new-relic-query-language/get-started/rate-limits-nrql-queries/
2021-04-10T19:38:01
CC-MAIN-2021-17
1618038057476.6
[array(['/static/a67951798a5b60f8aca1b4aac861f61a/466da/insights-inspected-event-count-modal_0.png', 'New Relic inspected event count New Relic inspected event count'], dtype=object) ]
docs.newrelic.com
This section describes how to configure an IP address and default gateway to your DNS/DHCP Server. The procedure for configuring a DNS/DHCP Server and adding it to Address Manager will vary according to the number of interfaces on your DNS/DHCP Server appliance, and the number of interfaces that you want to utilize. Each DNS/DHCP Server interface (including eth1) should be on a separate network to avoid any potential issues. BlueCat DNS/DHCP Servers are available in the following configurations: - 2-port DNS/DHCP Server - 3-port DNS/DHCP Server - 3-port DNS/DHCP Server VM - 4-port DNS/DHCP Server The following table describes the interfaces that are being used by different types of DNS/DHCP Server appliances. - DNS/DHCP Server features support for Dedicated Management on 3 and 4-port DNS/DHCP Server appliances and 3-port DNS/DHCP Server virtual machines, isolating all management traffic onto the eth2 interface and all services traffic onto the eth0 interface. - DNS/DHCP Server appliances with four network interfaces can be configured for Services (eth0), xHA (eth1), Management (eth2), and Redundancy (eth3) through port bonding (bond0=eth0 + eth3). DNS/DHCP Server VMs can be configured with three network interfaces to support Services, xHA, and Management. - DNS/DHCP Servers can be configured with multiple VLAN interfaces. For details, refer to VLAN Tagging.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Getting-started-with-DNS/DHCP-Servers/8.2.0
2021-04-10T19:54:32
CC-MAIN-2021-17
1618038057476.6
[]
docs.bluecatnetworks.com