content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
CLR Inside Out
CLR Hosting APIs
Alessandro Catorcini and Piotr Puszkiewicz
Contents
Hosting Managers
Memory Manager
Threading Manager
Synchronization Manager
I/O Completion Manager
Assembly Load Manager
CLR Configuration Manager
Using the Hosting APIs
Conclusion
Suppose you are developing a large application in native C++ and you want to allow your customers to extend this app so they can mold it to their needs. Allowing your customers to write the extensions in managed code inside the Microsoft® .NET Framework would make their development experiences much smoother than if they had to work with native code, but loading the common language runtime (CLR) into your application's process can be worrisome. What if an uncaught CLR exception ends up killing the application's process? What if the managed extensions used more memory than your application was willing to give up? You could solve these problems by launching the runtime in another process, but moving large amounts of data across process boundaries could soon become prohibitively expensive. Is there any way to take advantage of the benefits of managed code without such expenses?
These issues quite frequently arise in the development of extensible applications. The CLR provides a variety of functionality, resources, and libraries to reduce the cost of building extensions. However, developers of such extensible applications are understandably wary of allowing potentially unreliable and resource-hungry third-party code to execute inside of their applications. Fortunately, the CLR 2.0 hosting APIs allow application developers to control the CLR's resource consumption and to guarantee a specific level of reliability for code running within the CLR. By using the hosting APIs, developers of native hosts can execute managed code in-process with complete knowledge and control over how the CLR behavior can affect their application.
The CLR has always allowed a level of integration between itself and a host. In the .NET Framework 1.x, native applications were able to load a specific version of the CLR, to start and stop its execution, and to configure a limited number of settings. In version 2.0 of the Framework, the CLR allows for a much deeper integration. The updated hosting APIs provide layers of abstraction that let the host manage many of the resources currently provided by Win32®. Furthermore, the updated APIs extend the set of CLR functionality that is configurable by the host.
Hosting Managers
The CLR 2.0 hosting APIs are split into two primary sets: host managers and CLR managers. Host manager interfaces have names prefixed with IHost, and follow with a function description (as in IHostMemoryManager and IHostSecurityManager). The implementations of the host manager interfaces are provided by the developer of the host and are registered with the CLR, which uses these interfaces to make callbacks against the host for the duration of the process lifetime.
The CLR manager interfaces are prefixed with ICLR, and what follows ICLR in the name describes the function of the manager (as in ICLRTaskManager and ICLRSyncManager). CLR managers are handed from the CLR to the host and are implemented by the CLR. The host uses these interfaces to request that the CLR perform some specific action (for example, a host can use the provided ICLRGCManager to force a garbage collection).
For the purposes of this article, we'll group the CLR managers and host managers according to their intended uses, which will help clarify the key benefits of using the hosting APIs. We will include a list of the associated ICLR* and IHost* interfaces and their descriptions when covering an area of functionality.
Memory Manager
The memory manager lets the host provide an interface through which the CLR will request all memory allocations. It replaces both the Windows® memory APIs and the standard C CLR allocation routines. Moreover, the interface allows the CLR to inform the host of the consequences of failing a particular allocation (for example, failing a memory allocation from a thread holding a lock may have certain reliability consequences). It also permits the host to customize the CLR's response to a failed allocation, ranging from an OutOfMemoryException being thrown all the way up through the process being torn down. The host can also use this manager to recapture memory from the CLR by unloading unused app domains and forcing garbage collection. The memory manager interfaces are listed in Figure 1.
Figure 1 Memory Manager Interfaces
SQL Server™ 2005 demonstrates the usefulness of the memory manager hosting APIs. SQL Server 2005 operates within a configurable amount of memory and is often set to use nearly all of the physical memory on the machine. To maximize performance, SQL Server 2005 tracks all memory allocations in an attempt to ensure that paging never occurs. The server would rather fail a memory allocation than page to disk. To accurately track all allocations, SQL Server uses the memory manager hosting APIs such that any memory allocation requests by the CLR are made through SQL Server rather than directly to Windows. This gives SQL Server the ability to fail CLR allocation requests before paging occurs.
Threading Manager
The new hosting APIs abstract the notion of a Win32 thread and essentially let the host define the unit of scheduling and execution. The hosting APIs use the term "Task" to define this abstraction. As part of abstracting Win32 threads, tasks provide a variety of methods for modifying the CLR.
The threading manager can:
- Allow the host to provide an interface that the CLR will use to create and start new tasks.
- Provide the host with a mechanism to "reuse" or pool the CLR-implemented portion of a task.
- Support standard operations like start, abort, join, and alert.
- Implement a callback to notify the CLR when a task has been moved to or from a runnable state. When a call is moved from a runnable state, the CLR must be able to specify that the task should be rescheduled as soon as possible.
- Provide a way for the CLR to notify the host that a given task cannot be moved to a different physical OS thread and cannot have its execution blocked during a specified window.
- Allow the host to provide an implementation of the thread pool. The CLR must be able to queue work items, set and query the size of the thread pool, and so on.
- Provide notifications on both the host and CLR sides that the locale has been changed on a given task.
- Provide a means for the CLR (and user code) to adjust the priority of a task.
Figure 2 shows the threading manager interfaces.
Figure 2 Threading Manager Interfaces
Synchronization Manager
While letting the host create tasks is sufficient for many multithreaded applications, some hosts also require the ability to override the CLR's synchronization primitives. This ensures that locks are not taken on an operating system thread without the host's knowledge, and it allows CLR tasks to further integrate with the host's scheduling mechanism as well as permitting the host to perform deadlock detection.
Using the synchronization manager allows the host to provide implementations for several synchronization primitives to the CLR, including critical sections, events (both manual and auto-reset), semaphores, monitors, and reader/writer locks. See the synchronization manager interfaces in Figure 3.
Figure 3 Synchronization Manager Interfaces
I/O Completion Manager
The I/O completion manager lets the host provide a custom port implementation for the CLR to use in place of the default Windows I/O completion port functionality. The host provides a way for the CLR to bind a handle to an I/O completion port. In return, the CLR supplies a callback to be invoked by the host when an asynchronous I/O operation completes. In addition, this manager also allows the host to insert custom data at the end of the OVERLAPPED structure passed to the I/O routines. Figure 4 shows the I/O completion manager interfaces.
Figure 4 I/O Completion Manager Interfaces
Assembly Load Manager
Hosts relying on the old hosting APIs customized assembly-loading behavior by catching events on System.AppDomain that are thrown when an assembly, resource, or type cannot be found (AssemblyResolveEvent, ResourceResolveEvent, and TypeResolveEvent). The requested assembly was then loaded by the host using the overload of Assembly.Load that accepts a byte array and is passed back from the event. This approach has proven insufficient for a few reasons, including performance. Loading an assembly by specifying a managed byte array involves copying the bytes multiple times between managed and unmanaged memory before the assembly can be run by the host.
In version 2.0 of the hosting APIs, the host specifies which assemblies are to be loaded from the Global Assembly Cache (GAC) and fulfills all other requests directly. When binding to an assembly, the host can return the assembly as a pointer to unmanaged memory (an unmanaged IStream *). In effect, this allows a host to implement a custom assembly store. In fact, this is the primary reason that SQL Server 2005 implements its own custom assembly load manager. The assemblies that make up a database application in SQL Server 2005 are physically stored in the SQL Server database rather than as separate files on disk. SQL Server 2005 uses the assembly load manager APIs to efficiently load application assemblies out of the database while continuing to rely on the usual .NET Framework mechanisms to load the assemblies. Figure 5 shows the assembly load manager interfaces.
Figure 5 Assembly Load Manager Interfaces
CLR Configuration Manager
The CLR configuration manager gives access to interfaces (see Figure 6) that allow the host to configure several aspects of the CLR, including logically grouping tasks to simplify debugging, setting a custom heap dump configuration, registering callbacks for important CLR events, blocking certain functionality from being loaded into the CLR, and setting a critical failure escalation policy.
Figure 6 CLR Configuration Manager Interfaces
The concept of an escalation policy deserves some explanation. Critical failures include resource allocation failures (such as out-of-memory conditions) and asynchronous failures (such as stack overflows), and every type of failure has a default behavior associated with it, ranging from throwing an exception through process termination. In version 2.0, the CLR enables a host to override the default behavior when a critical failure occurs. Advanced hosts can define a chain of events that will be tried in a sequence determined by the host to respond to failures. For example, a host may decide that every out-of-memory failure will translate into aborting the thread or unloading the AppDomain.
SQL Server 2005 uses escalation policy in part to ensure process stability by escalating thread aborts to AppDomain unloads when resource failures occur in areas of code that could affect multiple tasks. For example, suppose a task that's holding a lock receives a failure when trying to allocate memory. In this scenario, aborting just the current task is not sufficient to ensure stability of the AppDomain because there may be other tasks in the domain waiting on the same lock or expecting the data that was being manipulated to be in a consistent state. For more information on escalation policies and reliability, see Stephen Toub's article "High Availability: Keep Your Code Running with the Reliability Features of the .NET Framework" in the October 2005 issue of MSDN®Magazine.
Using the Hosting APIs
In his article "No More Hangs: Advanced Techniques to Avoid and Detect Deadlocks in .NET Apps" in the April 2006 issue of MSDN Magazine, Joe Duffy developed a thin host that intercepted thread and synchronization primitive operations to perform deadlock detection (the source code is available at msdn.microsoft.com/msdnmag/issues/06/04/Deadlocks). The code shows how to use the hosting APIs.
The code in Figure 7 is similar to what you'll need in every host you write. It loads the CLR into the process, starts it, and loads an assembly into the default AppDomain, invoking a method on it. (Error checking code is omitted for clarity.)
Figure 7 Typical Host Main Method); }
The first task in hosting the CLR is getting a pointer to the ICLRRuntimeHost interface from the CLR:CLRHost, IID_ICLRRuntimeHost, (PVOID*)&pClrHost);
In the .NET Framework 1.x you'd use ICorCLRHost to communicate between the CLR and the host; in version 2.0, ICLRRuntimeHost is the recommended interface, as it allows you to access all the new opt-in functionality described here. Among other functions, ICLRRuntimeHost facilitates the exchange of hosting interface implementations between the CLR and the host. ICLRRuntimeHost::SetCLRControl allows the host to notify the CLR that some host-defined services need to be hooked up to override the CLR's default behavior. Our example uses the DHHostControl class, which implements IHostControl, to override some of the CLR's default managers:
DHHostControl *pHostControl = new DHHostControl(pClrHost); pClrHost->SetHostControl(pHostControl);
Once the host successfully calls SetHostControl, the CLR will use any interfaces defined in the host's IHostControl implementation in place of the default actions. In this example, DHHostControl overrides IHostTaskManager and IHostSyncManager by providing two classes in DHHostControl that inherit from these interfaces.
ICLRRuntimeHost also allows the host to get pointers to the CLR's implementation of the ICLR* interfaces. This is done by calling the ICLRRuntimeHost::GetCLRControl method to get a pointer to an ICLRControl object. ICLRControl lets the host access all other ICLR* interfaces through which the host can request specific actions from the CLR. Since the sample application is monitoring the CLR, it does not use the ICLR* interfaces.
Once you've loaded and started the CLR, the easiest way to load and run your managed code is to use the ExecuteInDefaultApp-Domain method on ICLRRuntimeHost. This will simply load a managed assembly and execute a method on it:
HRESULT hrExecute = pClrHost->ExecuteInDefaultAppDomain( pwzAssemblyPath, pwzAssemblyName, pwzMethodName, pwzMethodArgs, &retVal);
Conclusion
Unmanaged hosts are able to control very fine-grained details of the internal workings of the CLR. Using the new .NET Framework 2.0 hosting APIs, a host can, in fact, place itself between the CLR and the operating system and broker any request from the CLR. You can find more information in the online documentation of the hosting APIs and in the book Customizing the Microsoft .NET Framework Common Language Runtime by Steven Pratschner (Microsoft Press®, 2005).
Send your questions and comments to [email protected].
Alessandro Catorcini is a lead program manager in the CLR team. In Visual Studio 2005 he was responsible for the hosting API layer and CLR integration into SQL Server 2005.
Piotr Puszkiewiczis a program manager in the CLR team. He is currently responsible for the hosting API, the CLR exception system, and Managed Debugging Assistants. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2006/august/clr-inside-out-clr-hosting-apis | 2020-05-25T14:59:48 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.microsoft.com |
Geo¶
The Core module provides a form-field of the type “Location”. This form-field should be used whenever a Location is entered. In the Core the type “location” is an alias to a standard form field type “text”. So if the Geo module is inactive, a normal “text” field is used and nothing else happens.
But the Geo Module does a little bit more than just autocomplete the location. It uses a geo location service to enrich the entered data with geo coordinates and informations about the country and the region. If you enter “Frankfurt am Main”, the location will be defined as:
city = Frankfurt am Main region = Hessen Country = Germany corrdinates = [8.6820934,50.1106529], type:"Point"
This makes it possible to use the distance feature, when searching e.g. for jobs. The Geo module currently ca use two different geo location services.
What’s the differences between those services.
The Geo module can be easily configured to use one of the geo services by copying and modifying the Geo/config/Geo.options.local.php to the autoload directory of you YAWIK installation
<?php /** * Name of the used geo coder plugin. You can use 'photon' or 'geo'. Photon is recommended. */ $plugin = 'photon'; /** * Location of your geo coder server. If unsure, leave it unchanged. Possible values are: * - * - */ $geoCoderUrl = ''; // // Do not change below this line! // return [ 'options' => [ 'Geo/Options' => [ 'options' => [ 'plugin' => $plugin, 'geoCoderUrl' => $geoCoderUrl, ], ], ], ];
It it possible to configure | https://yawik.readthedocs.io/en/latest/modules/geo/index.html | 2020-05-25T15:10:07 | CC-MAIN-2020-24 | 1590347388758.12 | [] | yawik.readthedocs.io |
Open Access License
The USEFUL online journal is an open access journal. Articles published in USEFUL online journal are freely accessible to the global public under the terms of the Creative Commons Attribution License (), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The DSpace at USEFUL.academy repositories will contain documents that will be made available via open access to. | https://docs.useful.academy/open-access-license/ | 2020-05-25T14:44:38 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.useful.academy |
FAQ: General¶
File Upload does not work?¶
when trying to upload a file, the status wheel turns forever. Uploaded file is not stored.
This happens, if a javascript error occurse. You can only debug such a problem by using firebug or comparable developer tools. The MimeType of uploaded files is checked by default using the libmagic. Please make sure that:
- the fileinfo extention exists. On FreeBSD, this extension has to be installed. On Linux, this extention is normally included by default. Check it by:
php -m | grep fileinfo
- Make sure, your Webserver can access
/usr/share/misc/magic*. These files are referenced by YAWIK/vendor/zendframework/zend-validator/src/File/MimeType.php
- make sure the access is not restricted by an open_basedir setting
File Upload shows “An unknown error occured” on large files¶
When the upload seems to work, but at the end, it shows an “An unknown error occured”, and in the
log/error.log appears a line like
“ERR POST Content-Length of 16414047 bytes exceeds the limit of 8388608 bytes (errno 2)”
you should check that you set all required configuration values.
- The allowed max size must be set in the yawik configurations .e.g. for attachments in an application the option ‘attachmentsMaxSize’ in the file
config/autoload/applications.forms.global.phpmust be set appropriatly.
- The php.ini value of ‘upload_max_size’ must also be set accordingly. Either in the php.ini or (for apache) via ‘php_admin_value’
- Do not forget the ‘post_max_size’ php.ini option. | https://yawik.readthedocs.io/en/latest/faq/faq-core.html | 2020-05-25T13:01:52 | CC-MAIN-2020-24 | 1590347388758.12 | [] | yawik.readthedocs.io |
Troubleshooting tips¶
Diagnose: Customer complains they receive a HTTP status 500 when trying to browse containers¶
This entry is prompted by a real customer issue and exclusively focused on how
that problem was identified.
There are many reasons why a http status of 500 could be returned. If
there are no obvious problems with the swift object store, then it may
be necessary to take a closer look at the users transactions.
After finding the users swift account, you can
search the swift proxy logs on each swift proxy server for
transactions from this user. The linux
bzgrep command can be used to
search all the proxy log files on a node including the
.bz2 compressed
files. For example:
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ -w <redacted>.68.[4-11,132-139 4-11,132-139],<redacted>.132.[4-11,132-139] \ 'sudo bzgrep -w AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log*' | dshbak -c . . ---------------- <redacted>.132.6 ---------------- Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server <redacted>.16.132 <redacted>.66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af /%3Fformat%3Djson HTTP/1.0 404 - - <REDACTED>_4f4d50c5e4b064d88bd7ab82 - - - tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130
This shows a
GET operation on the users account.
Note
The HTTP status returned is 404, Not found, rather than 500 as reported by the user.
Using the transaction ID,
tx429fc3be354f434ab7f9c6c4206c1dc3 you can
search the swift object servers log files for this transaction ID:
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ -w <redacted>.72.[4-67|4-67],<redacted>.[4-67|4-67],<redacted>.[4-67|4-67],<redacted>.204.[4-131] \ 'sudo bzgrep tx429fc3be354f434ab7f9c6c4206c1dc3 /var/log/swift/server.log*' | dshbak -c . . ---------------- <redacted>.72.16 ---------------- Feb 29 08:51:57 sw-aw2az1-object013 account-server <redacted>.132.6 - - [29/Feb/2012:08:51:57 +0000|] "GET /disk9/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" 404 - "tx429fc3be354f434ab7f9c6c4206c1dc3" "-" "-" 0.0016 "" ---------------- <redacted>.31 ---------------- Feb 29 08:51:57 node-az2-object06011 "" ---------------- <redacted>.204.70 ---------------- Feb 29 08:51:57 sw-aw2az3-object006714 ""
Note
The 3 GET operations to 3 different object servers that hold the 3
replicas of this users account. Each
GET returns a HTTP status of 404,
Not found.
Next, use the
swift-get-nodes command to determine exactly where the
user’s account data is stored:
$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_redacted-4962-4692-98fb-52ddda82a5af Account AUTH_redacted-4962-4692-98fb-52ddda82a5af Container None Object None Partition 198875 Hash 1846d99185f8a0edaf65cfbf37439696 Server:Port Device <redacted>.31:6202 disk6 Server:Port Device <redacted>.204.70:6202 disk6 Server:Port Device <redacted>.72.16:6202 disk9 Server:Port Device <redacted>.204.64:6202 disk11 [Handoff] Server:Port Device <redacted>.26:6202 disk11 [Handoff] Server:Port Device <redacted>.72.27:6202 disk11 [Handoff] curl -I -XHEAD "`http://<redacted>.31:6202/disk6/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ curl -I -XHEAD "`http://<redacted>.204.70:6202/disk6/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ curl -I -XHEAD "`http://<redacted>.72.16:6202/disk9/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ curl -I -XHEAD "`http://<redacted>.204.64:6202/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ # [Handoff] curl -I -XHEAD "`http://<redacted>.26:6202/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ # [Handoff] curl -I -XHEAD "`http://<redacted>.72.27:6202/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" <>`_ # [Handoff] ssh <redacted>.31 "ls -lah /srv/node/disk6/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh <redacted>.204.70 "ls -lah /srv/node/disk6/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh <redacted>.72.16 "ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh <redacted>.204.64 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff] ssh <redacted>.26 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff] ssh <redacted>.72.27 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff]
Check each of the primary servers, <redacted>.31, <redacted>.204.70 and <redacted>.72.16, for this users account. For example on <redacted>.72.16:
$ ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/ total 1.0M drwxrwxrwx 2 swift swift 98 2012-02-23 14:49 . drwxrwxrwx 3 swift swift 45 2012-02-03 23:28 .. -rw------- 1 swift swift 15K 2012-02-23 14:49 1846d99185f8a0edaf65cfbf37439696.db -rw-rw-rw- 1 swift swift 0 2012-02-23 14:49 1846d99185f8a0edaf65cfbf37439696.db.pending
So this users account db, an sqlite db is present. Use sqlite to checkout the account:
$ sudo cp /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/1846d99185f8a0edaf65cfbf37439696.db /tmp $ sudo sqlite3 /tmp/1846d99185f8a0edaf65cfbf37439696.db sqlite> .mode line sqlite> select * from account_stat; account = AUTH_redacted-4962-4692-98fb-52ddda82a5af created_at = 1328311738.42190 put_timestamp = 1330000873.61411 delete_timestamp = 1330001026.00514 container_count = 0 object_count = 0 bytes_used = 0 hash = eb7e5d0ea3544d9def940b19114e8b43 id = 2de8c8a8-cef9-4a94-a421-2f845802fe90 status = DELETED status_changed_at = 1330001026.00514 metadata =
Next try and find the
DELETE operation for this account in the proxy
server logs:
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ -w <redacted>.68.[4-11,132-139 4-11,132-139],<redacted>.132.[4-11,132-139|4-11,132-139] \ 'sudo bzgrep AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log* \ | grep -w DELETE | awk "{print $3,$10,$12}"' |- dshbak -c . . Feb 23 12:43:46 sw-aw2az2-proxy001 proxy-server <redacted> <redacted>.66.7 23/Feb/2012/12/43/46 DELETE /v1.0/AUTH_redacted-4962-4692-98fb- 52ddda82a5af/ HTTP/1.0 204 - Apache-HttpClient/4.1.2%20%28java%201.5%29 <REDACTED>_4f458ee4e4b02a869c3aad02 - - - tx4471188b0b87406899973d297c55ab53 - 0.0086
From this you can see the operation that resulted in the account being deleted.
Procedure: Deleting objects¶
Simple case - deleting small number of objects and containers¶
Note
swift-direct is specific to the Hewlett Packard Enterprise Helion Public Cloud.
Use
swiftly as an alternative.
Note
Object and container names are in UTF8. Swift direct accepts UTF8 directly, not URL-encoded UTF8 (the REST API expects UTF8 and then URL-encoded). In practice cut and paste of foreign language strings to a terminal window will produce the right result.
Hint: Use the
head command before any destructive commands.
To delete a small number of objects, log into any proxy node and proceed as follows:
Examine the object in question:
$ sudo -u swift /opt/hp/swift/bin/swift-direct head 132345678912345 container_name obj_name
See if
X-Object-Manifest or
X-Static-Large-Object is set,
then this is the manifest object and segment objects may be in another
container.
If the
X-Object-Manifest attribute is set, you need to find the
name of the objects this means it is a DLO. For example,
if
X-Object-Manifest is
container2/seg-blah, list the contents
of the container container2 as follows:
$ sudo -u swift /opt/hp/swift/bin/swift-direct show 132345678912345 container2
Pick out the objects whose names start with
seg-blah.
Delete the segment objects as follows:
$ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah01 $ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah02 etc
If
X-Static-Large-Object is set, you need to read the contents. Do this by:
Using swift-get-nodes to get the details of the object’s location.
Change the
-X HEADto
-X GETand run
curlagainst one copy.
This lists a JSON body listing containers and object names
Delete the objects as described above for DLO segments
Once the segments are deleted, you can delete the object using
swift-direct as described above.
Finally, use
swift-direct to delete the container.
Procedure: Decommissioning swift nodes¶
Should Swift nodes need to be decommissioned (e.g.,, where they are being re-purposed), it is very important to follow the following steps.
In the case of object servers, follow the procedure for removing the node from the rings.
In the case of swift proxy servers, have the network team remove the node from the load balancers.
Open a network ticket to have the node removed from network firewalls.
Make sure that you remove the
/etc/swiftdirectory and everything in it. | https://docs.openstack.org/swift/latest/ops_runbook/troubleshooting.html | 2020-05-25T15:40:33 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.openstack.org |
Glue42 Search Service
Overview
The Glue42 Search Service (GSS) allows desktop applications to search arbitrarily among complex items across multiple search providers. The items can be entities, such as "Clients", "Instruments", "Accounts" etc. Each entity is associated with a set of fields, where each field has:
- `name` - e.g., `id` etc.;
- `type`:
  - scalar types: `Boolean`, `Int`, `Long`, `Double`, `String`, `DateTime`;
  - `Composite`, which means that the field defines an object (defined again as a sequence of fields);
- `searchType` - `None`, `Partial`, `Exact` or `Both` (supports `Exact` and `Partial` matching);
- `isArray` - a flag indicating that the field is an array;
- `displayName`;
- `description`;
- `metadata` - an object with arbitrary key/value pairs;
`Composite` combined with the `isArray` flag allows for the definition of regular (non-cyclic, directed) object graphs. Both the `Composite` type and the `isArray` flag implicitly turn off searching on that field.
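The attributes above can be modeled as a plain object. The sketch below is illustrative only — the field attributes mirror the descriptors just listed, but the exact schema GSS uses may differ:

```javascript
// Illustrative sketch of a "Client" entity type descriptor.
// The property names mirror the field attributes described above
// (name, type, searchType, isArray, ...); the real GSS schema may differ.
const clientEntityType = {
  name: "Client",
  fields: [
    { name: "id", type: "String", searchType: "Exact", isArray: false },
    { name: "firstName", type: "String", searchType: "Both", displayName: "First Name" },
    { name: "lastName", type: "String", searchType: "Both", displayName: "Last Name" },
    {
      name: "accounts",          // Composite + isArray => an array of objects;
      type: "Composite",         // searching is implicitly turned off here
      isArray: true,
      searchType: "None",
      fields: [
        { name: "accountId", type: "String", searchType: "Exact" },
        { name: "balance", type: "Double", searchType: "None" }
      ]
    }
  ]
};

// Searchable fields are those whose searchType is not "None"
// and which are neither Composite nor arrays.
const searchable = clientEntityType.fields
  .filter(f => f.searchType !== "None" && f.type !== "Composite" && !f.isArray)
  .map(f => f.name);

console.log(searchable); // → ["id", "firstName", "lastName"]
```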
Architectural Diagram
Dedicated JavaScript API
Client applications don't talk to providers directly. Similarly to the Glue42 Notification Service, clients send requests to the GSS Desktop Manager application and receive results over Interop streams. The GSS Desktop Manager is responsible for multiplexing the requests to the providers and demultiplexing the results back to the client applications. It can also optionally implement local caching with configurable expiration.
JavaScript client applications talk to the GSS Desktop Manager using a JavaScript API which internally utilizes Interop streaming. The JavaScript API exposes interfaces for searching (`GlueSearchService`) and for creating search providers (`GlueSearchProvider`).
Interop and REST Protocols
GSS providers can use two protocols to provide search results - Interop (subscription/push-based) and REST (request-based).
The GSS Desktop Manager configuration is typically held in the Configuration Manager and can be configured with user-specific overrides and modified during runtime. As with the Glue42 Notification Service, local file configuration can also be used for development/test/demo purposes.
Multiple Search Providers
There could be multiple search providers for each entity type - "Clients" can be provided by Salesforce or Microsoft Dynamics, as well as by Outlook. Providers are not required to offer the same set of entity fields. That is to say, the combination of all fields from all providers defines the entity type. However, all providers must use the same field descriptors for a given entity type they support. The only exception is the search type, where providers can specify a different search type for any of the fields.
Search Mechanics
A client application issues a search query for an entity type, specifying one or more search fields and their values (selection), and can optionally limit the maximum number of entries that each provider should return, and the set of entity fields (projection) that are required in the result.
The query filter is a set of key/value pairs, where the key is the name of one of the entity type fields, and the value is the value to search for.
Depending on the search type, the relevant providers will perform exact or partial matching (it is an error to pass a field in the search filter if the field's search type is `None`). Client applications can optionally pass both a field value and the requested search type. In case the client wants the search to be performed on all searchable fields, or the client knows that the providers offer full text search, the client can specify a field `ANY`, which will be dispatched to all providers.
Both the entity type and the filter are required, and any filtering is done on the provider side.
Full Text Search
A provider can add a special field, called ANY, to an entity type definition, which allows clients to execute a partial search against all indexed/searchable fields for that entity, or a full text search, if the provider supports that.
Multiple Asynchronous Results from Multiple Providers
When a search is performed, the GSS Desktop Manager looks at the set of fields and finds the appropriate search providers. It then runs parallel queries against these providers. As each provider returns or starts streaming data, the GSS Desktop Manager will start streaming the results back to the client application.
The results from a search query are returned asynchronously, and it is expected that different providers will return results with different latencies. Once all the results from all the providers are delivered to the client application, the search query fires a completion callback to indicate that the application should not expect more results. A provider can send multiple results for a single query if it is multiplexing the requests itself, so the client should expect more than one result from a provider.
Since one query can return multiple results from multiple providers, each result is tagged with the provider name and carries a flag indicating whether this is the last result from this provider (in case the provider batches results or searches across various sources of data internally). Typically, the client code does not need to be aware of the number of providers.
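The flow above can be sketched on the client side as follows (the result shape { provider, isLast, entries } is an assumption based on this description, not the exact GSS API):

```javascript
// Collect entries per provider and track which providers have sent
// their final batch for the current query.
function createResultLog() {
    const entriesByProvider = {};
    const finished = new Set();
    return {
        onResult(result) {
            // append this batch's entries under the provider's name
            const list = entriesByProvider[result.provider] ||
                (entriesByProvider[result.provider] = []);
            list.push(...result.entries);
            // isLast marks the final batch from this provider
            if (result.isLast) {
                finished.add(result.provider);
            }
        },
        entriesFrom(provider) {
            return entriesByProvider[provider] || [];
        },
        isFinished(provider) {
            return finished.has(provider);
        }
    };
}
```

In practice the completion callback already signals "no more results", so client code rarely needs to track providers itself; this sketch only shows how the per-result tags could be consumed.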
Search Cancellation and Time Out
At any time, a client application can cancel a search request or change the query filter (e.g., the user typing a few more characters).
Also, a client can specify a search timeout - globally or per query. If a search request times out (regardless of whether data was received), the query state will become "timed out" and the completion callback will fire.
Limitations
- No filtering is performed at the side of the GSS Desktop Manager.
- No attempt is made to provide joins across entity types in GSS.
- Transformations and aliasing in a projection are not supported by GSS.
- No conflation or throttling is performed in the GSS Desktop Manager or by the JavaScript API.
References
GSS REST API Swagger Spec
Writing a GSS Search Provider
Overview
The GSS search provider can handle queries for one or more entity types.
Creating a Provider
Since the JavaScript GSS library is not part of Glue42, to create a GSS provider, instantiate GlueSearchProvider, passing to it a reference to the Glue42 Interop library:

// creating a provider
const provider = new gss.GlueSearchProvider(glue.interop);
In order for the provider to connect to the GSS Desktop Manager, you need to call the start() method. It returns a Promise that resolves with the provider, but it also accepts an optional callback in case you cannot use Promises (in the Glue42 Browser you can use Promises):
function (error: Error, result: GlueSearchProvider);
Example:
provider.start()
    .then((/* already have a ref (provider) */) => {
        // register entity type handler(s)
    })
    .catch((err) => {
        // deal with the error
    });
Without using a Promise:

provider.start((err, result) => {
    if (err) {
        // deal with the error
    } else {
        // register entity type handler(s) on result (or provider)
    }
});
Registering Entity Types
In order to handle search requests for an entity type, the provider needs to register a handler for each entity type it supports by calling addEntityType() on the provider instance. The addEntityType() method requires a reference to an entity type object and a callback function which will be called with the search request when a search is performed.
You can create an entity type using the classes GssEntityType and GssField. However, the library provides an easier way of building an entity type from a JavaScript (or JSON) object - GssEntityType.fromJS(jsObject).
Example:
// define an entity type as a JavaScript (or JSON) object
const partyEntityType = {
    "name": "Party",        // entity type name
    "properties": [         // entity fields
        {
            "name": "partyType",
            "type": "String",
            "searchType": "Exact",
            "displayName": "Party Type",
            "description": "specifies types of parties ('Client', 'Prospect', 'Lead', 'Market Lead', 'Other Contact', 'Authorized Contact') that can be searched...."
        },
        {
            "name": "fullName",
            "type": "String",
            "searchType": "Partial",
            "displayName": "Full Name",
            "description": "Full Name of party."
        },
        {
            "name": "advisor",
            "type": "Composite",
            "properties": [
                { "name": "fullName", "type": "String", "displayName": "Advisor Name" },
                { "name": "webId", "type": "String", "displayName": "Advisor WEBID" }
            ],
            "description": "Details of Banker/Advisor covering the party."
        }
    ]
};
Here is an example of creating the entity type object from the JSON above:

// create a GssEntityType out of the JavaScript/JSON object
const entityType = gss.GssEntityType.fromJS(partyEntityType);
Once you have an entity type, you can register a search handler for it:
// register an entity type handler for the entity type
// the handler is of type function (query: GssQueryRequest)
provider.addEntityType(partyEntityType, handlePartySearch);
Implementing the Search Handler
The search handler function should accept a single argument of type GssQueryRequest. The request object captures the entity type in its entityType property and the search query in its query property. Once the provider is ready to push results, it can use the push() method; if there is an error handling the request, it can use the error() method.
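A minimal handler along these lines might look like the sketch below (the push(data, isLast) signature is inferred from the provider example later in this document; the in-memory lookup is purely illustrative):

```javascript
// Minimal search handler sketch: look up results for the requested
// entity type and push them back as a single, final batch.
const handleSearch = (request) => {
    try {
        // stand-in for a real data source, keyed by entity type name
        const directory = {
            Party: [{ fullName: "John Smith", partyType: "Client" }]
        };
        const hits = directory[request.entityType.name] || [];
        request.push(hits, true); // true (assumed): last batch from this provider
    } catch (e) {
        // report failures back to the GSS Desktop Manager
        request.error(e);
    }
};
```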
GSS Search Provider Example
Below is an example implementation of a GSS search provider, which performs searches by various identifiers by concurrently calling multiple backend (REST) services:
Defining the entity type (should be in a separate file):
const partyEntityType = {
    "name": "Party",
    "properties": [
        { "name": "partyType", "type": "String", "searchType": "Exact", "displayName": "Party Type", "description": "specifies types of parties ('Client', 'Prospect', 'Lead', 'Market Lead', 'Other Contact', 'Authorized Contact') that can be searched. Herut (a GSS provider) will initially support 'Client' and 'Prospect' but eventually will support all party types." },
        { "name": "portf", "type": "String", "searchType": "Exact", "displayName": "PORTF", "description": "Portfolio identifier." },
        { "name": "netId", "type": "String", "searchType": "Exact", "displayName": "NETID", "description": "Legacy client identifier." },
        { "name": "dynamicsId", "type": "String", "searchType": "Exact", "displayName": "DYNAMICS ID", "description": "Auto generated client Dynamics ID." },
        { "name": "fullName", "type": "String", "searchType": "Partial", "displayName": "Full Name", "description": "Full Name of party." },
        { "name": "salesForceId", "type": "String", "searchType": "Exact", "displayName": "SALESFORCEID", "description": "SalesForce identifier." },
        { "name": "advisor", "type": "Composite", "properties": [ { "name": "fullName", "type": "String", "displayName": "Advisor Name" }, { "name": "webId", "type": "String", "displayName": "Advisor WEBID" } ], "description": "Details of Banker/Advisor covering the party." },
        { "name": "dmDynamics", "type": "String", "searchType": "None", "displayName": "DM DYNAMICS ID", "description": "Auto generated client Dynamics ID for DM." },
        { "name": "dmPortf", "type": "String", "searchType": "None", "displayName": "DM PORTF", "description": "Portfolio identifier for DM." },
        { "name": "dmName", "type": "String", "searchType": "None", "displayName": "DM", "description": "Full Name of DM." },
        { "name": "dmNetId", "type": "String", "searchType": "None", "displayName": "DM NETID", "description": "Legacy client identifier for DM." },
        { "name": "investor", "type": "String", "searchType": "None", "displayName": "Investor", "description": "Full Name of Investor." }
    ]
};
Implementation:
// define the entity type
const entityType = gss.GssEntityType.fromJS(partyEntityType);

// holds number of REST API calls in transit
let apiCount = 0;

// holds API call XHR's (which can be aborted)
const apiCalls = [];

// search results can overlap when using different identifiers; this is a "set"
let uniqueClientsPerPortf = {};

// REST API specifics - the REST API supports searches by various identifiers,
// the only difference being the search field (and its value, populated in the request)
const requestSpecs = [
    { searchKey: "DYNAMICS_SEARCH", searchValueField: "dynamics" },
    { searchKey: "PORTF_SEARCH", searchValueField: "portf" },
    { searchKey: "NETID_SEARCH", searchValueField: "netId" },
    { searchKey: "SALESFORCEID_SEARCH", searchValueField: "salesForceId" },
    { searchKey: "NAME_SEARCH", searchValueField: "firstName" },
    { searchKey: "NAME_SEARCH", searchValueField: "lastName" },
    { searchKey: "ACCOUNT_ID_SEARCH", searchValueField: "accountID" },
];

// max number of concurrent REST API calls
const apiMaxConcurrent = requestSpecs.length;

// entity type registration
const addPartyEntityType = (provider) => provider.addEntityType(partyEntityType, handlePartySearch);

// resets search on each new search request
const clearAll = () => {
    while (apiCalls.length) {
        const xhr = apiCalls.pop();
        xhr.abort();
    }
    apiCount = 0;
    uniqueClientsPerPortf = {};
};

// filtering function discarding duplicates by a certain identifier
const uniquePortf = (client) => !uniqueClientsPerPortf.hasOwnProperty(client.portf);

const getURLParameter = (name) =>
    decodeURIComponent((new RegExp(`[?|&]${name}=([^&;]+?)(&|#|;|$)`).exec(window.location.search) ||
        [undefined, ""])[1].replace(/\+/g, "%20")) || undefined;

// actual GSS search handler
const handlePartySearch = (request) => {
    clearAll();
    // by default, get the search URL from the Glue42 Browser
    let apiUrl = htmlContainer.getContext().url;
    if (typeof apiUrl === "undefined") {
        apiUrl = getURLParameter("url");
    }
    if (typeof apiUrl === "undefined") {
        // make it default to something, if this has not been passed in the context
        apiUrl = "/secure/gcm/gcm-data/rest/client/search";
    }
    console.log(`endpoint URL: ${apiUrl}`);
    // make N parallel search requests, one for each identifier
    requestSpecs.forEach((spec) => {
        const query = {
            searchKey: spec.searchKey,
            gId: null,
            portf: null,
            dynamics: null,
            salesForceId: null,
            firstName: null,
            lastName: null,
            netId: null,
            accountID: null
        };
        query[spec.searchValueField] = request.query.filter[0].value;
        apiCall(request, apiUrl, query);
    });
};

// ~= _.get(obj, path) (lodash)
const getObjectPath = (obj, path) => {
    if (!obj) {
        throw new Error("obj is required");
    }
    const fields = path ? path.split(".") : [];
    if (fields.length == 0) {
        throw new Error("A path with at least 1 field name is required");
    }
    let fieldName;
    let data = obj;
    for (let i = 0; i < fields.length; ++i) {
        fieldName = fields[i];
        data = data[fieldName];
        if (data === null || data === undefined) {
            break;
        }
    }
    return data;
};

// ~= _.set(obj, path, value) (lodash)
const setObjectPath = (obj, path, value) => {
    if (!obj) {
        throw new Error("obj is required");
    }
    const fields = path ? path.split(".") : [];
    if (fields.length == 0) {
        throw new Error("A path with at least 1 field name is required");
    }
    let fieldName;
    let data = obj;
    for (let i = 0; i < fields.length - 1; ++i) {
        fieldName = fields[i];
        if (data[fieldName] === null || data[fieldName] === undefined) {
            data[fieldName] = {};
        }
        data = data[fieldName];
    }
    data[fields[fields.length - 1]] = value;
    return obj;
};

// function mapping REST to GSS fields
const fieldsMapping = (entity) => {
    // mapping <ext.REST> : <GLUE GSS>
    const mappings = {
        banker: "advisor.fullName",
        dynamics: "dynamicsId",
        clientName: "fullName",
        clientType: "partyType"
    };
    const result = Object.getOwnPropertyNames(entity).reduce((soFar, key) => {
        const sourceValue = getObjectPath(entity, key);
        const mappedField = mappings[key] || key;
        setObjectPath(soFar, mappedField, sourceValue);
        return soFar;
    }, {});
    return result;
};

// the second argument marks the result as final once all API calls have returned
const gssPush = (request, data) => request.push(data, apiCount >= apiMaxConcurrent);

// REST call wrapper
const apiCall = (request, apiUrl, apiQuery) => {
    let data;
    const async = true;
    const xhr = new XMLHttpRequest();
    xhr.open("POST", apiUrl, async);
    xhr.setRequestHeader("Content-Type", "application/json; charset=UTF-8");
    xhr.onreadystatechange = () => {
        if (xhr.readyState !== XMLHttpRequest.DONE) {
            return;
        }
        apiCount++;
        if (xhr.status === 200 && xhr.getResponseHeader("Content-Type").includes("json")) {
            console.log("Request succeeded with response", xhr.responseText);
            try {
                data = JSON.parse(xhr.responseText);
                if (!data.hasOwnProperty("data")) {
                    throw new Error("Response is missing data property");
                }
                if (!Array.isArray(data.data)) {
                    throw new Error("Data response is not an array of clients");
                }
                data = data.data.map(fieldsMapping).filter(uniquePortf);
                gssPush(request, data);
                // update cache to avoid duplicates
                data.forEach((client) => {
                    uniqueClientsPerPortf[client.portf] = client;
                });
            } catch (e) {
                console.error("Response parse failed", e);
                request.error(e);
            }
        } else {
            console.log("Request unsuccessful", xhr.status, xhr.statusText, xhr.responseText);
            gssPush(request, []);
        }
    };
    xhr.send(JSON.stringify(apiQuery));
    apiCalls.push(xhr);
};

// creating the GSS provider
const provider = new gss.GlueSearchProvider(glue.interop, { debug: true, measureLatency: false });

provider.start() // and starting it
    // registering the entity type search handler
    .then(() => addPartyEntityType(provider))
    .catch(console.error);
The Sensu client gives you visibility into everything you care about; the Sensu server gives you flexible, automated workflows to route metrics and alerts.
- Monitor containers, instances, applications, and on-premises infrastructure
Sensu is. | https://docs.sensu.io/sensu-core/1.8/ | 2020-05-25T14:02:56 | CC-MAIN-2020-24 | 1590347388758.12 | [array(['/images/pipeline.png',
'Sensu lets you take monitoring events from your system and use pipelines to take the right action for your workflow. Sensu event pipeline diagram'],
dtype=object)
array(['/images/system.png', 'Sensu system diagram'], dtype=object)] | docs.sensu.io |
Add-ons
Learn about all the great features and add-ons that make Sprout Invoices so special!
- Client Dashboards
- Predefined Line-Items
- Invoice and Estimate Shortcode Embeds
- Auto Billing & Payment Profiles - Sprout Billings
- HTML Notifications Add-on
- Dynamic Text for Line Items
- Zapier Integration
- PDF Troubleshooting
- Recurring Invoice, and Recurring Payments, and Sprout Billings Clarification
- Dynamic Invoice Submissions
- Stripe ACH Payments
- Projects and Time Tracking
- Payment Terms
- WooCommerce Payments Processor
- Troubleshooting: Zapier Integration
- Troubleshooting PayPal
- WP E-Signature Integration
- Toggl Integration
- Project Panorama Integration
- Slate Theme Installation | https://docs.sproutinvoices.com/category/27-add-ons | 2020-05-25T14:48:20 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.sproutinvoices.com |
DeleteDirectConnectGatewayAssociationProposal
Deletes the association proposal request between the specified Direct Connect gateway and virtual private gateway or transit gateway.
Request Syntax
{
   "proposalId": "string"
}
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- proposalId
The ID of the proposal.
Type: String
Required: Yes
Response Syntax
{
   "directConnectGatewayAssociationProposal": { ... }
}
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- directConnectGatewayAssociationProposal
The ID of the associated gateway.
Type: DirectConnectGatewayAssociationProposal object
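As an illustration of the request shape (this is plain JavaScript, not the AWS SDK; in practice you would use an SDK client or a signed HTTPS request):

```javascript
// Build and validate the JSON request body described in the Request Syntax above.
function buildDeleteProposalRequest(proposalId) {
    // proposalId is the only parameter, and it is required
    if (typeof proposalId !== "string" || proposalId.length === 0) {
        throw new Error("proposalId is required");
    }
    return JSON.stringify({ proposalId });
}

// the resulting string is the body you would POST for this action
const body = buildDeleteProposalRequest("example-proposal-id");
```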
AWS Tools for Windows PowerShell Script Task
Synopsis
Runs a PowerShell script that uses cmdlets from the AWS Tools for Windows PowerShell module. The module is automatically installed if it isn't already available in the environment.
Description
This task accepts a PowerShell command or script that uses cmdlets from the Tools
for Windows PowerShell module to interact with AWS services.
You can specify the script to run via its file name, or you can enter it into the
task
configuration. Before running the supplied script, the task tests to see if the required
Tools for Windows PowerShell module
is already installed. If it isn't installed, the latest available version from the
PowerShell Gallery is installed.
If an installation is performed, the module is installed in the
current user
scope. The location is compatible with automatic module load. As a result, you don't
need to import the module in your script.
Parameters
You can set the following parameters for the task. Required parameters are noted by an asterisk (*). Other parameters are optional.
Display name*
The default name of the task instance, which can be modified: AWS Tools for Windows PowerShell Script.
Arguments
Optional arguments to pass to the script. You can use ordinal or named parameters.
Script Source*
The type of script to run. Choose Script File to run a script that is contained in a file. Choose Inline Script to enter the script to run in the task configuration.
Script Path*
Required if the
Script Source parameter is set to Script File.
Specify the full path to the script you want to run.
Inline Script*
Required if the
Script Source parameter is set to Inline Script. Enter the text of the
script to run.
ErrorActionPreference
Prepends the line $ErrorActionPreference = 'VALUE' at the top of your script.
Advanced
Fail on Standard Error
If this option is selected, the task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream. Otherwise, the task relies on the exit code to determine failure.
Ignore $LASTEXITCODE
If this option is not selected, the line if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE } is appended to the end of your script. This causes the last exit code from an external command to propagate as the exit code of PowerShell. Otherwise, the line is not appended to the end of your script.
Working Directory
The working directory where the script runs.
Task Permissions
Permissions for this task to call AWS service APIs depend on the activities in the supplied script. | https://docs.aws.amazon.com/vsts/latest/userguide/awspowershell-module-script.html | 2020-05-25T15:23:22 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.aws.amazon.com |
Documentation Updates for the Period Ending November 30, 2017
New docs
The following documents are new to the help desk support series as part of our regular documentation update efforts:
- About Fastly's Media Shield
- Basic authentication
- Private Edge Dictionaries
- Working with domains
Recently edited and reviewed docs
The following documents were edited by request, as part of the normal doc review cycle, or as a result of updates to how the Fastly web interface and API operate:
- Adding or modifying headers on HTTP requests and responses
- Glossary of terms
- Image Optimization: Format
- Managing the Fastly WAF
- Miscellaneous VCL extensions
- VCL regular expression cheat sheet
- WAF Rules (API)
- WAF Rule status (API)
- Working with services
New and recently updated Japanese translations
We've recently added Japanese (日本語) translations for the following service guides:
- 大容量ファイルのキャッシュパフォーマンス改善 (Improving caching performance with large files)
The following Japanese (日本語) translations were recently updated to reflect changes in their English counterparts:
- Fastly WAF の運用 (Managing the Fastly WAF)
- Fastly WAF ルールセットの更新とメンテナンス (Fastly WAF rule set updates and maintenance)
Our documentation archive contains PDF snapshots of docs.fastly.com site content as of the above date. Previous updates can be found in the archive as well. | https://docs.fastly.com/changes/2017/11/30/changes | 2020-05-25T14:25:52 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.fastly.com |
Welcome to Beekeeper Data
This is the user documentation site for Beekeeper Data.
Overview
Beekeeper Data is a hosted software platform for writing SQL queries, building reports, and sharing them with co-workers, customers, and executives.
Here's a quick walkthrough of Beekeeper features (developer focused):
Beekeeper Concepts
Beekeeper has several key concepts:
- Reports are collections of information that can be used directly by business users without the help of a developer. Data generated by a report can be customized easily with drop-down selections and input boxes. Reports are configured by developers.
- Dashboards are pages of charts and tables designed to provide a high-level overview of a business, feature, or customer. A dashboard is great for disseminating performance metrics. Dashboards, like reports, are configured by developers.
- Email Reports are like Dashboards -- they are visualizations of useful information that are easy to read. However email reports can be delivered directly to the email inbox of the user, so they don't even need to know how to use Beekeeper. This is great for non-technical users or customers. Email reports must be configured by a developer.
- Binders are collections of reports and dashboards with specific access rights attached to them. For example the 'sales' binder may contain useful reports and dashboards for the sales team.
Getting started
Are you a user or a developer? | https://docs.beekeeperdata.com/ | 2017-06-22T16:36:15 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.beekeeperdata.com |
How can I request a refund for a product I bought a license to?
We honor a 30-day refund policy if you have a valid reason. To request a refund, send an email to [email protected]
Please be sure to state WHY you are asking for a refund and we will consider it. | http://docs.promotelabs.com/article/689-how-can-i-request-a-refund-for-a-product-i-bought-a-license-to | 2017-04-23T11:50:05 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.promotelabs.com |
There are different ways to get started with Camunda BPM. Choose from the following guides:
- BPMN 2.0
- Learn how to model a BPMN 2.0 process using the Camunda Modeler, add a Java Class and HTML Forms. Package it as a web application and deploy it to an Apache Tomcat Server.
- CMMN 1.1
- Learn how to create a CMMN 1.1 Case Definition featuring Human Tasks, Sentries and Milestones. Package it as a web application and deploy it on Apache Tomcat Server.
- DMN 1.1
- Learn how to create a DMN 1.1 decision table using the Camunda Modeler. Package it as a web application and deploy it to an Apache Tomcat Server.
- Spring Framework
- Get started with using Camunda BPM in a Spring Web Application.
- Java EE 7
Get started with using Camunda in a Java EE Web Application. Learn how to use Camunda together with JSF, CDI, EJBs and JPA.
- Apache Maven
- The most commonly used Apache Maven Coordinates for Camunda.
- BPMN 2.0 Roundtrip
- Get started with Camunda cycle and the BPMN 2.0 roundtrip. Learn how to keep process models in sync. | https://docs.camunda.org/get-started/ | 2017-04-23T11:54:14 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.camunda.org |
Retrieving and Filtering GET and POST requests with JRequest::getVar
From Joomla! Documentation
Revision as of 01:33, 18 April 2009 by Dean IconWeb (Talk | contribs)
Summary
When writing any web application, it is crucial that you filter input data before using it. Joomla! provides a set of filtering libraries to help you accomplish this.
JRequest 'getVar' method example (a typical Joomla! 1.5 call; the parameter values are illustrative):

    $name = JRequest::getVar('name', 'default value');
Definition
The class JRequest is defined in the following location.
libraries\joomla\environment\request.php
The JRequest api page can be found here. | https://docs.joomla.org/index.php?title=J1.5:Retrieving_and_Filtering_GET_and_POST_requests_with_JRequest::getVar&oldid=13977 | 2015-06-30T06:38:22 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Revision history of "JDatabaseMySQLi::loadNextObject/1.6"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 20:04, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDatabaseMySQLi::loadNextObject/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDatabaseMySQLi::loadNextObject== ===Description=== Load the next row returned by the query. {{Description:JDatabaseMySQLi::loadNextObject}} <span class="editsection" style="font-size:76%;"> <nowiki>[</nowiki>[...") | https://docs.joomla.org/index.php?title=JDatabaseMySQLi::loadNextObject/1.6&action=history | 2015-06-30T05:38:30 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Dw1Rianto
From Joomla! Documentation
Dear Dw1Rianto, Welcome to Joomla! Documentation!
Yes, welcome! This site is dedicated to documenting Joomla!, the CMS software behind many websites!
You have now been added to documentation as a translator! If you check the page Documentation Translators, you should see your username. Please, take a look at the following pages. They might prove useful to you as a new translator here:
-, or live:thutchison_jr. Just add in your contact request you are a JDOC translator and one of us will add you to the group on Skype.
- Post on a talk page. Either the translation administrator who sent you this welcome or another translator of your language on the Documentation Translators page.
If you have any other questions, please ask me on my talk page. Alternatively, you can go to my user page and click the "gear" icon in the top right and send a private email to me. Once again, welcome, and I hope you quickly feel comfortable here.
Thanks, and regards, Tom Hutchison (talk) 21:12, 22 April 2014 (CDT) | https://docs.joomla.org/User_talk:Dw1Rianto | 2015-06-30T06:41:06 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "How to add an image" From Joomla! Documentation Redirect page Revision as of 04:48, 13 February 2011 (view source)LornaS (Talk | contribs) (How to add an image moved to Add an image: Joomla! 1.5: Refactoring titles for Getting Started series to improve version management) Latest revision as of 11:56, 14 June 2013 (view source) JoomlaWikiBot (Talk | contribs) m (Robot: Fixing double redirect to J1.5:Add an image: Joomla! 1.5) Line 1: Line 1: −#REDIRECT [[Add an image: Joomla! 1.5]]+#REDIRECT [[J1.5:Add an image: Joomla! 1.5]] Latest revision as of 11:56, 14 June 2013 J1.5:Add an image: Joomla! 1.5 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=How_to_add_an_image&diff=prev&oldid=100310 | 2015-06-30T06:18:43 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Information for "Documentation/doc" Basic information Display titleTemplate:Documentation/doc Default sort keyDocumentation/doc Page length (in bytes)2,316 Page ID25216:10, 19 October 2007 Latest editorCirTap (Talk | contribs) Date of latest edit12:47, 9 May 2008 Total number of edits9 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:Documentation subpage (view source) Page transcluded on (1)Template used on this page: Template:Documentation (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Template:Documentation/doc&action=info | 2015-06-30T05:55:28 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Terminology"
From Joomla! Documentation
Revision as of 10:09, 24 June 2011
Contents
Section
This is the topmost "container" for content. The name can appear within user friendly web addresses, so choose the names with care. You can create "blog" type formats that show articles in a tabular format.
Category
Every category lives within a section. Again the name can appear within user friendly web addresses. You can create blog type formats that use a tabular format. In Joomla! 1.5 categories cannot "live" within other categories - but in Joomla! 1.6 this changes.
Articles
Like any subject Joomla! has its own terminology - and ordinary English words may have particular meanings. If people don't understand the terminology they may fail to get the best from Joomla! - a real shame.
Look and Feel Terms
Template
These.
Template Positions
Joomla templates are a series of rectangular boxes - each of which can contain content and have a unique name (template position). The exception is an article, which normally is shown in the part of the screen not taken up by defined positions, e.g. the centre.
Menu
These exist completely independently of any page. This gives considerable flexibility - there can be lots of menus. You can assign articles to one or more "menu items" at will.
Be careful to use a good naming convention, as this can really help find the article appearing at a particular web address.
Menu item
This is where the name of the page (that appears in the web address) is chosen, and where the layout is selected (e.g., Blog, article, wrapper). Some Search Engine Friendly URL systems may allow alternatives to this.
Wrapper
A page layout that allows web pages from external websites to be included without change. This can be good (because it’s quick) and bad (because the look and feel may be very different). | https://docs.joomla.org/index.php?title=Terminology&diff=prev&oldid=59892 | 2015-06-30T06:16:21 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
2011 and onwards Reported Vulnerable Extensions
- 5 Simple File Upload 1.3
- 6 Dshop
- 7 QContacts 1.0.6
- 8 Jobprofile 1.0
- 9 JX Finder 2.0.1
- 10 wdbanners
- 11 JB Captify Content J1.5 and J1.7
- 12 JB Microblog
- 13 JB Slideshow <3.5.1,
- 14 JB Bamboobox
- 15 RokModule
- 16 hm community
- 17 Alameda
- 18 Techfolio 1.0
- 19 Barter Sites 1.3
- 20 Jeema SMS 3.2
- 21 Vik Real Estate 1.0
- 22 yj contact
- 23 NoNumber Framework
- 24 Time Returns
- 25 Simple File Upload
- 26 Jumi
- 27 Joomla content editor
- 28 Google Website Optimizer
- 29 Almond Classifieds
- 30 joomtouch
- 31 RAXO All-mode PRO
- 32 V-portfolio
- 33 obSuggest
- 34 Simple Page
- 35 JE Story
- 36 appointment booking pro
- 37 acajoom
- 38 gTranslate
- 39 alpharegistration
- 40 Jforce
- 41 Flash Magazine Deluxe Joomla
- 42 AVreloaded
- 43 Sobi
- 44 fabrik
- 45 xmap
- 46 Atomic Gallery
- 47 myApi
- 48 mdigg
- 49 Calc Builder
- 50 Cool Debate
- 51
- 52 Scriptegrator Plugin 1.5.5
- 53 Joomnik Gallery
- 54 JMS fileseller
- 55 sh404SEF
- 56 JE Story submit
- 57 FCKeditor
- 58 KeyCaptcha
- 59 Ask A Question AddOn v1.1
- 60 Global Flash Gallery
- 61 com_google
- 62 docman
- 63 Newsletter Subscriber
- 64 Akeeba
- 65 Facebook Graph Connect
- 66 booklibrary
- 67 semantic
- 68 JOMSOCIAL 2.0.x 2.1.x
- 69 flexicontent
- 70 jLabs Google Analytics Counter
- 71 xcloner
- 72 smartformer
- 73 xmap 1.2.10
- 74 Frontend-User-Access 3.4.1
- 75 com properties 7134
- 76 B2 Portfolio
- 77 allcinevid
- 78 People Component
- 79 Jimtawl
- 80 Maian Media SILVER
- 81 alfurqan
- 82 ccboard
- 83 ProDesk v 1.5
- 84 sponsorwall
- 85 Flip wall
- 86 Freestyle FAQ 1.5.6
- 87 iJoomla Magazine 3.0.1
- 88 Clantools
- 89 jphone
- 90 PicSell
- 91 Zoom Portfolio
- 92 zina
- 93 Team's
- 94 Amblog
- 95
- 96
- 97 wmtpic
- 98 Jomtube
- 99 Rapid Recipe
- 100 Health & Fitness Stats
- 101 staticxt
- 102 quickfaq
- 103 Minify4Joomla
- 104 IXXO Cart
- 105 PaymentsPlus
- 106 ArtForms
- 107 autartimonial
- 108 eventcal 1.6.4
- 109 date converter
- 110 real estate
- 111 cinema
- 112 Jreservation
- 113 joomdocs
- 114 Live Chat
- 115 Turtushout 0.11
- 116 BF Survey Pro Free
- 117 MisterEstate
- 118 RSMonials
- 119 Answers v2.3beta
- 120 Gallery XML 1.1
- 121 JFaq 1.2
- 122 Listbingo 1.3
- 123 Alpha User Points
- 124 recruitmentmanager
- 125 Info Line (MT_ILine)
- 126 Ads manager Annonce
- 127 lead article
- 128 djartgallery
- 129 Gallery 2 Bridge
- 130 jsjobs
- 131
- 132 JE Poll
- 133 MediQnA
- 134 JE Job
- 135
- 136 SectionEx
- 137 ActiveHelper LiveHelp
- 138 JE Quotation Form
- 139 konsultasi
- 140 Seber Cart
- 141 Camp26 Visitor
- 142 JE Property
- 143 Noticeboard
- 144 SmartSite
- 145 htmlcoderhelper graphics
- 146 Ultimate Portfolio
- 147 Archery Scores
- 148 ZiMB Manager
- 149 Matamko
- 150 Multiple Root
- 151 Multiple Map
- 152 Contact Us Draw Root Map
- 153 iF surfALERT
- 154 GBU FACEBOOK
- 155 jnewspaper
- 156
- 157 MT Fire Eagle
- 158 Sweetykeeper
- 159 jvehicles
- 160 worldrates
- 161 cvmaker
- 162 advertising
- 163 horoscope
- 164 webtv
- 165 diary
- 166 Memory Book
- 167 JprojectMan
- 168 econtentsite
- 169 Jvehicles
- 170
- 171 gigcalender
- 172 heza content
- 173 SqlReport
- 174 Yelp
- 175
- 176 Codes used
- 177 Future Actions & WIP
- 178 2011 and onwards | https://docs.joomla.org/index.php?title=Vulnerable_Extensions_List&oldid=64212 | 2015-06-30T06:00:44 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Installation
- Install the plugin through the Update Center or download it into the SONARQUBE_HOME/extensions/plugins directory
- Restart the SonarQube server
Usage
- Define some alert thresholds in the quality profile of your project
- Run a quality analysis on your project
Advanced Usage
By default this plugin is active on every project, but you can skip its execution on some of them by setting the sonar.buildbreaker.skip property to true at the project level. This property can also be set at the instance level.
The sonar.buildbreaker.forbidden.conf property can be used to specify configurations that would break the build. For example, if you set the sonar.buildbreaker.forbidden.conf property to sonar.gallio.mode=skip in the SonarQube administration GUI, each analysis on .NET projects executed with Gallio skipped would be "broken".
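Conceptually, the check compares the configured key=value pair against the properties of the analysis being run. The sketch below is illustrative Python, not the plugin's actual code:

```python
def breaks_build(forbidden_conf, analysis_props):
    # forbidden_conf is a "key=value" string, e.g. "sonar.gallio.mode=skip"
    key, _, value = forbidden_conf.partition("=")
    # The build is "broken" when the analysis ran with exactly that setting.
    return analysis_props.get(key) == value

print(breaks_build("sonar.gallio.mode=skip", {"sonar.gallio.mode": "skip"}))  # True
```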
Change Log
Release 1.1 (5 issues)
Release 1.0 (1 issue)
BlackBerry Bold 9900/9930 Smartphones User Guide - BlackBerry Bold Series - 7.1
Turn on or turn off TTY support
- From the home screen, press the key.
- Press the key > Options > TTY.
- Change the TTY field.
- Press the key > Save.
A TTY indicator appears in the connections area at the top of the home screen.
Using the askmethod or repo= options, you can install Fedora from a network server using FTP, HTTP, or NFS protocols. You can also instruct the installation program to consult additional software repositories later in the process.

To determine the installation source, add /12/Fedora/architecture/os/ to the path shown on the web page. A correct mirror location for an i386 system resembles the URL.
Dangerous areas
Fueling areas: Turn off the BlackBerry® Wireless Headset when you are in a fueling area, such as a service station where you could be near gasoline or petrol.
Blasting areas: To avoid interfering with blasting operations, turn off Bluetooth® technology on the headset when in a “blasting area” or in areas that post “Turn off two-way radio”. Obey all signs and instructions.
Product Attributes
Attributes are the building blocks of your product catalog, and describe specific characteristics of a product. Product attributes can be organized into attribute sets, which are then used as templates for creating products.
Attributes determine the type of input control that is used for product options, provide additional information for product pages, and are used as search parameters and criteria for layered navigation, product comparison reports, and promotions. You can create as many attributes and attribute sets as necessary to describe the products in your catalog. In addition to the attributes that you can create, system attributes, such as price, are built into the core Magento platform and cannot be changed.
Creating a New Attribute While Editing a Product
Prerequisites: Bluetooth connections
- Verify that your BlackBerry® smartphone is running BlackBerry® Device Software 4.1 or later.
- Verify that your computer is running Windows® XP SP 3 or later.
The primordial class list indicates which classes should be compiled and baked into the boot image. The bare minimum set of classes needed in the primordial list includes:
- All classes that are needed to load a class from the file system. The class may need to be loaded as a single class file or out of a jar. Failing this there will be an infinite regress on the first class load.
- All classes that are needed by the baseline compiler to compile any method. Failing this we regress when attempting to compile a method so we can execute it.
- Enough of the core VM services and data structures, and class library (java.*), to support the above. This includes threading, memory management, and runtime support for compiled code, among other things.
If you have Joomla! 1.0.13 installed, you will need to download the 1.0.13 to 1.0.14 RC1 patch package. The patch packages are available on JoomlaCode.org. For example, if you are upgrading from 1.0.12 to 1.0.14 RC1 and you would prefer the .tar.gz package format, then select the file named Joomla_1.0.12_to_1.0.14RC1-Patch_Package.tar.gz

Please note that for the Release Candidate(s) we are not providing patch packages for versions prior to 1.0.12. The full range of patch packages will be provided when 1.0.14 Stable is released. If you are not familiar with the process of copying a Joomla! website, then refer to the copying instructions.

There are different ways of installing a package file depending on your particular circumstances. If you have difficulty with one of these methods, then simply try another. If you are not familiar with the process of copying a Joomla! website, then refer to the copying instructions.

Hopefully all will be well and you can relax. If you have any questions before, during, or after the upgrade, then please ask them on the Joomla! 1.0 Upgrading 1.0.x Forum.
Function hooks live at /usr/local/cpanel/hooks/__MODULENAME__/__FUNCTION__NAME__ and are executed whenever the API event that corresponds to its name occurs. cPanel function hooks work by being passed XML-formatted data from the API via STDIN. These are run 100% as a separate process, so function hooks can be written in any language and are generally safe. If a function hook fails, it will not break the cPanel interface.

A CustomEventHandler is a Perl module located at /usr/local/cpanel/Cpanel/CustomEventHandler.pm. The event() method inside of this module will be executed every time a cPanel API call is made. The event() method is passed a variety of data, including parameters being passed to the API call and data returned by the API call. The real value in CustomEventHandlers is that, rather than being executed as separate processes, they are executed as part of the API call. If you wish to deny or modify an event, CustomEventHandler is probably the best method. You can read more about CustomEventHandler here.
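Because a function hook simply receives XML on STDIN, it can be written in any language. For example, a hook written in Python might begin by parsing that input; the element names below are placeholders, since the exact schema depends on the API event:

```python
import io
import sys
import xml.etree.ElementTree as ET

def read_hook_data(stream=sys.stdin):
    """Parse the XML document a function hook receives on STDIN."""
    return ET.fromstring(stream.read())

# For illustration only; a real hook's input comes from cPanel, not a literal:
root = read_hook_data(io.StringIO("<event><name>example</name></event>"))
print(root.find("name").text)  # example
```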
numpy.mod

- numpy.mod(x1, x2[, out]) = <ufunc 'remainder'>

Return element-wise remainder of division.

Computes x1 - floor(x1 / x2) * x2; the result has the same sign as the divisor x2. It is equivalent to the Python modulus operator x1 % x2 and should not be confused with the Matlab(TM) rem function.
Notes
Returns 0 when x2 is 0 and both x1 and x2 are (arrays of) integers.
Examples
>>> np.remainder([4, 7], [2, 3])
array([0, 1])
>>> np.remainder(np.arange(7), 5)
array([0, 1, 2, 3, 4, 0, 1])
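Because Python's % operator follows the same sign convention, the formula from the Notes can be checked with the standard library alone (illustrative sketch):

```python
import math

def mod(x1, x2):
    # The formula from the Notes: x1 - floor(x1 / x2) * x2
    return x1 - math.floor(x1 / x2) * x2

# The result takes the sign of the divisor x2, matching Python's % operator:
print(mod(-7, 3), -7 % 3)   # 2 2
# math.fmod (like Matlab's rem) takes the sign of the dividend x1 instead:
print(math.fmod(-7, 3))     # -1.0
```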
Working with Names and Name Based Attributes: Redux
This post is a follow-up to a previous post entitled “Working with Names and Name Based Attributes”. In that post, I discussed various ways of generating unique values for attributes such as AccountName. Since that post, I have had customers with slightly different naming conventions ask about ways to do something similar, but to also include a numeric value. So, in this post, I’d like to provide you with one more method of generating a semi-random unique attribute value. Please note that, as before, I am using a custom activity (WAL) to generate a unique value in this workflow. WAL download and documentation can be found here. While the random number bit could be generated with a standard function evaluator and appended to the end of an AccountName, unfortunately, there is no way with that method to check for uniqueness. In some cases (where there may be a very small set of objects we need to generate this value for), this may be perfectly acceptable. For AccountName, however, I would much prefer to determine uniqueness, either against FIM or AD.
Let’s start by looking at the real meat and potatoes of the activity. Here are two different examples, showing we can make this either basic or slightly more complex.
Here we see FirstInitial + First 15 of LastName + RandomNumber
Example: The Value Expression:
Left([//Target/FirstName],1)+Left([//Target/LastName],15)+RandomNum(1000,9999)
Will result in:
John Smith = JSMITH1234
John Smithstattersoon = JSMITHSTATTERSOO1234
The reason behind trimming LastName to 15 it to give us a total length of 20 (hard limit in AD for sAMAccountName).
Here, we are doing essentially the same thing, only we are now trimming LastName to 14 and including a MiddleInitial, if present. If MiddleName is not present, we pass over it. This will, however, result in an overall length of 19 rather than 20.
Example: The Value Expression:
Left([//Target/FirstName],1)+IIF(IsPresent([//Target/MiddleName]),Left([//Target/MiddleName],1),"")+Left([//Target/LastName],14)+RandomNum(1000,9999)
Will result in:
John Smithstattersoon = JSMITHSTATTERSO1234
John Robert Smithstattersoon = JRSMITHSTATTERSO1234
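Outside of FIM, the logic of the second expression can be emulated in a few lines of Python. This is an illustrative sketch, not the WAL activity itself, and the upper-casing is an assumption based on the result screenshots (the expression itself does not change case):

```python
import random

def account_name(first, last, middle=None):
    # FirstInitial + optional MiddleInitial + first 14 of LastName + 4-digit random number
    initials = first[:1] + (middle[:1] if middle else "")
    return (initials + last[:14] + str(random.randint(1000, 9999))).upper()

print(account_name("John", "Smithstattersoon", "Robert"))  # e.g. JRSMITHSTATTERSO4711
```

A real implementation would still need the uniqueness check against FIM or AD discussed in this post.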
You will notice in each of the above examples there is a second Value Expression line. In each of these cases, it is the same value with the uniqueness key seed appended.
Example:
Left([//Target/FirstName],1)+IIF(IsPresent([//Target/MiddleName]),Left([//Target/MiddleName],1),"")+Left([//Target/LastName],14)+RandomNum(1000,9999)+[//UniquenessKey]
This is an additional safety to ensure uniqueness should all possibilities be exhausted. Given the above example, there would need to be user objects with account names in the entire range from:
JRSMITHSTATTERSO1000 to JRSMITHSTATTERSO9999
Once this condition has been met, the uniqueness key seed would generate a value containing:
JRSMITHSTATTERSO99992
In all reality, however, this scenario seems highly unlikely.
Now, for the sake of due diligence, here is what the entire new user generating workflow activity would look like:
Before firing this workflow, we might have:
Each one may or may not have a First, Middle or Last Name present. If so, cases may be off (i.e. all upper, all lower or any combination thereof). After this workflow activity, however, we would have:
Note that some users are lacking a MiddleName; as such, their AccountName and DisplayName lack it as well. Likewise, users with a populated MiddleName have a MiddleInitial included in both.
Questions? Comments? Love FIM so much you can't even stand it?
SQL Server 2005 Express Edition - More than you expect:
- Small companies with minimal storage requirements
- Local data storage and caching for smart clients
- Stand alone desktop applications
- Caching of data (from another SQL Server instance, or even one of those “other” databases) in multi tier and web applications
- Distributed data processing
- Web applications with minimal storage requirements
Can SQL Server 2005 Express accept client network connections?
Absolutely.
What are the limitations of SQL Server 2005 Express?
Limitations do exist, however they are very reasonable. The basic limitations are as follows:
- 32 bit only. SQL Server 2005 Express will run on a 64 bit OS, but only in 32 bit mode.
- 1 GB of RAM. SQL Server 2005 Express will utilize a maximum of 1 GB of RAM, even if more RAM is available.
- 4 GB per database. Each database can be a maximum of 4 GB in size.
- 1 CPU. SQL Server 2005 Express will only utilize one CPU, even on a multi processor machine.
- Basic reporting services.
New Nuggets - Content Controls and Repeating Data (or Can I Make a Table?)
I blogged back here about what is required to handle repeating data with dynamic documents (ie Word documents using content controls to bind to custom XML). In the meantime I've had numerous emails from people with very similar problems all asking how they can use content controls with some sort of repeating data. It seemed the best thing to do was to create a nugget (or two or three) and hopefully everyone will be happy.
It's a bit of a monster, split into three parts. And I apologise up front if there's some repetition in there as [a] I'm getting old and [b] it's amazingly easy to forget what you've said (or rather what you've committed to "tape") when you do a couple of retakes.
Anyway, the nuggets will be up very shortly here and you can download the source code here.
And here are the links from the "More Information" slide:
- Original Article on OpenXMLDeveloper.org
- Another Related Blog Entry
- MSDN Developer Centre
- Open XML File Format Snippets
- Lots more on Word Content Controls
- This blog :-) (search on Word Content Controls)
Technorati Tags: office, open xml
dhtmlxScheduler can load data of 3 formats, which are: JSON, XML and iCal.
To load data from an inline dataset, use the parse method:
scheduler.init('scheduler_here',new Date(2009,10,1),"month"); ... scheduler.parse([ {text:"Meeting", start_date:"2019-04-11 14:00", end_date:"2019-04-11 17:00"}, {text:"Conference", start_date:"2019-04-15 12:00", end_date:"2019-04-18 19:00"}, {text:"Interview", start_date:"2019-04-24 09:00", end_date:"2019-04-24 10:00"} ],"json");
Related sample: Displaying events as a cascade
To load data from a file, use the load method:
scheduler.init('scheduler_here',new Date(2018,10,1),"month"); ... scheduler.load("data.json"); //loading data from a file
Related sample: Basic initialization
There are two ways to load data from a database. In both cases, you need to deal with both the client and the server side.
1) The first way is to use a REST API for communication with the server. On the server side, a request handler generates the corresponding response in JSON format:
app.get('/data', function(req, res){ db.event.find().toArray(function(err, data){ //set id property for all records for (var i = 0; i < data.length; i++) data[i].id = data[i]._id; //output response res.send(data); }); });
scheduler.init('scheduler_here', new Date(), "month"); scheduler.load("apiUrl");
The detailed information on Scheduler server-side integration using REST API is given in the article Server-Side Integration.
2) The second way presupposes loading data from database table(s) using PHP Connector.
Static loading from db. Server-side code
include ('dhtmlxConnector/codebase/scheduler_connector.php'); $res=mysql_connect("localhost","root",""); mysql_select_db("sampleDB"); $calendar = new SchedulerConnector($res); $calendar->render_table("events","id","event_start,event_end,text","type");
Static loading from db. Client-side code
scheduler.init('scheduler_here', new Date(), "month"); scheduler.load("events.php");
See the detailed information in the dhtmlxScheduler with dhtmlxConnector guide.
To load data from multiple sources, use a special extension - 'multisource' provided in the ext/dhtmlxscheduler_multisource.js file.
Multiple sources can be used for both static and dynamic loading.
Include the aforementioned file on the page and use the same load method as in:
scheduler.load(["first/source/some","second/source/other"]);
To be correctly parsed, data items must have at least 3 properties: start_date, end_date and text (you can see them in the sample data below).

To be loaded from a database, data items should have one more mandatory property: id.

The default date format for JSON and XML data is '%Y-%m-%d %H:%i' (see the date format specification).
To change it, use the date_format configuration option.
scheduler.config.date_format="%Y-%m-%d %H:%i"; ... scheduler.init('scheduler_here', new Date(2019, 3, 18), "week");
You are not limited to the mandatory properties listed above and can add any custom ones to data items. Extra data properties will be parsed as strings and loaded to the client side where you can use them according to your needs.
See examples of data with custom properties here.
When you set up a database, the expected structure for scheduler events is the following: an events table with at least the id, start_date, end_date and text columns.

If you have recurring events, you need some extra columns for them: rec_type, event_pid and event_length in the standard dhtmlxScheduler schema.

You can define any additional columns; they can be loaded to the client and made available for the client-side API.
By default, dhtmlxScheduler loads all data at once. It may become problematic when you are using big event collections. In such situations you may use the dynamic loading mode and load data by parts, necessary to fill the current viewable area of the scheduler.
To enable the dynamic loading, call the setLoadMode method:
Enabling the dynamic loading
scheduler.setLoadMode("month"); scheduler.load("some.php");
As a parameter the method takes the loading mode that defines the size of the data to load: day, week, month or year.
For example, if you set the 'week' mode, the scheduler will request data just for the current week and load remaining ones on demand.
The predefined loading modes specify the interval of loading data within the set period. For example, you open the Week View in the scheduler for the following dates: from 2018-01-29 to 2018-02-05. Depending on the chosen mode, the dynamic loading will go like this:
scheduler.setLoadMode("day");
Scheduler will request data by days, i.e.: from 2018-01-29 to 2018-02-05.
scheduler.setLoadMode("month");
Scheduler will request data by whole months, i.e.: from 2018-01-01 to 2018-03-01.
scheduler.setLoadMode("year");
Scheduler will request data by whole years, i.e.: from 2018-01-01 to 2019-01-01.
In any case, the requested interval won't be smaller than the rendered one.
The loading interval defines:

- The frequency of loading calls: the greater the loading interval is, the less frequent the calls for dynamic loading will be. Scheduler keeps in memory the already loaded data portion and won't repeat a call for it.
- The request processing time: the greater the loading interval is, the longer a request takes to process, since more data are being loaded at once.
Generated requests look like this:

some.php?from=DATEHERE&to=DATEHERE

where DATEHERE is a valid date value in the format defined by the load_date option.
If you are using dhtmlxConnector at the server side, you don't need to do any additional server-side operations to parse the data.
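If you implement such an endpoint yourself (without dhtmlxConnector), the handler essentially returns the events that overlap the requested range. The following stdlib-only Python sketch is illustrative, not part of dhtmlxScheduler; note that with the default '%Y-%m-%d %H:%i' format, plain string comparison orders dates correctly:

```python
def events_in_range(events, date_from, date_to):
    # Keep events overlapping [date_from, date_to); the fixed-width date format
    # makes lexicographic comparison equivalent to chronological comparison.
    return [e for e in events
            if e["start_date"] < date_to and e["end_date"] > date_from]

events = [
    {"id": 1, "start_date": "2019-04-11 14:00", "end_date": "2019-04-11 17:00"},
    {"id": 2, "start_date": "2019-04-15 12:00", "end_date": "2019-04-18 19:00"},
]
print([e["id"] for e in events_in_range(events, "2019-04-01 00:00", "2019-04-14 00:00")])
# [1]
```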
When you deal with a large data size, it's useful to display the loading spinner. It will show users that the app is actually doing something.
To enable the loading spinner for the scheduler, set the show_loading property to true.
scheduler.config.show_loading = true; ... scheduler.init('scheduler_here',new Date(2018,0,10),"month");
To change the spinner image, replace 'imgs/loading.gif' with your custom image.
While loading data into Timeline and Units views, you need to set an array of sections that will be loaded into views.
In order to load data containing Timeline and Units sections from the backend, you need to implement a more extended configuration:
scheduler.createTimelineView({ .... y_unit: scheduler.serverList("sections"), ... });
scheduler.load("data.json");
"data.json"
{ "data":[ { "id":"1", "start_date":"2018-03-02 00:00:00", "end_date":"2018-03-04 00:00:00", "text":"dblclick me!", "type":"1" }, { "id":"2", "start_date":"2018-03-09 00:00:00", "end_date":"2018-03-11 00:00:00", "text":"and me!", "type":"2" }, { "id":"3", "start_date":"2018-03-16 00:00:00", "end_date":"2018-03-18 00:00:00", "text":"and me too!", "type":"3" }, { "id":"4", "start_date":"2018-03-02 08:00:00", "end_date":"2018-03-02 14:10:00", "text":"Type 2 event", "type":"2" } ], "collections": { "sections":[ {"value":"1","label":"Simple"}, {"value":"2","label":"Complex"}, {"value":"3","label":"Unknown"} ] } }
In the above example the "data" array contains calendar events, and the "collections" hash contains collections that can be referenced via the serverList method.
The most basic configuration is related to the choice of the actual redirection implementation, made by declaring a class for each connector:

bosh {
    seeOtherHost (class: <value>) {}
}
c2s {
    seeOtherHost (class: <value>) {}
}
ws2s {
    seeOtherHost (class: <value>) {}
}

The behaviour of the selected implementation is then controlled with the following properties, for example:

c2s {
    seeOtherHost {
        'default-host' = 'host1;host2;host3'
        'phases' = [ 'OPEN', 'LOGIN' ]
    }
}

- 'default-host' = 'host1;host2;host3' : a semicolon-separated list of hosts to be used for redirection.
- 'phases' = [] : an array of phases in which redirection should be active; currently possible values are:
  - OPEN, which enables redirection during opening of the XMPP stream;
  - LOGIN, which enables redirection upon authenticating the user session.

By default, redirection is currently enabled only in the OPEN phase.
RequestCancelWorkflowExecution
Records a WorkflowExecutionCancelRequested event in the currently running workflow execution identified by the given domain, workflowId, and runId. This logically requests the cancellation of the workflow execution as a whole. It is up to the decider to take appropriate actions when it receives an execution history with this event.
Note

If the runId isn't specified, the WorkflowExecutionCancelRequested event is recorded in the history of the current open workflow execution with the specified workflowId in the domain.

Note

Because this action allows the workflow to properly clean up and gracefully close, it should be used instead of TerminateWorkflowExecution when possible.

Request Syntax

{
   "domain": "string",
   "runId": "string",
   "workflowId": "string"
}
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- domain
The name of the domain containing the workflow execution to cancel.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 256.
Required: Yes
- runId
The runId of the workflow execution to cancel.
Type: String
Length Constraints: Maximum length of 64.
Required: No
- workflowId
The workflowId of the workflow execution to cancel.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 256.
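Before sending the request, the length constraints above can be checked client-side. The helper below is a hypothetical illustration and is not part of any AWS SDK:

```python
def validate_cancel_request(params):
    """Return a list of constraint violations for a RequestCancelWorkflowExecution call."""
    errors = []
    for key in ("domain", "workflowId"):  # required, 1-256 characters
        value = params.get(key)
        if not value or len(value) > 256:
            errors.append(key + " must be 1-256 characters")
    run_id = params.get("runId")          # optional, at most 64 characters
    if run_id is not None and len(run_id) > 64:
        errors.append("runId must be at most 64 characters")
    return errors

print(validate_cancel_request(
    {"domain": "867530901", "workflowId": "20110927-T-1",
     "runId": "94861fda-a714-4126-95d7-55ba847da8ab"}))  # []
```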
Sample Request

X-Amz-Date: ...:49:06 GMT
X-Amz-Target: SimpleWorkflowService.RequestCancelWorkflowExecution
Content-Encoding: amz-1.0
X-Amzn-Authorization: AWS3 AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE,Algorithm=HmacSHA256,SignedHeaders=Host;X-Amz-Date;X-Amz-Target;Content-Encoding,Signature=xODwV3kbpJbWVa6bQiV2zQAw9euGI3uXI82urc+bVeo=
Referer:
Content-Length: 106
Pragma: no-cache
Cache-Control: no-cache

{"domain": "867530901", "workflowId": "20110927-T-1", "runId": "94861fda-a714-4126-95d7-55ba847da8ab"}
Sample Response
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: application/json
x-amzn-RequestId: 6bd0627e-3ffd-11e1-9b11-7182192d0b57
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RequestCancelWorkflowExecution.html | 2019-10-14T03:51:37 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
User Manual

This section is for experienced users who want to get the most out of Hummingbot:
- Installation: How to install Hummingbot via Docker or from source, on a variety of platforms.
- Operation: How to run and manage your trading bots
- Configuration: How to configure a trading bot
- Connectors: Reference for connectors to centralized and decentralized exchanges
- Strategies: Reference for trading strategies available in Hummingbot
- Data Feeds: Reference for external data feeds used to fetch real-time conversion rates, along with how to add your own
- Utilities: Miscellaneous features such as logging and the Telegram integration. | https://docs.hummingbot.io/manual/ | 2019-10-14T05:09:28 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.hummingbot.io |
Add an additional proxy

After the first Edge Encryption proxy is properly configured and tested, you can set up additional proxies on a Linux or Windows machine. Installing multiple proxies on the same machine is not recommended.

About this task

Add additional proxy servers on additional machines to ensure an optimal environment. See Sizing your Edge Encryption environment to determine the number of additional proxies needed.

Note: Make sure that all proxies have the same encryption keys and the same RSA key pair used to digitally sign encryption configuration and encryption rules. If a proxy database was set up as part of the installation, all proxies must use the same proxy database.

Procedure

1. Install the proxy using the command for Linux. See Install the Edge Encryption proxy server on Linux or Windows.
2. Copy all the encryption keys and the edgeencryption.properties file from the first proxy to the new proxy. Encryption keys may be located in the proxy keystore, in the /keys directory, or in a SafeNet KeySecure keystore.
3. Open the edgeencryption.properties file on the new proxy.
4. Change the following properties:
   - edgeencryption.proxy.name: Unique name of the proxy server.
   - edgeencryption.proxy.host: The server name, IP address, or fully-qualified domain name of the computer running the proxy. Do not change this property if installing the proxy server on the same machine the properties file was copied from.
   - edgeencryption.proxy.http.port: Port on the proxy for HTTP communication. Must be unique across all processes on the machine.
   - edgeencryption.proxy.https.port: Port on the proxy for HTTPS communication. Must be unique across processes on the machine.
5. If installing the proxy server on a Windows machine, you must change the name of the service. Open the conf/wrapper.conf file on the new proxy and add the following properties. Caution: You must perform this step before launching the proxy server.
   - wrapper.ntservice.name: Unique name of the Edge Encryption proxy service.
   - wrapper.ntservice.displayname: Edge Encryption proxy service display name.
   - wrapper.ntservice.description: (Optional) Proxy server description.
6. Save and close the file.
7. Launch the proxy using the appropriate command. See Start the Edge Encryption proxy.
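For illustration, the edited portion of edgeencryption.properties on a second proxy might look like this; all values are hypothetical examples, not defaults:

```properties
# Unique per proxy
edgeencryption.proxy.name=edge-proxy-2

# Host name, IP address, or FQDN of the machine running this proxy
edgeencryption.proxy.host=proxy2.example.com

# Ports must be unique across all processes on the machine
edgeencryption.proxy.http.port=8081
edgeencryption.proxy.https.port=8444
```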
Add domains to a visibility domains list

Adding a visibility domain allows a user or group to see and potentially edit records from another domain, regardless of the user or group's normal domain membership.

Before you begin

Role required: admin

About this task

Assigning visibility domains to all members of a group is preferred over granting them to individual users.

Note: Adding a visibility domain does not change a table or record's access control rule requirements.

Procedure

1. Navigate to the domain table.
2. Select the group you want to provide with visibility domains.
3. Add the Visibility domains related list to the form.
4. From the Visibility domains related list, click Edit.
5. Select the domain records you want the group or domain to see.
6. Click Save, and then click Update.
Installing Ubuntu Touch 16.04 images on Halium
Warning
These steps will wipe all of the data on your device. If there is anything that you would like to keep, ensure it is backed up and copied off of the device before continuing.
Now that you’ve built halium-boot, we’re ready to install Ubuntu Touch on your device.
In order to install Ubuntu Touch, you will need a recovery with Busybox, such as TWRP, installed on your phone. You will also need to ensure the /data partition is formatted with ext4 and does not have any encryption on it.
Install halium-boot
We’ll need to install the halium-boot image before installing the rootfs and system image. Reboot your phone into fastboot mode, then do the following from your Halium tree:
cout
fastboot flash boot halium-boot.img
Download the rootfs
Next we’ll need to download the rootfs (root filesystem) that’s appropriate for your device. Right now, we only have one available. Simply download
ubports-touch.rootfs-xenial-armhf.tar.gz from our CI server. If you have a 64-bit ARM (aarch64) device, this same rootfs should work for you. If you have an x86 device, let us know. We do not have a rootfs available for these yet.
Install system.img and rootfs
Clone or download the halium-install repository. This repository contains tools that can be used to install a Halium system image and distribution rootfs. We’ll use the
halium-install script to install Ubuntu Touch on your device:
path/to/halium-install -p ut path/to/rootfs.tar.gz path/to/system.img
The script will copy and extract the files to their proper places, then allow you to set the phablet user’s password.
Get SSH access
When your device boots, it will likely stay at the bootloader screen. However, you should also get a new network connection on the computer you have it plugged in to. We will use this to debug the system.
To confirm that your device has booted correctly, run
dmesg -w and watch for “GNU/Linux device” in the output. If you instead get something similar to “Halium initrd Failed to boot”, please get in contact with us so we can find out why.
Similar to the Halium reference rootfs, you should set your computer’s IP on the newly connected RNDIS interface to
10.15.19.100 if you don’t get one automatically. Then, run the following to access your device:
ssh [email protected]
The password will be the one that you set while running halium-install.
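The manual address setup and login steps above can be sketched as follows from a Linux host. The interface name usb0 is an assumption (check the output of ip link for the name your system actually gives the RNDIS interface), and the privileged commands are only echoed here so you can review them before running them:

```shell
# Assumption: the RNDIS interface appears as usb0; verify with `ip link`.
IFACE="usb0"
HOST_IP="10.15.19.100/24"   # your computer's address on the link
DEVICE_IP="10.15.19.82"     # the device's address

# Review, then run these without the leading `echo`:
echo "sudo ip address add ${HOST_IP} dev ${IFACE}"
echo "ssh phablet@${DEVICE_IP}"
```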
Common Problems
If you have any errors while performing these steps, check to see if any of the following suggestions match what you are seeing. If you have installed successfully, skip down to Continue on.
Continue on
Congratulations! Ubuntu Touch has now booted on your device. Move on to Running Ubuntu Touch to learn about more specific steps you will need to take for a complete port. | https://docs.ubports.com/en/latest/porting/installing-16-04.html | 2019-10-14T03:26:39 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.ubports.com |
Secure Scrapoxy

Secure Scrapoxy with Basic auth
Scrapoxy supports standard HTTP Basic auth (RFC2617).
Step 1: Add username and password in configuration
Open
conf.json and add an auth section inside the proxy section (see Configure Scrapoxy):
{
  "proxy": {
    "auth": {
      "username": "myuser",
      "password": "mypassword"
    }
  }
}
Step 2: Add username and password to the scraper
Configure your scraper to use username and password:
The URL is:
(replace myuser and mypassword with your credentials). | https://scrapoxy.readthedocs.io/en/stable/advanced/security/index.html | 2019-10-14T04:30:26 | CC-MAIN-2019-43 | 1570986649035.4 | [] | scrapoxy.readthedocs.io |
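As an illustrative sketch, a scraper can pass the credentials using standard user:password@host proxy-URL syntax. The host and port below (localhost:8888, Scrapoxy's usual default proxy port) are assumptions; substitute your own values:

```shell
PROXY_USER="myuser"
PROXY_PASS="mypassword"
PROXY_HOST="localhost:8888"   # assumption: Scrapoxy's default proxy port

# Basic auth credentials ride along in the proxy URL itself:
PROXY_URL="http://${PROXY_USER}:${PROXY_PASS}@${PROXY_HOST}"

# Example invocation (echoed for review; drop `echo` to execute):
echo "curl --proxy ${PROXY_URL} http://example.com/"
```

Most HTTP clients accept the same URL form through their proxy settings, for example the http_proxy environment variable.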
Templates are provided for the signaling protocols that are listed here.
This section describes the values of the Blueworx Voice Response system parameters as they are supplied in each template. Because so many templates are available, they are described in four tables; to find the table that you want, see Table 1.1 | http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/i567792.html | 2019-10-14T02:57:16 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.blueworx.com |
Condition builder

Condition builder actions
- You can add a dependent condition by clicking AND or OR next to the condition.
- You can add a top-level condition by clicking AND or OR on the condition builder toolbar above the conditions.
- You can remove a condition by clicking the delete icon (X) next to the condition.

Figure 1. Example AND condition
Figure 2. Example OR condition

Related Concepts: Dot-walking
This page describes how to install a Kubernetes cluster on AWS.
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
CoreOS Tectonic includes the open-source Tectonic Installer that creates Kubernetes clusters with Container Linux nodes on AWS.
kube-aws, a CLI tool originated by CoreOS and maintained by the Kubernetes Incubator, creates and manages Kubernetes clusters with Container Linux nodes, using AWS tools: EC2, CloudFormation, and Autoscaling.
See a simple nginx example to try out your new cluster.
The “Guestbook” application is another popular example to get started with Kubernetes: guestbook example
For more complete applications, please look in the examples directory.
Adding and removing nodes through
kubectl is not supported. You can still scale the number of nodes manually by adjusting the ‘Desired’ and ‘Max’ properties of the Auto Scaling Group, which was created during the installation.
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
kubernetes directory:
cluster/kube-down.sh. | https://v1-12.docs.kubernetes.io/docs/setup/turnkey/ | 2019-10-14T03:31:24 | CC-MAIN-2019-43 | 1570986649035.4 | [] | v1-12.docs.kubernetes.io |
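The teardown step above can be sketched as follows, with placeholder credentials; re-export the variables used at provisioning time, then run the script from inside the kubernetes directory (the script invocation is echoed here for review):

```shell
# Placeholder credentials; use the values the cluster was provisioned with.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"

# From inside the kubernetes directory (drop `echo` to actually tear down):
echo "cluster/kube-down.sh"
```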
@java.lang.SuppressWarnings("rawtypes") public class DefaultUrlMappingsHolder extends java.lang.Object
Default implementation of the UrlMappingsHolder interface that takes a list of mappings and then sorts them according to their precedence rules as defined in the implementation of Comparable.
Performs a match using reverse mappings to look up a mapping from the controller, action, and params. This is refactored to use a list of mappings identified only by controller and action, and then matches against the params to select the mapping that fits best (the most possible matches).
controller- The controller name
action- The action name
httpMethod- The HTTP method
params- The params | http://docs.grails.org/4.0.0.RC1/api/org/grails/web/mapping/DefaultUrlMappingsHolder.html | 2019-10-14T04:09:44 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.grails.org |
Build Guide - Regular - Feature Announcement (slideout)
Use case: You want to announce a new feature, but you don't want to interrupt your users with a large modal. Instead, use a slideout. Your user can still interact with your product when a slideout is present.
Indiegogo Example:
Format of Indiegogo's slideout:
1. An indication this is something new, like a feature release.
2. Information about the release.
3.. | https://docs.appcues.com/article/168-feature-announcement-slideout | 2019-10-14T04:50:42 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/57f7f98cc697911f2d323987/file-deP2jVpBxx.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/57f7fa579033600277a67d14/file-RbWDLbXcG1.png',
None], dtype=object) ] | docs.appcues.com |
Use Azure portal to back up multiple virtual machines
When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The Recovery Services vault resource is available from the Settings menu of most Azure services. Having the Recovery Services vault integrated into the Settings menu of most Azure services makes it very easy to back up data. However, individually working with each database or virtual machine in your business is tedious. What if you want to back up the data for all virtual machines in one department, or in one location? It is easy to back up multiple virtual machines by creating a backup policy and applying that policy to the desired virtual machines. This tutorial explains how to:
- Create a Recovery Services vault
- Define a backup policy
- Apply the backup policy to protect multiple virtual machines
- Trigger an on-demand backup job for the protected virtual machines
Log in to the Azure portal
Log in to the Azure portal.
Create a Recovery Services vault
The Recovery Services vault contains the backup data, and the backup policy applied to the protected virtual machines. Backing up virtual machines is a local process. You cannot back up a virtual machine from one location to a Recovery Services vault in another location. So, for each Azure location that has virtual machines to be backed up, at least one Recovery Services vault must exist in that location.
On the left-hand menu, select All services and in the services list, type Recovery Services. As you type, the list of resources filters. When you see Recovery Services vaults in the list, select it to open the Recovery Services vaults menu.
In the Recovery Services vaults menu, click Add to open the Recovery Services vault menu.
In the Recovery Services vault menu, enter a name for the vault (this tutorial uses myRecoveryServicesVault), select your subscription and a resource group, choose the location for the vault, and click Create.
A Recovery Services vault must be in the same location as the virtual machines being protected. If you have virtual machines in multiple regions, create a Recovery Services vault in each region. This tutorial creates a Recovery Services vault in West Europe because that is where myVM (the virtual machine created with the quickstart) was created.
It can take several minutes for the Recovery Services vault to be created. Monitor the status notifications in the upper right-hand area of the portal. Once your vault is created, it appears in the list of Recovery Services vaults.
When you create a Recovery Services vault, by default the vault has geo-redundant storage. To provide data resiliency, geo-redundant storage replicates the data multiple times across two Azure regions.
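The portal steps above can also be scripted. A hedged sketch using the Azure CLI follows; the resource group name is a placeholder, and you should verify the flags against az backup vault create --help for your installed CLI version:

```shell
RG="myResourceGroup"              # placeholder resource group
VAULT="myRecoveryServicesVault"
LOCATION="westeurope"

# Echoed for review; drop `echo` to create the vault:
echo "az backup vault create --resource-group ${RG} --name ${VAULT} --location ${LOCATION}"
```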
Set backup policy to protect VMs
After creating the Recovery Services vault, the next step is to configure the vault for the type of data, and to set the backup policy. Backup policy is the schedule for how often and when recovery points are taken. Policy also includes the retention range for the recovery points. For this tutorial let's assume your business is a sports complex with a hotel, stadium, and restaurants and concessions, and you are protecting the data on the virtual machines. The following steps create a backup policy for the financial data.
From the list of Recovery Services vaults, select myRecoveryServicesVault to open its dashboard.
On the vault dashboard menu, click Backup to open the Backup menu.
On the Backup Goal menu, in the Where is your workload running drop-down menu, choose Azure. From the What do you want to backup drop-down, choose Virtual machine, and click Backup.
These actions prepare the Recovery Services vault for interacting with a virtual machine. Recovery Services vaults have a default policy that creates a restore point each day, and retains the restore points for 30 days.
To create a new policy, on the Backup policy menu, from the Choose backup policy drop-down menu, select Create New.
In the Backup policy menu, for Policy Name type Finance. Enter the following changes for the Backup policy:
For Backup frequency set the timezone for Central Time. Since the sports complex is in Texas, the owner wants the timing to be local. Leave the backup frequency set to Daily at 3:30AM.
For Retention of daily backup point, set the period to 90 days.
For Retention of weekly backup point, use the Monday restore point and retain it for 52 weeks.
For Retention of monthly backup point, use the restore point from First Sunday of the month, and retain it for 36 months.
Deselect the Retention of yearly backup point option. The leader of Finance doesn't want to keep data longer than 36 months.
Click OK to create the backup policy.
After creating the backup policy, associate the policy with the virtual machines.
In the Select virtual machines dialog select myVM and click OK to deploy the backup policy to the virtual machines.
All virtual machines that are in the same location, and are not already associated with a backup policy, appear. myVMH1 and myVMR1 are selected to be associated with the Finance policy.
When the deployment completes, you receive a notification that deployment successfully completed.
Initial backup
You have enabled backup for the Recovery Services vaults, but an initial backup has not been created. It is a disaster recovery best practice to trigger the first backup, so that your data is protected.
To run an on-demand backup job:
On the vault dashboard, click 3 under Backup Items, to open the Backup Items menu.
The Backup Items menu opens.
On the Backup Items menu, click Azure Virtual Machine to open the list of virtual machines associated with the vault.
The Backup Items list opens.
On the Backup Items list, click the ellipsis (...) to open the Context menu.
On the Context menu, select Backup now.
The Backup Now menu opens.
On the Backup Now menu, enter the last day to retain the recovery point, and click Backup.
Deployment notifications let you know the backup job has been triggered, and that you can monitor the progress of the job on the Backup jobs page. Depending on the size of your virtual machine, creating the initial backup may take a while.
When the initial backup job completes, you can see its status in the Backup job menu. The on-demand backup job created the initial restore point for myVM. If you want to back up other virtual machines, repeat these steps for each virtual machine.
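If you prefer scripting the on-demand backup described above, a hedged Azure CLI sketch follows; all names are placeholders, and the exact flags (particularly --container-name and --retain-until) should be checked against az backup protection backup-now --help:

```shell
RG="myResourceGroup"                 # placeholders throughout
VAULT="myRecoveryServicesVault"
VM="myVM"
RETAIN_UNTIL="01-01-2030"            # last day to retain the recovery point

# Echoed for review; drop `echo` to trigger the backup:
echo "az backup protection backup-now --resource-group ${RG} --vault-name ${VAULT} --container-name ${VM} --item-name ${VM} --retain-until ${RETAIN_UNTIL}"
```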
Clean up resources
If you plan to continue on to work with subsequent tutorials, do not clean up the resources created in this tutorial. If you do not plan to continue, use the following steps to delete all resources created by this tutorial in the Azure portal.
On the myRecoveryServicesVault dashboard, click 3 under Backup Items, to open the Backup Items menu.
On the Backup Items menu, click Azure Virtual Machine to open the list of virtual machines associated with the vault.
The Backup Items list opens.
In the Backup Items menu, click the ellipsis to open the Context menu.
On the context menu select Stop backup to open Stop Backup menu.
In the Stop Backup menu, select the upper drop-down menu and choose Delete Backup Data.
In the Type the name of the Backup item dialog, type myVM.
Once the backup item is verified (a checkmark appears), the Stop Backup button is enabled. Click Stop Backup to stop the policy and delete the restore points.
In the myRecoveryServicesVault menu, click Delete.
Once the vault is deleted, you return to the list of Recovery Services vaults.
Next steps
In this tutorial you used the Azure portal to:
- Create a Recovery Services vault
- Set the vault to protect virtual machines
- Create a custom backup and retention policy
- Assign the policy to protect multiple virtual machines
- Trigger an on-demand back up for virtual machines
Continue to the next tutorial to restore an Azure virtual machine from disk.
OperationScope
This topic applies to Windows Workflow Foundation 4 (WF4).
This sample demonstrates how the messaging activities, Receive and SendReply can be used to expose an existing custom activity as an operation in a workflow service. This sample includes a new custom activity called an
OperationScope. It is intended to ease the development of a workflow service by allowing users to author the body of their operations separately as custom activities and then easily exposing them as service operations using the
OperationScope activity. For example, a custom
Add activity that takes two in arguments and returns one out argument could be exposed as an
Add operation on the workflow service by dropping it into an
OperationScope.
The scope works by inspecting the activity provided as its body. Any unbound in arguments are assumed to be inputs from the incoming message. All out arguments, regardless of whether they are bound, are assumed to be outputs in the subsequent reply message. The exposed operation’s name is taken from the display name of the
OperationScope activity. The end result is that the body activity is wrapped in a Receive and SendReply with the parameters from the messages bound to the arguments of the activity.
This sample exposes a workflow service using HTTP endpoints. To run, proper URL ACLs must be added. For more information, see Configuring HTTP and HTTPS. Executing the following command at an elevated prompt adds the appropriate ACLs (ensure that your Domain and Username are substituted for %DOMAIN%\%UserName%).
netsh http add urlacl url= user=%DOMAIN%\%UserName%
To run the sample
Open the OperationScope.sln solution in Visual Studio 2010.
Set multiple start-up projects by right-clicking the solution in Solution Explorer and selecting Set Startup Projects. Add Scenario and Scenario_Client (in that order) as multiple start-up projects.
Press CTRL+SHIFT+B to build the solution.
Press CTRL+F5 to run the application. The Scenario_Client console prompts you for inputs and the corresponding output is seen in the Scenario console. | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/ee662961(v=vs.100)?redirectedfrom=MSDN | 2019-10-14T03:03:03 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.microsoft.com |
Set up Security Incident Response Orchestration

Prior to using Security Incident Response Orchestration, perform steps to set up various parts of the system, including populating the CMDB, configuring the MID Server, and configuring credentials.

Before you begin
Role required: admin. To use Security Incident Response, you need a fully populated CMDB with domain names. For more information, see Discovery.

Procedure
1. Activate the Security Incident Response plugin.
2. Configure the MID Server.
3. Configure credentials.
The Trust and Safety system determines how much of other users' avatars and avatar features are shown to you in VRChat.
It is important to note that this system is currently in BETA, and may change with any given release or patch.
The Trust and Safety system is designed so that, even when left on default settings, the system will ensure that someone can’t attack you with malicious avatar features. Malicious users won’t have these features shown, so you can have a good experience in the metaverse.
There are two vital components of this system, the Trust system, and the Safety system. Let’s go over them in a bit more detail.
The Trust System
The Trust system is actually already implemented in VRChat! It is what determines when a user is permitted to upload content — you may have heard us calling it “Content Gating” previously. However, the system is far more than just determining if you can upload content — it looks at user behavior to determine “Trust”, which is an aggregate of many variables. We can easily adjust the way we calculate this value, so we can tweak it as time goes on.
Trust Rank
A user’s Trust feeds into something we’ve called a “Trust Rank”, which is an indicator of how much time a user has spent in VRChat, how much content they’ve contributed, the friends they’ve made, and many other factors. These ranks are as follows: Visitor, New User, User, Known User, and Trusted User.
You gain these ranks simply by playing VRChat — as you explore worlds, make friends, and create content, you will gain more Trust, which determines your Trust Rank. The ranks correspond to the color of your nameplate (see the color of the text for an example), and they also play a vital part in the Safety System, which we’ll describe later on.
Friends are a special Trust Rank. Users that you have Friended have all of their avatar features shown in the Normal Shield Level, and you can customize them just like any other Trust Rank.
The transition between “Visitor” and “New User” is a special one — when a Visitor becomes a New User, they gain the ability to upload content to VRChat as long as they’re using a VRChat account. Users receive a notification when they have passed this rank, and are directed to the VRChat documentation page to get started with creating content.
In a future version, users will receive a notification when they transition Trust Ranks.
All ranks from “Known User” and upward also have the ability to toggle off the display of their rank on their nameplate. They can choose to appear as a “User”, which will turn their nameplate to the User color, and will also change how the Safety system treats them to match the User template. This is for users who do not wish to show off their higher rank for whatever reason.
By default, Known and Trusted users will display their rank. Using the toggle will revert you to User.
If a VRChat Team member doesn’t have their “DEV” tag on, they’ll appear as a normal user with their actual Trust Rank.
What does the Safety System do?
- Shaders: Reverts all shaders on a user's avatar to Standard. The behavior behind the Shader blocking system is detailed on our Shader Blocking System doc page.
- Particles and Lights: Enables or disables particle systems on a user's avatar, as well as any light sources. This will also block Line and Trail Renderer components.
Each rank has their own unique settings. To illustrate this, here’s a screenshot of the Safety Menu:
There are four main elements in this Menu to pay attention to.
At the top is a row of “Shield Levels”, which are preset settings for the Safety System that we’ve developed. These Levels should cover most of your needs — however, you’re free to customize them completely in the “Custom” Level, which is a special mode where you can create your own settings.
In the middle are the actual settings for the Shield Level you’ve selected. They cover each of the elements that the Safety System affects, and if you’re in Custom Mode, you can toggle them as you see fit.
At the bottom is a listing of each of the Trust Ranks. When you’ve chosen a “Safety Mode”, you can select each Rank to see how avatar features of that Rank are set.
The text inside the blue area below the Safety Modes changes as you select different Modes. The text at the bottom also changes as you explore the menu, and helps inform you about the UI element you’re pointing at.
Once you’re done setting this up, close the menu. The settings will apply, and you’ll see them take effect on the users around you.
In general, we have tuned the system such that most users can leave the system on “Normal”. Most users should not have to do anything, and the Safety system will work properly. If you wish to customize Safety to your own liking, you are free to do so via the Custom Shield Level.
You can reset the Custom settings you’ve set by clicking “Reset Custom Settings” in the top right of the menu.
Finally, you might notice that the settings for “Safe Mode” in System have disappeared. That’s because they’ve been absorbed into the Safety System. Pressing the keybind for Safe Mode (Shift+Esc on Desktop, Both Triggers + Both Menu buttons in VR) will change your Shield Level to Custom, and will turn off all settings for all ranks. We realize this behavior will wipe your custom settings, and plan on implementing a dedicated “Safe” Shield Level in a future update.
Hiding or Showing Specific Users
You might encounter a user that, despite their higher Trust Rank, is wearing an avatar you’d rather not see. You can select that user and choose “Hide Avatar” in their social panel. This will hide that user’s avatar and disable all features except for voice. You can re-enable their avatar and features by simply clicking the button again, which will be labeled “Show Avatar”.
If you’d like to return the control of showing the avatar to the Safety System, just click “Use Safety Settings”.
You can disable the entire system if you want. Selecting the “None” Shield Level will show almost all features for every rank, or create a Custom Shield Level that has all features on for all ranks. However, we don’t recommend using this setting unless you trust everyone in the room. This does not affect Nuisance users — they will always have everything hidden unless you explicitly unhide them.
You can override these settings on a per-user basis in two ways. The first way is to select a user that has their avatar or features hidden by the current Safety settings, find “Show Avatar”, and click it. This will display the avatar and all avatar features, no matter what Safety mode you have currently active. “Hide Avatar” does the opposite — no matter what Safety Mode you’re on, that user’s avatar will be hidden. You select “Use Safety Settings” to let the Safety Mode that you’re using manage that user’s avatar visibility.
The second way is to make friends! Friends have no features hidden in Normal Shield Level. If you have someone as a Friend, we assume that you implicitly trust them. As such, their avatar features will be completely on in Normal Shield Level. If, for some reason, you want to hide your friend’s avatar, you can still do that with the “Hide Avatar” button.
Quick Menu “Scanning Mode”
When you open your Quick Menu, you will be able to see more information on user’s nameplates, as well as view information on users. This acts as a “scanner”, providing more information when your Quick Menu is open. If you point at a user, you’ll get basic quick info. Clicking on the user will show more details. This will show their avatar thumbnail, their name, their displayed rank, as well as other information.
Interacting with a User
As you can see in the top left, the user's avatar image and name are displayed. Their Trust Rank displays below their avatar thumbnail, and the thumbnail is highlighted in the appropriate color. To the right, you can see their current status. The text box below the status is a “tooltip”, which will give you helpful information depending on what you’re pointing at.
Selecting a user pulls up a more detailed Social Quick Menu.
Using this menu also allows you to send a Friend Request to the user, turn their voice on and off, and view Details on the user (which brings up the full social Menu for that user). Clicking on the “Not Blocked”/”Blocked” button will toggle blocking that user.
Clicking “Prev” and “Next” at the top let you scroll through all the users in the instance. This is useful just to get a picture of how you have settings set for users, who you have muted, who you have as a friend, etc.
“Warn” and “Kick” are buttons available when you are the owner of an instance — they allow you to moderate your instance as you see fit. Back will take you back to the previous menu, which is probably the Quick Menu.
Hiding and Showing Avatars
In the Social Quick Menu, clicking “Use Safety Settings” will allow the Safety Mode that you’ve chosen to determine how to display that user’s avatar. Choosing “Show Avatar” or “Hide Avatar” will override the Safety Mode you’re on, and show/hide the avatar and all features.
Clicking “Show Avatar” is the easiest way to override the Shield Level for an individual user!
Nameplates
You’ll also see icons matching the Safety menu icons over the user’s nameplate. When this icon is present, it means that this feature is being blocked on that user. For example, if you see the Shader icon over someone’s nameplate, that means your Shield Level settings have reverted all the shaders on that user’s avatar to Standard. A speaker icon indicates that user has Avatar Audio playing that is being blocked by the Safety system.
The text in the bottom left indicates what Trust Rank that user has. Again, this text will only appear when you have your Quick Menu open.
The nameplates may also have additional icons on them. You can find all of the icons in the safety menu. For clarity, here is a screenshot with all icons on, and a listing of what the icons mean. A typical nameplate will never look this busy, but we turned them all on just to illustrate them.
Avatar Feature Icons: If these have a red X on them, they have been disabled by your Safety Mode Settings. If there’s no red X, then the features are present on that user’s avatar, but they aren’t being disabled.
Other Nameplate UI Icons
To reiterate, when looking at a Nuisance user with the Quick Menu open, you’ll see a skull icon that indicates that the user has been causing problems for others. When you click them to see their Social Details, you’ll see a red outline around their avatar image, and they will have the “Nuisance” rank listed. Remember, these users have almost all of their features hidden on all of the Shield Levels. If you really want to override their Nuisance rank, you can unhide their avatar or add them as a Friend.
How does the Trust system work?
The Trust system’s inner workings are intentionally hidden so that it is more difficult to exploit. Regardless, the best way to raise your Rank is simply to play VRChat — spending time exploring, making friends, and creating content (once you are permitted to do so) will all help you increase your Trust Rank, and will show off more of your avatar’s features.
There are a few things we want to point out that will not raise your Trust Rank:
- Standing or idling in a room (AFKing)
- Uploading a large amount of low-effort content
- Mass-friending large amounts of users
How am I protected from malicious users?
You’re probably wondering how a malicious user might abuse this system to attack or harass legitimate users of VRChat. We have built this system to be durable, and it is capable of detecting and mitigating attacks against a user’s reputation. Although we can’t share exact details of how we mitigate these types of attacks, it is something that we are both prepared for, and can easily adjust to adapt to potential problems we did not foresee. We have prepared for these types of problems from the very beginning of the design of this system, and have the ability to adapt to rising issues quickly.
Program Status, Summary, and Feedback
In closing, keep in mind that this system is in BETA, and is subject to change. We have been developing and using this system in various forms internally for months, and are very confident in how well it will affect the experience for VRChat users, both new and experienced. We are very interested in feedback from our users regarding the Safety system.
If you have feedback on the UI, how the system protects you, or other aspects of the system, please post on our Canny board for the Trust and Safety System, available here:
When you make a post on the Canny, ensure you title it appropriately. Something like this should be sufficient:
[Build ###] Descriptive Title
Thank you for reading! We will continue to iterate on the Trust and Safety System, working and testing with the Community. We hope this post helps you understand how the Safety and Trust systems work and interact. | https://docs.vrchat.com/docs/vrchat-safety-and-trust-system | 2019-10-14T04:20:10 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.vrchat.com |
Notebook versioning and diffs

Version control
If you’re used to creating many duplicate versions of notebooks with slight modifications and long file names, look no further: Jovian will be your version control for notebooks.
jovian.commit records all the versions under the same notebook project, so each change made by the author or a collaborator becomes a version that can be easily toggled on the website.
Note
You have to own the notebook or be a collaborator to commit changes to the same project notebook. Otherwise, your committed changes are saved to your profile as a new notebook.
View Differences¶
All versions are comparable: you can view the additions and deletions made between any 2 versions of the notebook, and hide or show the parts of the code they have in common.
How to view the differences?
Commit a different version and visit Jovian.

Click on the Version drop-down in the top right corner.

Click on Compare Versions.

Select any 2 versions with the check boxes and click on the View Diff button.

There are more things to be compared, but first let’s add more content to the notebook to understand all the parameters that can be compared. Click Next to follow through.
Set Up the Unified Messaging Test Phone
[This is pre-release documentation and subject to change in future releases. This topic's current status is: Milestone-Ready.]
Applies to: Exchange Server 2010 Beta. Topic Last Modified: 2008-10-23
You can manually install and configure the Exchange Unified Messaging (UM) Test Phone application on a server running Microsoft Exchange Server 2010 that doesn't have the Unified Messaging server role installed or on a client computer. You can use the UM Test Phone to test the functionality of specific Unified Messaging features such as call answering, subscriber access, and auto attendants.
Before You Begin
Before you can run the Exchange UM Test Phone application, you must set up and configure the client computer by installing the appropriate audio devices, audio drivers, speakers, and a microphone. The Exchange UM Test Phone application streams the audio to the audio devices configured on the client computer from the Unified Messaging server. Verify that these devices are connected and working correctly before you run the Exchange UM Test Phone application on a client computer.
Note
If you're using a Unified Messaging server with multiple network adapters or you have multiple IP addresses bound to a single network adapter and you're experiencing intermittent problems, you may have to change the binding order of the network adapters or change the order of the IP addresses to correct these issues.
To perform this procedure, the account you use must be delegated the Exchange Organization Administrator role.
For more information about permissions, delegating roles, and the rights that are required to administer Exchange 2010, see Important: Update for Permissions in Exchange 2010.
Important
When you're testing the functionality of a 64-bit Unified Messaging server, you must copy the required files from a 64-bit Unified Messaging server. However, you can only test the functionality of a 64-bit Unified Messaging server from a 64-bit client computer or a server that isn't running the Unified Messaging server role.
Procedure
To configure a Unified Messaging server using either the 32-bit or 64-bit version of Exchange 2010
Install the Exchange 2010 Unified Messaging server role.
Note
You can install the Unified Messaging server role on an Exchange 2010 computer on which a different server role is currently installed, or you can install the Unified Messaging server role on a separate computer in the same Exchange 2010 organization.
Use Active Directory Users and Computers to create a new user object. When you create the user, don't create a mailbox for it. For more information, see Create a Mailbox for an Existing User.
From the Start menu, open the Exchange Management Shell.
Create the following Unified Messaging objects using the Exchange Management Shell:
Run the following command to create a new Dial Plan object:
New-umdialplan -name dp -Num 5
Run the following command to create a new UM IP gateway:
New-UMIPGateway -name ip -address <IP address>
Important
When you're configuring the IP gateway, the IP address for the object is the IP address of the computer on which the Exchange UM Test Phone application is installed.
Run the following command to create a new UM Hunt Group object:
New-umhuntgroup -name hg -umipgateway ip -umdialplan dp -PilotIdentifier <phonenumber>
Run the following command to create a new Mailbox Policy object:
New-ummailboxpolicy -name p1 -umdialplan dp
Run the following command to create a new UM Auto Attendant object:
New-umautoattendant -name AA -umdialplan dp -PilotIdentifierList <phonenumber>
Run the following command to enable a mail-enabled recipient for Unified Messaging:
Enable-ummailbox -id <smtpaddress> -ummailboxpolicy p1 -extensions 12345
Note
The SMTP address is the SMTP address for the user that was created in the previous step.
Run the following command to add or associate the Unified Messaging server with a dial plan that was created:
Set-umserver -identity <servername> -dialplan dp
Testing Unified Messaging Functionality Using a 32-bit Version of Exchange 2010
The Exchange UM Test Phone application files are copied to the Unified Messaging server when the Unified Messaging server role is installed. However, the DLL files that are copied are located in the global assembly cache, and you cannot view the contents of the global assembly cache using Microsoft Windows Explorer. The easiest way to copy these files to the appropriate location is to use the copy command at a command prompt.
Important
You don't have to perform the manual setup steps in the following section if you're running the Exchange UM Test Phone application on the local Unified Messaging server.
To set up and configure the UM Test Phone on a client computer
Install the 32.
Testing Unified Messaging Functionality Using a 64-bit Version of Exchange 2010
To set up and configure the UM Test Phone on a client computer
Install the 64.
For More Information
- For more information about the UM Test Phone, see Testing a Unified Messaging Server with the Unified Messaging Test Phone.
- For more information about how to test Unified Messaging server functionality, see Testing Unified Messaging Server Functionality. | https://docs.microsoft.com/en-us/previous-versions/exchange-server/exchange-140/aa998254(v%3Dexchg.140) | 2019-10-14T04:55:29 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.microsoft.com |
Klocwork has decoupled the Build Server version from the Klocwork Server. Therefore, you can use older build server tools with newer versions of the Klocwork Server, one full release back. For example, this means you can load Klocwork builds from any version of Klocwork 2018 through to 2018.3 into Klocwork 2019.3 without having to import or migrate data.
For large organizations, this feature provides flexibility by allowing you to upgrade the Server and Desktop plug-ins to take advantage of improvements in a newer release, while still analyzing some or all of your projects with a previous version of Klocwork.
kwadmin lock-project-version --url demosthenes
This command creates a file that contains information about the engine and checker configuration on the build machine, including any custom checkers and whether checkers are enabled or disabled. The server uses this information as the base configuration for the project.
kwbuildproject --url -o tablesdir buildspec.out
kwadmin load demosthenes tablesdir --url
When you lock a project to an earlier version of Klocwork, the lock-project-version command creates a configuration file called default_checkers_configuration.xml and moves it to that project directory on the Klocwork Server. The configuration file contains information about the checkers available in that release, including any custom checkers you deployed to that build machine.
Search the project directory for default_checkers_configuration.xml to verify whether your project is locked to an earlier version of Klocwork.
If you no longer want to lock a project to an earlier version of Klocwork, use the unlock-project-version command, for example:
kwadmin unlock-project-version demosthenes --url | https://docs.roguewave.com/en/klocwork/current/crossversionsupportforbuilds | 2019-10-14T03:41:25 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.roguewave.com |
Line width and color controls
Set a border width and color using the new feature on the properties panel. Select an item, enter a value for a width (the size defaults to points) and choose a color by tapping the patch or the arrow to display the color editor.
SVG selection export
Along with PNGs and JPEGs you can now download a selection as an SVG file. Select a single item or marquee drag around several items to create a selection, then export using the option in the Download > SVG menu.
Logo insert
The new Logo category in the + Add insert menu now allows you to insert a logo from a variety of sources. This replicates the options found on the Replace Logo or Replace Symbol context menu.
Tap the icon and choose your logo from the following...
- Local file
- Cloud drive
- From URL
- Symbol
Other fixes/improvements
- New sign-up/sign-in screens for desktop and mobile with separate pages for signing up and signing in.
- Smoother and faster list editing.
- Many other 'under the hood' fixes and improvements.
Platform Development¶
- Introduction to extensions and how they are configured via the configuration.xml file.
- Introduction to Data structure of eXo Platform components, including Social, Calendar, Wiki, Forum, FAQ, and Poll.
- Instructions on how to configure the templates used for Spaces, Content and FAQ applications.
- Introduction to events of eXo Platform modules, including: Portal, ECMS, Social, and Forum.
Extensions¶
UI Extensions¶
UI Extension components¶
UIExtensionManager¶
This class is used to manage all extensions available in the system. The target is to create the ability to add a new extension dynamically without changing anything in the source code. UIExtensionManager is implemented by UIExtensionManagerImpl.
<component> <key>org.exoplatform.webui.ext.UIExtensionManager</key> <type>org.exoplatform.webui.ext.impl.UIExtensionManagerImpl</type> </component>
UIExtensionPlugin¶
This class allows you to define new extensions in the configuration file
dynamically (for example:
configuration.xml). As you want
UIExtensionManager to manage every extension, you have to plug
UIExtensionPlugin into it:
<external-component-plugins> <target-component>org.exoplatform.webui.ext.UIExtensionManager</target-component> <component-plugin> <name>add.action</name> <set-method>registerUIExtensionPlugin</set-method> <type>org.exoplatform.webui.ext.UIExtensionPlugin</type> ... </component-plugin> </external-component-plugins>
Definition of UI Extensions¶
Each UI Extension is defined as an object param:
... > </object> </object-param> ...
In which:
- Name: the extension’s name.
- Object Type: points to the UI Extension lib class.
- Type: the “parent” UI component which is extended by your UI Extension.
- Rank: used to sort the collection of UI Extensions.
- Component: points to the UI Extension definition class.
UI Extension Definition class¶
This class is used to define filters, actions and templates of each UI Extension:
@ComponentConfig(
  events = { @EventConfig(listeners = EditPageActionComponent.EditPageActionListener.class) }
)
public class EditPageActionComponent extends UIComponent {

  private static final List<UIExtensionFilter> FILTERS =
      Arrays.asList(new UIExtensionFilter[] { new IsViewModeFilter() });

  @UIExtensionFilters
  public List<UIExtensionFilter> getFilters() {
    return FILTERS;
  }

  public static class EditPageActionListener extends UIPageToolBarActionListener<EditPageActionComponent> {
    @Override
    protected void processEvent(Event<EditPageActionComponent> event) throws Exception {
      ...
      super.processEvent(event);
    }
  }
  ...
Parent UI Component¶
This is what your UI Extension will be added to (in this example, the parent UI Componet is UIPageToolBar). All extensions of this component are got by UIExtensionManager.
public List<ActionComponent> getActions() throws Exception {
  UIExtensionManager manager = getApplicationComponent(UIExtensionManager.class);
  List<UIExtension> extensions = manager.getUIExtensions(EXTENSION_TYPE);
  if (extensions != null) {
    for (UIExtension extension : extensions) {
      UIComponent component = manager.addUIExtension(extension, context, this);
      // The child UI component created from the UI Extension is now available to use
      ...
    }
  }
  return activeActions;
}
Internal filter¶
Each UI Extension has a list of filters depending on variety of purposes. It indicates which UI Extension is accepted and which is denied. You are free to create your own filter extended from UIExtensionAbstractFilter. Internal filters are part of the business logic of your component. For example, if your component is only dedicated to articles, you will add an internal filter to your component that will check the type of the current document.
public class IsViewModeFilter extends UIExtensionAbstractFilter {

  public IsViewModeFilter(String messageKey) {
    super(messageKey, UIExtensionFilterType.MANDATORY);
  }

  @Override
  public boolean accept(Map<String, Object> context) throws Exception {
    UIWikiPortlet wikiPortlet = (UIWikiPortlet) context.get(UIWikiPortlet.class.getName());
    return (wikiPortlet.getWikiMode() == WikiMode.VIEW || wikiPortlet.getWikiMode() == WikiMode.VIEWREVISION);
  }

  @Override
  public void onDeny(Map<String, Object> context) throws Exception {
  }
}
Your filter will define which type of filter it belongs to (in UIExtensionFilterType). There are 4 types:
There are 2 conditions for filtering: Accept and onDeny.
- Accept: Describe the “Accept” condition, and how a UI Extension can accept by a context.
- onDeny: What you will do after the filter denies a UI Extension by a specific context (generating a message for pop-up form, for example).
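The accept/onDeny contract described above can be sketched as a minimal stand-alone model (the names here are illustrative, not the real eXo API): every filter must accept the context for the extension to be added, and the first filter that refuses gets its onDeny callback.

```java
import java.util.*;

// Minimal stand-alone model of the filter contract described above.
interface Filter {
    boolean accept(Map<String, Object> context);
    default void onDeny(Map<String, Object> context) { /* e.g. generate a popup message */ }
}

class FilterChain {
    // Returns true only if every filter accepts the context, mirroring how
    // the manager decides whether an extension may be added to a component.
    static boolean acceptAll(List<Filter> filters, Map<String, Object> context) {
        for (Filter f : filters) {
            if (!f.accept(context)) {
                f.onDeny(context);
                return false;
            }
        }
        return true;
    }
}
```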
You now know how and where the filter is plugged into a UI component, but when is it fired?

It fires in two situations: when the extension is retrieved, and when its action is triggered. Thus, you should ensure that your UI Extension is always checked by its filter.
External filter¶
External filters are mainly used to add new filters that are not related to the business logic to your component. A good example is the UserACLFilter which allows you to filter by access permissions.
For example, to make the EditPage action usable only by manager:/platform/administrators, do as follows:
- Create an external filter:
public class UserACLFilter implements UIExtensionFilter { /** * The list of all access permissions allowed */ protected List<String> permissions; /** * {@inheritDoc} */ public boolean accept(Map<String, Object> context) throws Exception { if (permissions == null || permissions.isEmpty()) { return true; } ExoContainer container = ExoContainerContext.getCurrentContainer(); UserACL userACL = (UserACL) container.getComponentInstance(UserACL.class); for (int i = 0, length = permissions.size(); i < length; i++) { String permission = permissions.get(i); if (userACL.hasPermission(permission)) { return true; } } return false; } /** * {@inheritDoc} */ public UIExtensionFilterType getType() { return UIExtensionFilterType.MANDATORY; } /** * {@inheritDoc} */ public void onDeny(Map<String, Object> context) throws Exception {} }
- Add the external filter to a UI Extension in the
configuration.xmlfile:
> <!-- The external filters --> <field name="extendedFilters"> <collection type="java.util.ArrayList"> <value> <object type="org.exoplatform.webui.ext.filter.impl.UserACLFilter"> <field name="permissions"> <collection type="java.util.ArrayList"> <value> <string>manager:/platform/administrators</string> </value> </collection> </field> </object> </value> </collection> </field> </object> </object-param>
Mechanism¶
The UI Extension’s working process is divided into 3 phases:
Setting up¶
At first, you must add dependencies to
pom.xml. In this phase, you
are going to install all elements of UI Extension framework in the
configuration.xml file:
- Implement UIExtensionManager by using UIExtensionManagerImpl.
- Plug UIExtensionPlugin into UIExtensionManager by using the registerUIExtensionPlugin() method.
- List all the UI Extension’s definitions. You can also define your own external filter (optional).
- Create the parent UI Component class.
- Create the UI Extension class.
- Create the internal filters.
UIExtensionPlugin is responsible for looking up all UI Extension
definitions, thus you can use it to obtain all UI Extensions, then plug
it into UIExtensionManager. At present, all UI Extensions in your
project will be managed by UIExtensionManager. Now you can get UI
Extensions everywhere by invoking the
getUIExtensions(String objectType) method.
In the UI Component class, implement a function which:
- Retrieve a collection of UI Extensions which belongs to it by UIExtensionManager:
List<UIExtension> extensions = manager.getUIExtensions("org.exoplatform.wiki.UIPageToolBar");
- Transform them into UIComponent and add them to the parent UI Component:
// You are free to create a context
Map<String, Object> context = new HashMap<String, Object>();
context.put(key, obj);
// UIExtensionManager uses this context and the extension's filters to decide
// whether the extension is added to this UI component
UIComponent component = manager.addUIExtension(extension, context, this);
The addUIExtension() method is responsible for adding extensions to a UI Component. It depends on:

- UIExtension, in particular the UIExtension’s filter. Both internal and external filters have the accept method, so the extension is added only if accept returns ‘true’.
- Context, which is passed as the parameter of the accept method.
Activating¶
The final step is to present the UI Extension in a template.

As all UI Extensions have now become children of the UI component, you can implement the UI component's actions through the UI Extensions' actions. For example:
<%for(entry in uicomponent.getActions()) {
    String action = entry.getId();
    def uiComponent = entry;
    String link = uiComponent.event(action);
%>
<a href="$link" class="$action" title="$action"><%= action %></a>
<%}%>

**Note** You are free to customize your action's stylesheet.
Auxiliary attributes for documents¶
By default, your activities, such as writing a document or uploading a file, are published on the activity stream. However, you can decide whether or not to publish these activities by creating a context named DocumentContext for a specific document. This context stores some auxiliary attributes of the document and helps document listeners make decisions based on these attributes.
This context looks like:
public class DocumentContext {

  private static ThreadLocal<DocumentContext> current = new ThreadLocal<DocumentContext>();

  public static DocumentContext getCurrent() {
    if (current.get() == null) {
      setCurrent(new DocumentContext());
    }
    return current.get();
  }

  ....

  // Attributes can be set and retrieved via:

  /**
   * @return the attributes
   */
  public HashMap<String, Object> getAttributes() {
    return attributes;
  }

  /**
   * @param attributes the attributes to set
   */
  public void setAttributes(HashMap<String, Object> attributes) {
    this.attributes = attributes;
  }
}
For example:
When you upload a document to a drive by using
ManageDocumentService, but do not want to publish this activity on
the activity stream, you can do as follows:
DocumentContext.getCurrent().getAttributes().put(DocumentContext.IS_SKIP_RAISE_ACT, true);
Then, this activity is skipped at:
Object isSkipRaiseAct = DocumentContext.getCurrent().getAttributes().get(DocumentContext.IS_SKIP_RAISE_ACT); if (isSkipRaiseAct != null && Boolean.valueOf(isSkipRaiseAct.toString())) { return; }
Note
The DocumentContext class helps developers manage various kinds of actions on a document, based on its auxiliary attributes. You are free to define new attributes yourself.
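The per-thread pattern behind DocumentContext can be reduced to the following self-contained sketch (simplified and renamed for illustration; it is not the eXo class itself):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the DocumentContext pattern shown above:
// a per-thread bag of auxiliary attributes that listeners can consult,
// e.g. to decide whether an upload should raise an activity.
class DocContext {
    static final String IS_SKIP_RAISE_ACT = "isSkipRaiseAct";

    private static final ThreadLocal<DocContext> current =
        ThreadLocal.withInitial(DocContext::new);

    private final Map<String, Object> attributes = new HashMap<>();

    static DocContext getCurrent() { return current.get(); }

    Map<String, Object> getAttributes() { return attributes; }

    // A listener would use a helper like this to test the flag safely.
    static boolean shouldSkipActivity() {
        Object v = getCurrent().getAttributes().get(IS_SKIP_RAISE_ACT);
        return v != null && Boolean.valueOf(v.toString());
    }
}
```

Because the state lives in a ThreadLocal, each request thread sees its own attribute map, which is why setting the flag before a save affects only that save.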
Overridable Components¶
This section consists of the following main topics:
Information about Social components which can be overridden, including Relationship listener plugin, Profile listener plugin, and Space listener plugin.
Information about 2 overridden components in Forum, consisting of
ForumEventLifeCycle, and
BBCodeRenderer.
Information about the
AnswerEventLifeCyclecomponent which installs event updates for the Answers data that is injected while saving answers, saving questions or posting comments.
Information about the
EventLifeCycleextension point used in the Calendar application of eXo Platform.
Forum¶
The Forum function needs two overridden components:
- ForumEventLifeCycle enables you to listen to the lifecycle of a forum. By implementing ForumEventLifeCycle, you can be notified of new posts and replies, categories and topics. This installation will be injected when the data flow is called to save data.
- BBCodeRenderer is used in the core of Forum to render BBCodes. In which, the data input is text, containing BBCode tags. The data output will be BBCode tags which have been encrypted into HTML tags.
Forum Event LifeCycle¶
ForumEventLifeCycle enables you to listen to the lifecycle of a
forum. By implementing ForumEventLifeCycle, you can be notified of new
posts and replies, categories and topics. This installation will be
injected when the data flow is called to save data.
Configuration plug-in
- You can find the configuration file of this component at: ``integ-forum-social/src/main/resources/conf/portal/configuration.xml``.
For example, to add a Forum to a space of the Social application and keep new activities of Forum (such as new posts and topics) updated to the activities of space, do as follows:
<external-component-plugins> <target-component>org.exoplatform.forum.service.ForumService</target-component> <component-plugin> <name>ForumEventListener</name> <set-method>addListenerPlugin</set-method> <type>org.exoplatform.forum.ext.impl.ForumSpaceActivityPublisher</type> </component-plugin> </external-component-plugins>
Tutorial
To use the ForumEventLifeCycle class, do the following steps:
Create a new class that extends ForumEventListener.
For example: class ABCActivityPublisher
public class ABCActivityPublisher extends ForumEventListener { .............. }
Override functions in this created class. In each function, you can write anything to meet your needs.
public class ABCActivityPublisher extends ForumEventListener {

  public void saveCategory(Category category) {
    ....
  }

  public void saveForum(Forum forum) {
    ....
  }

  public void addTopic(Topic topic, String categoryId, String forumId) {
    ....
  }

  public void updateTopic(Topic topic, String categoryId, String forumId) {
    ....
  }

  public void addPost(Post post, String categoryId, String forumId, String topicId) {
    ....
  }

  public void updatePost(Post post, String categoryId, String forumId, String topicId) {
    ....
  }
}
- The function saveCategory is called when a category is added and/or edited.
- The function saveForum is called when a forum is added and/or edited.
- The
addTopicfunction is called when a topic is added.
- The
updateTopicfunction is called when a topic is updated.
- The
addPostfunction is called when a post is added.
- The
updatePostfunction is called when a post is updated.
Add a new configuration to the
configuration.xml file with the type
that is the class created in the Step 1.
<external-component-plugins> <target-component>org.exoplatform.forum.service.ForumService</target-component> <component-plugin> <name>ForumEventListener</name> <set-method>addListenerPlugin</set-method> <type>{package}.{class name}</type> <!-- example <type>org.exoplatform.forum.ext.impl.ABCActivityPublisher</type> --> </component-plugin> </external-component-plugins>
BBCode Renderer¶
BBCodeRenderer is used in the core of Forum to render BBCodes: the input is text containing BBCode tags, and the output is that text with the BBCode tags converted into HTML tags.
- You can find the configuration file of this component at: ``extension/webapp/src/main/webapp/WEB-INF/ks-extension/ks/forum/bbcodes-configuration.xml``.
For example, to register BBCodeRenderer, do as follows:
<external-component-plugins> <target-component>org.exoplatform.forum.rendering.MarkupRenderingService</target-component> <component-plugin> <name>BBCodeRenderer</name> <set-method>registerRenderer</set-method> <type>org.exoplatform.forum.rendering.spi.RendererPlugin</type> <description>BBCode renderer</description> <init-params> <object-param> <name>renderer</name> <description>Extended BBCodeRenderer</description> <object type="org.exoplatform.forum.bbcode.core.BBCodeRenderer"> <field name="bbCodeProvider"> <object type="org.exoplatform.forum.bbcode.core.ExtendedBBCodeProvider"/> </field> </object> </object-param> </init-params> </component-plugin> </external-component-plugins>
In which, ExtendedBBCodeProvider is the class that implements BBCodeProvider.
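As a toy illustration of what such a renderer does (this is deliberately not the eXo BBCodeRenderer, which supports the full BBCode set), a couple of BBCode tags can be turned into HTML with simple replacements:

```java
// Toy illustration of BBCode rendering (not the eXo BBCodeRenderer):
// text containing BBCode tags goes in, HTML comes out.
class ToyBBCode {
    static String render(String input) {
        return input
            .replace("[b]", "<b>").replace("[/b]", "</b>")
            .replace("[i]", "<i>").replace("[/i]", "</i>");
    }
}
```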
Answers¶
AnswerEventLifeCycle installs event updates for the Answers data that is injected while saving answers, saving questions or posting comments.
Configuration plug-in
You can find the configuration file of this component here.
For example, to add Answers to a space of the Social application and keep new activities of Answers updated to the activities of space, do as follows:
<external-component-plugins> <target-component>org.exoplatform.faq.service.FAQService</target-component> <component-plugin> <name>AnswerEventListener</name> <set-method>addListenerPlugin</set-method> <type>org.exoplatform.forum.ext.impl.AnswersSpaceActivityPublisher</type> </component-plugin> </external-component-plugins>
In which, AnswersSpaceActivityPublisher is the class that implements ForumEventLifeCycle.
Tutorial
To use the AnswerEventLifeCycle class, do the following steps:
Create a new class that extends AnswerEventListener.
For example: ABCActivityPublisher
public class ABCActivityPublisher extends AnswerEventListener { .... }
Override functions in this created class. In each function, you can write anything to meet your needs.
public class ABCActivityPublisher extends AnswerEventListener {

  public void saveQuestion(Question question, boolean isNew) {
    ....
  }

  public void saveAnswer(String questionId, Answer answer, boolean isNew) {
    ....
  }

  public void saveAnswer(String questionId, Answer[] answers, boolean isNew) {
    ....
  }

  public void saveComment(String questionId, Comment comment, boolean isNew) {
    ....
  }
}
- The saveQuestion function is called when a question is added and/or edited.
- The saveAnswer function is called when an answer is added and/or edited.
- The saveAnswer function is called when answers are added and/or edited.
- The saveComment function is called when a comment is added and/or edited.
Add a new configuration to the
configuration.xml file with the type
that is the class created in the Step 1.
<external-component-plugins> <target-component>org.exoplatform.faq.service.FAQService</target-component> <component-plugin> <name>AnswerEventListener</name> <set-method>addListenerPlugin</set-method> <type>{package}.{class name}</type> <!-- example <type>org.exoplatform.forum.ext.impl.ABCActivityPublisher</type> --> </component-plugin> </external-component-plugins>
Calendar¶
EventLifeCycle is an extension point used in Calendar. You can
find the configuration file of this component at:
integration/integ-calendar/integ-calendar-social/src/main/resources/conf/portal/configuration.xml.
See the following example:
<external-component-plugins> <target-component>org.exoplatform.calendar.service.CalendarService</target-component> <component-plugin> <name>CalendarEventListener</name> <set-method>addEventListenerPlugin</set-method> <type>org.exoplatform.cs.ext.impl.CalendarSpaceActivityPublisher</type> </component-plugin> </external-component-plugins>
Details:
CalendarSpaceActivityPublisherimplements
EventLifeCycle. It writes activities in the space activity stream when events or tasks are added/modified.
Data Structure¶
This section consists of the following main topics:
A description of the Files Data Structure.
A description of the Social Data Structure.
Introduction to the Calendar JCR structure, details of child nodes, node types and properties of the following nodes: calendars, eventCategories, categories, eXoCalendarFeed, Y%yyyy% and calendarSetting.
Introduction to the whole Data structure of Wiki.
Introduction to the whole JCR structure of Forum, and comprehensive knowledge of its main nodes: Forum System and Forum Data.
Notifications data structure
A description of Notifications Data Structure.
A description of Email queue Data Structure.
A description of Settings Data Structure.
Introduction to the whole JCR structure of FAQ, and comprehensive knowledge of its main nodes: Category, FAQ setting, Template for FAQ.
Introduction to the whole JCR structure of Poll, and properties of its node type (exo:polls).
Login History data structure
A description of Login History Data Structure.
Files Data structure¶
Files in eXo Platform are stored in the database following this diagram:
Table FILES_BINARY
Table FILES_NAMESPACES
Table FILES_FILES
Table FILES_ORPHAN_FILES
Calendar JCR structure¶
The Calendar data are saved in eXo-JCR under the CalendarApplication data directory. The Calendar JCR structure is divided into two main branches: one for public (exo:application) and the other for users (Users).
The whole JCR structure of Calendar can be visualized in the diagram below:
Calendars¶
The Calendars node of the nt:unstructured type contains the child nodes of the exo:calendar type. When a calendar is created by users or the default ones in the system, it is stored under the calendars node: CalendarApplication/calendars/%calendar_id%. Its node type is exo:calendar that has the following properties:
When a user shares his own calendar with other users, the Id of the calendar node is referred to the node under the sharedCalendar node: CalendarApplication/sharedCalendars/%user_id% following the JCR reference mechanism.
In case of users’ private calendar, two mixin node types exo:remoteCalendar and exo:calendarShared can be added to the exo:calendar node type.
- The exo:remoteCalendar mixin node type has the following properties:
- The exo:calendarShared mixin node type has the following properties:
An event can have many attachments which are stored under the attachment node of the exo:eventAttachmenttype: CalendarApplication/calendars/%calendar_id%/%event_id%/attachment/%attachment_id%. The exo:eventAttachment node type has the following properties:
Event categories¶
The eventCategories node contains all event categories. When an event category is created, it is stored in a node of the exo:eventCategory type, under the eventCategories node defined at the path: CalendarApplication/eventCategories/%eventcategory_id%.
This node type has the following properties:
Each event category node contains the calendar event node of the exo:calendarEvent type. This node of the exo:calendarEvent type is stored at the path: CalendarApplication/eventCategories/%eventcategory_id%/%event_id%.
This node type has the following properties:
eXo Calendar feed¶
The eXoCalendarFeed of the nt:unstructured type contains iCalendars, webDavCalendars as child nodes and others of the exo:rssData type.
The exo:rssData node type has the following properties:
The iCalendars node of the nt:unstructured type contains the child nodes of exo:iCalData type.
The exo:iCalData node type has the following properties:
The webDavCalendars node of the nt:unstructured type contains the child nodes of the exo:caldavCalendarEvent type.
The exo:caldavCalendarEvent node type has the following properties:
Calendar year¶
The Y%yyyy% node of the nt:unstructured type has a name beginning with the Y character followed by the four-digit year. It contains all the M%mm% child nodes.
The M%mm% node of the nt:unstructured type has a name beginning with the M character followed by the two-digit month. It contains all the D%dd% child nodes.
The D%dd% node of the nt:unstructured type has a name beginning with the D character followed by the two-digit day. This node has two child nodes: reminder and events.
The reminder node of the nt:unstructured type contains child nodes named after the Id of the event; these child nodes also have the nt:unstructured type. Each such node is used to group the reminders of the same event. Each reminder is stored under a node of the exo:reminder type: CalendarApplication/Y%yyyy%/M%mm%/D%dd%/reminders/%event_id%/%reminder_id%.
The exo:reminder node type has the following properties:
The events node of the nt:unstructured type contains the child node of the exo:calendarPublicEvent type defined at the path: CalendarApplication/Y%yyyy%/M%mm%/D%dd%/events/%event_id%.
The events node can add the exo:repeatCalendarEvent mixin type, which has the following properties:
Wiki data structure¶
As with Social data, Wiki data is stored in the JPA data source in a set of database tables as follows:
Table WIKI_WIKIS
Table WIKI_PAGES
Table WIKI_TEMPLATES
Table WIKI_WIKI_PERMISSIONS
Table WIKI_DRAFT_PAGES
Table WIKI_DRAFT_ATTACHMENTS
Table WIKI_WATCHERS
Table WIKI_PAGE_ATTACHMENTS
Table WIKI_PAGE_VERSIONS
Table WIKI_PAGES_RELATED_PAGES
Table WIKI_PAGE_PERMISSIONS
Table WIKI_PAGE_MOVES
Table WIKI_EMOTION_ICONS
Forum JCR structure¶
Forum is a JCR-based application. The Forum data are saved in eXo-JCR under the Forum Service data directory. The whole JCR structure of Forum can be visualized in the diagram below:
Forum data¶
The Forum Data node is created from the exo:forumData node type. The data nodes, such as category, forum, topic, post, tag, BBcode and topic type, are stored under the Forum Data node: /exo:applications/ForumService/ForumData.
Category and Category home¶
The Category node is used to store all categories of the forum. This node is a child node of the Forum Data node, and only the Category node type can be added to the Category Home node. The Category Home node, whose type is exo:categoryHome, is stored in /exo:applications/ForumService/ForumData/CategoryHome. The Category node has the exo:forumCategory type and is a child node of the CategoryHome node. This node type is defined to allow adding child nodes of the exo:forum and exo:forumRSS types.
- The exo:forumCategory node type has the following properties:
The exo:forumCategory node can add the exo:forumWatching mixin type, which has the following properties:
Forum¶
The Forum node is defined as a child node of a category and allows adding child nodes of the Topic and RSS types. The node type of Forum is exo:forum. The Forum node is stored in /exo:applications/ForumService/ForumData/CategoryHome/%Category-id%/%Forum-id% and its node type has the following properties:
The exo:forum node can add the exo:forumWatching mixin type. See its properties here.
- The exo:pruneSetting child node has the following properties:
Topic¶
The Topic node is defined as a child node of the Forum node and allows adding child nodes of the Topic, Poll and RSS types. The node types of the Topic and Poll nodes are exo:topic and exo:poll respectively.
- The Topic node is stored in /exo:applications/ForumService/ForumData/CategoryHome/%Category-id%/%Forum-id%/%Topic-id% and its exo:topic node type has the following properties:
The exo:topic node can add the exo:forumWatching mixin type. See its properties here.
- The exo:poll child node type has the following properties:
The Post node is defined as a child node of Topic and allows adding only the Attachment child node type. The Post node has the exo:post type, and the child node type is exo:forumAttachment.
Tag and Tag home¶
The Tag node is used to store data about the tag name, topics with the tag added, the number of users using the tag, and the number of tags in use. The type of the Tag node is exo:forumTag, and it is stored under the Tag Home node of the exo:tagHome type. The Tag node is stored in /exo:applications/ForumService/ForumData/TagHome/%tag-id% and its node type has the following properties:
Forum system¶
The Forum System node is created from the exo:forumSystem node type. It is defined as a child node of Forum Service and can store nodes of the following types under it: exo:banIP, exo:forumUserProfile, exo:statistic and exo:administration. The Forum System node is stored in /exo:applications/ForumService/ForumSystem.
User Profile and User Profile Home¶
The User Profile and User Profile Home nodes are used to store information about each user. User Profile is automatically created by a listener when a user registers to the organization service. Private messages and forum subscriptions can be added to User Profile as child nodes: the exo:privateMessage and exo:forumSubscription node types are defined as child nodes of exo:forumUserProfile, which itself is a child of exo:userProfileHome. The User Profile node is stored under the ForumSystem node: /exo:applications/ForumService/ForumSystem/exo:userProfileHome/exo:forumUserProfile.
- The exo:forumUserProfile node type has the following properties:
- The exo:privateMessage child node type has the following properties:
- The exo:forumSubscription child node type has the following properties:
Statistic and Statistic Home¶
The Statistic and Statistic Home nodes are used to store forum statistics, such as the number of posts, topics, users and active users. The node types are exo:forumStatistic and exo:statisticHome.
- The Statistic node is stored under the Forum System node: /exo:applications/ForumService/ForumSystem/exo:statisticHome/exo:forumStatistic and its exo:forumStatistic node type has the following properties:
Ban IP and Ban IP Home¶
The Ban IP and Ban IP Home nodes are used to store data about banned IP addresses. The exo:banIPHome node type contains the exo:IPHome child node.
- The Ban IP node is stored under the Forum System node: /exo:applications/ForumService/ForumSystem/exo:banIPHome/exo:banIP and its exo:banIP node type has the following properties:
Administration and Administration Home¶
The:
Notifications data structure¶
As with Wiki and Social data, notifications data is also stored in the JPA data source and has the following database structure:
The database table EMAIL_QUEUE stores information about emails sent via the platform.
Table EMAIL_QUEUE
Settings data structure¶
The settings data structure is defined by the following database diagram:
Table STG_SCOPES
Table STG_CONTEXTS
Table STG_SETTINGS
FAQ JCR structure¶
FAQ is a JCR-based application. The FAQ data are stored in the eXo-JCR under the faqApp data directory. The whole FAQ JCR structure can be visualized in the following diagram:
Category¶
The Category node type has the following properties:
FAQ setting¶
This FAQ Setting node stores the user settings data, such as how answers are ordered (in alphabetical order or by created date), the order type (descending or ascending), or the user's selection to sort questions by popularity. Each user has dedicated settings data to select the display preferences in FAQ. The default settings are used if the user has never changed and saved their settings.
- The User Setting node of an individual user is stored in /exo:applications/faqApp/settingHome/userSettingHome/%user-id% and has the following properties:
Poll JCR structure¶
The Poll data are saved in eXo-JCR under the eXoPolls data directory. The whole JCR structure of Poll can be visualized in the diagram below:
The Poll node is used to store the default data in a poll. The node type of the Poll node is exo:poll. The Poll node is stored under the eXoPolls node at /exo:applications/eXoPolls/%PortalName%/Polls/%Poll-id% and its node type (exo:poll) has the following properties:
Templates configuration¶
This section consists of the following main topics:
Provision of simple and explicit examples of the spaces template configuration in the Social function of eXo Platform.
Information about Content types and a list of Contents used in the Content function of eXo Platform.
Instructions on how to configure the FAQ template and change its look and feel, and information about the APIs provided by the UIComponent.
Spaces Templates¶
A space template allows you to configure the layout of applications. In the example below, a container called “Menu” is placed on top of the others. The container contains the SpaceMenuPortlet portlet:
<page>
  <owner-type></owner-type>
  <owner-id></owner-id>
  <name></name>
  <container id="SpacePage" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
    <container id="Menu" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
      <portlet-application>
        <portlet>
          <application-ref>social-portlet</application-ref>
          <portlet-ref>SpaceMenuPortlet</portlet-ref>
        </portlet>
        <access-permissions>*:/platform/users</access-permissions>
        <show-info-bar>false</show-info-bar>
      </portlet-application>
    </container>
    <container id="Application" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
    </container>
  </container>
</page>
In this example, the outer container “SpacePage” contains two inner containers, “Menu” and “Application”, which are displayed in that order, with “Menu” on top and “Application” below.
Changing the order of these two inner containers will swap the display position:
<page>
  <owner-type></owner-type>
  <owner-id></owner-id>
  <name></name>
  <container id="SpacePage" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
    <container id="Application" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
    </container>
    <container id="Menu" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
      ...
    </container>
  </container>
</page>
If you want to display a container in the left and another in the right, place them in the UITableColumnContainer.gtmpl outer container:
<page>
  <owner-type></owner-type>
  <owner-id></owner-id>
  <name></name>
  <container id="SpacePage" template="system:/groovy/portal/webui/container/UITableColumnContainer.gtmpl">
    <container id="Menu" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
      ...
    </container>
    <container id="Application" template="system:/groovy/portal/webui/container/UIContainer.gtmpl">
    </container>
  </container>
</page>
The sample code of space templates can be found here.
Content Templates¶
This section consists of the following two topics:
Details of 2 template types (dialog and view) applied to a node type or a metadata mixin type.
Content list viewer templates
Description about Content List Templates, Category Navigation Templates, and Paginator Templates which are commonly used in Content.
Content types¶
Overview
The templates are applied to a node type or a metadata mixin type. There are three types of templates:
- Dialogs: are in the HTML form that allows creating node instances.
- Views: are in the HTML fragments which are used to display nodes.
- CSS: can be embedded into the Views template to define how to display HTML elements.
From the ECM Admin portlet, the Templates list shows the existing node types associated with Dialog, View and CSS templates. These templates can be attached to permissions (in the usual membership:group form), so that a specific one is displayed according to the rights of the user (very useful in a content validation workflow activity).
Document Type
The checkbox defines whether the node type should be considered a Document Type or not. Sites Explorer considers such nodes as user content and applies the following behavior:
- View template will be used to display the document type nodes.
- Document type nodes can be created by the ‘Add Document’ action.
- Non-document types are hidden (unless the ‘Show non document types’ option is checked).
Templates are written using Groovy templates, which requires some experience with the JCR API and HTML.
Dialogs¶
Dialogs are Groovy templates that generate forms by mixing static HTML fragments and Groovy calls to the components responsible for building the UI at runtime. The result is a simple but powerful syntax.
Common parameters¶
The following parameters are common and can be used for all input fields.
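As a sketch of how several common parameters combine in one field declaration (the field name and the exo:title property path are illustrative, modeled on the validator examples in this section, not a prescribed schema):

```groovy
<%
  // Hypothetical field combining several common parameters:
  // - jcrPath binds the field to a node property,
  // - validate chains two validators,
  // - editable=if-null makes the field read-only once a value exists,
  // - visible=true shows the field on the form.
  String[] titleField = ["jcrPath=/node/exo:title",
                         "validate=empty,length(50;int)",
                         "editable=if-null",
                         "visible=true"] ;
  uicomponent.addTextField("title", titleField) ;
%>
```

This fragment runs inside a dialog template, where the `uicomponent` object is provided by the framework.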
Pass parameters to validators
- “name” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=name", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “email” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=email", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “number” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=number", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “empty” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=empty", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “null” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=null", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “datetime” validator:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=datetime", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
- “length” validator:
For a maximum length of 50 characters:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=empty,length(50;int)", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
For a minimum length of 6 characters and maximum length of 50 characters:
String[] webContentFieldTitle = ["jcrPath=/node/exo:title", "validate=empty,length(6;50;int)", "editable=if-null"]; uicomponent.addTextField("title", webContentFieldTitle) ;
Note
The mixintype can be used only in the root node field (commonly known as the name field).
Text Field¶
- Additional parameters See also: Common parameters
- Example
<% String[] fieldTitle = ["jcrPath=/node/exo:title", "validate=empty"] ; uicomponent.addTextField("title", fieldTitle) ; %>
Non-value field¶
You can neither see a non-value field on the form nor input a value for it. Its value is automatically created or defined when you manage templates.
- Example
String[] hiddenField1 = ["jcrPath=/node/jcr:content", "nodetype=nt:resource", "mixintype=dc:elementSet", "visible=false"] ; uicomponent.addHiddenField("hiddenInput1", hiddenField1) ;
Text Area Field¶
- Additional parameters
See also: Common parameters
- Example
<% String[] fieldDescription = ["jcrPath=/node/exo:description", "validate=empty"] ; uicomponent.addTextAreaField("description", fieldDescription) ; %>
Rich Text Field¶
- Additional parameters
See also: Common parameters
- Example
<% String[] fieldSummary = ["jcrPath=/node/exo:summary", "options=toolbar:CompleteWCM,width:'100%',height:'200px'", "validate=empty"] ; uicomponent.addRichtextField("summary", fieldSummary) ; %>
Creating a custom RichText editor fields
In the WYSIWYG widget section, you already know about a set of default toolbars (CompleteWCM, Default, BasicWCM, Basic, SuperBasicWCM). In this section, you will learn how to create a RichText editor with custom buttons.
Just edit the configuration file and modify or add new items to it. The configuration file of the RichText editor is located in: webapps/eXoWCMResources/eXoConfig.js
Take a look at the eXoConfig.js file to see the definition of a custom toolbar named “MyCustomToolbar”:
config.toolbar_MyCustomToolbar = [
  ['Source','Templates','-','FitWindow','ShowBlocks'],
  ['Cut','Copy','PasteText','-','SpellCheck','-','Undo','Redo'],
  ['WCMInsertGadget','Flash','Table','SpecialChar','WCMInsertContent'],
  '/',
  ['Bold','Italic','Underline','StrikeThrough','-','JustifyLeft','JustifyCenter','JustifyRight','JustifyFull','-','OrderedList','UnorderedList','-','TextColor','BGColor','-','RemoveFormat'],
  ['Link','WCMInsertPortalLink','Unlink','Anchor'],
  '/',
  ['Style','FontFormat','FontName','FontSize']
] ;
Every toolbar set is composed of a series of “toolbar bands” that are grouped in the final toolbar layout. A band's items move together onto new rows when the editor is resized.
Every toolbar band is defined as a separate JavaScript array of strings. Each string corresponds to an available toolbar item defined in the editor code or in a plugin.
- Put the desired button names in square brackets (“[” and “]”) and separate them by commas to create a toolbar band. You can look at the above code to see all the possible toolbar items. If a toolbar item does not exist, a message is displayed when the editor loads.
- Include a separator in a toolbar band by putting the “-” string in it.
- Separate the toolbar bands with commas.
- Use a slash (“/”) to tell the editor to force the next bands to be rendered on a new row instead of following the previous one.
Note
The last toolbar band must have no comma after it.
Calendar Field¶
- Additional parameters
- Example
<% String[] fieldPublishedDate = ["jcrPath=/node/exo:publishedDate", "options=displaytime", "validate=datetime", "visible=true"] ; uicomponent.addCalendarField("publishedDate", fieldPublishedDate) ; %>
Upload Field¶
- Additional parameters
See also: Common parameters
- Example
When you create an upload form, you can store an image in two main ways:
- If you want to store the image as a property, use the following code:
<% String[] fieldMedia = ["jcrPath=/node/exo:image"] ; uicomponent.addUploadField("media", fieldMedia) ; %>
- If you want to store the image as a node, use the following code:
<%
  String[] hiddenField1 = ["jcrPath=/node/exo:image", "nodetype=nt:resource", "visible=false"] ;
  String[] hiddenField2 = ["jcrPath=/node/exo:image/jcr:encoding", "visible=false", "UTF-8"] ;
  String[] hiddenField3 = ["jcrPath=/node/exo:image/jcr:lastModified", "visible=false"] ;
  uicomponent.addHiddenField("hiddenInput1", hiddenField1) ;
  uicomponent.addHiddenField("hiddenInput2", hiddenField2) ;
  uicomponent.addHiddenField("hiddenInput3", hiddenField3) ;
  String[] fieldMedia = ["jcrPath=/node/exo:image"] ;
  uicomponent.addUploadField("media", fieldMedia) ;
%>
- However, this code is not complete. The upload field should be displayed only when no image is stored yet; otherwise, the existing image should be displayed along with an action that enables you to remove it. You can do this as follows:
<% def <a href="$actionLink"> <img src="/eXoResources/skin/DefaultSkin/background/Blank.gif" alt="" class="ActionIcon Remove16x16Icon"/> </a> </div> <% } else { String[] fieldImage = ["jcrPath=/node/exo:image/jcr:data"] ; uicomponent.addUploadField(image, fieldImage) ; } } else { String[] fieldImage = ["jcrPath=/node/exo:image/jcr:data"] ; uicomponent.addUploadField(image, fieldImage) ; } } else if(uicomponent.dataRemoved()) { String[] fieldImage = ["jcrPath=/node/exo:image/jcr:data"] ; uicomponent.addUploadField(image, fieldImage) ; } else { String[] fieldImage = ["jcrPath=/node/exo:image/jcr:data"] ; uicomponent.addUploadField(image, fieldImage) ; } %>
- To have multiple upload fields, you just add the multiValues=true parameter to fieldProperty in dialog1.gtmpl:
// Multi upload
fieldProperty = ["jcrPath=/node/exo:value", "multiValues=true"];
uicomponent.addUploadField("/node/exo_value", fieldProperty);
Note
In this case, the node type definition of the document you are currently editing must allow the document to have a child node named ‘exo:value’ whose node type is ‘nt:unstructured’. All uploaded files of this upload component are stored in this ‘exo:value’ child node.
Radio Field¶
- Additional parameters
See also: Common parameters
- Example
<% String[] fieldDeep = ["jcrPath=/node/exo:isDeep", "defaultValues=true", "options=radio1,radio2,radio3"]; uicomponent.addRadioBoxField("isDeep", fieldDeep); %>
Select box Field¶
The select box widget enables you to render a select box with static values. These values are enumerated in a comma-separated list in the “options” argument.
See also: Common parameters
- Example
String[] mimetype = ["jcrPath=/node/jcr:content/jcr:mimeType", "text/html", "options=text/html,text/plain"] ; uicomponent.addSelectBoxField("mimetype", mimetype) ;
The argument with no key (here “text/html”) is selected by default.
Advanced dynamic select box
In many cases, the previous solution with static options is not good enough, and you may want the select box to be filled dynamically. eXo Platform provides this through a Groovy script, as shown in the code fragment below.
String[] args = ["jcrPath=/node/exo:destWorkspace", "script=ecm-explorer/widget/FillSelectBoxWithWorkspaces:groovy", "scriptParams=production"]; uicomponent.addSelectBoxField("destWorkspace", args) ;
The script itself implements the CmsScript interface, and a cast is done to get the select box object, as shown in the script code below, which fills the select box with the existing JCR workspaces.
import java.util.List ;
import java.util.ArrayList ;
import org.exoplatform.services.jcr.RepositoryService;
import org.exoplatform.services.jcr.core.ManageableRepository;
import org.exoplatform.webui.form.UIFormSelectBox;
import org.exoplatform.webui.core.model.SelectItemOption;
import org.exoplatform.services.cms.scripts.CmsScript;

public class FillSelectBoxWithWorkspaces implements CmsScript {

  private RepositoryService repositoryService_;

  public FillSelectBoxWithWorkspaces(RepositoryService repositoryService) {
    repositoryService_ = repositoryService;
  }

  public void execute(Object context) {
    // The dialog passes the select box component as the script context.
    UIFormSelectBox selectBox = (UIFormSelectBox) context;
    ManageableRepository jcrRepository = repositoryService_.getRepository();
    List options = new ArrayList();
    String[] workspaceNames = jcrRepository.getWorkspaceNames();
    for (name in workspaceNames) {
      options.add(new SelectItemOption(name, name));
    }
    selectBox.setOptions(options);
  }

  public void setParams(String[] params) { }
}
Note
It is also possible to provide a parameter to the script by using the argument “scriptParams”.
Checkbox Field¶
- Additional parameters
See also: Common parameters
- Example
<% String[] fieldDeep = ["jcrPath=/node/exo:isDeep", "defaultValues=true", "options=checkbox1,checkbox2,checkbox3"]; uicomponent.addCheckBoxField("isDeep", fieldDeep); %>
Mixin Field¶
- Additional parameters
See also: Common parameters
- Example
<% String[] fieldId = ["jcrPath=/node", "editable=false", "visible=if-not-null"] ; uicomponent.addMixinField("id", fieldId) ; %>
Action Field¶
One of the most advanced functionalities of this syntax is the ability to plug your own component that shows an interface, enabling you to select the value of the field.
In the generated form, you will see an icon which is configurable thanks to the selectorIcon argument.
You can plug in your own component by using the selectorClass argument. It must follow the eXo UIComponent mechanism and implement the ComponentSelector interface:
package org.exoplatform.ecm.webui.selector;

import org.exoplatform.webui.core.UIComponent;

public interface ComponentSelector {
  public UIComponent getSourceComponent() ;
  public void setSourceComponent(UIComponent uicomponent, String[] initParams) ;
}
- Additional parameters
Depending on the selectorClass, some other parameters can be added. For example, the component org.exoplatform.ecm.webui.tree.selectone.UIOneNodePathSelector needs the following parameter:
The component org.exoplatform.ecm.webui.selector.UIPermissionSelector does not need any special parameters.
- Example
<% String[] fieldPath = ["jcrPath=/node/exo:targetPath", "selectorClass=org.exoplatform.ecm.webui.tree.selectone.UIOneNodePathSelector", "workspaceField=targetWorkspace", "selectorIcon=SelectPath24x24Icon"] ; uicomponent.addActionField("targetPath", fieldPath) ; %>
The following are predefined selectors that can be used in the Action field to select an object from a list provided by the system. For example, to assign permissions to given users/groups, users must select them from the list of users/groups available in the system.
org.exoplatform.ecm.webui.tree.selectone.UIOneNodePathSelector
Allows selecting the node path.
org.exoplatform.ecm.webui.tree.selectone.UIOneTaxonomySelector
Allows selecting the category path.
org.exoplatform.ecm.webui.selector.UIGroupMemberSelector
Allows selecting the membership of a given group.
org.exoplatform.ecm.webui.component.explorer.popup.info.UIGroupSelector
Allows selecting a group.
org.exoplatform.ecm.webui.nodetype.selector.UINodeTypeSelector
Allows selecting node types.
org.exoplatform.ecm.webui.selector.UIPermissionSelector
Allows selecting permission expressions.
org.exoplatform.wcm.webui.selector.UIUserMemberSelector
Allows selecting users from a users list.
Interceptors¶
To add an interceptor to a dialog, you can use the method uicomponent.addInterceptor(String scriptPath, String type).
- Example
<% uicomponent.addInterceptor("ecm-explorer/interceptor/PreNodeSaveInterceptor.groovy", "prev"); %>
WYSIWYG widget¶
Widgets are natively part of the eXo Platform product to provide a simple and easy way for users to get information and notifications in their applications. They complement portlet applications, which focus on more transactional behaviors.
WYSIWYG stands for What You See Is What You Get. This widget is one of the most powerful tools. It renders an advanced JavaScript text editor with many functionalities, including the ability to dynamically upload images or flash assets into a JCR workspace and then to refer to them from the created HTML text.
String[] fieldSummary = ["jcrPath=/node/exo:summary", "options=basic"] ; uicomponent.addWYSIWYGField("summary", fieldSummary) ;
String[] fieldContent = ["jcrPath=/node/exo:text", "options=toolbar:CompleteWCM,'height:410px'", ""] ; uicomponent.addRichtextField("content", fieldContent) ;
The “options” argument is used to tell the component which toolbar should be used.
By default, there are five options for the toolbar: CompleteWCM, Default, BasicWCM, Basic, SuperBasicWCM.
- CompleteWCM: a full set of tools is shown.
The following buttons are shown: Source, Templates, Show Blocks, Cut, Copy, Paste Text, Undo, Redo, SpellCheck, WCM Insert Gadget, Flash, Table, Insert Special Character, WCM Insert Content Link, Bold, Italic, Underline, Strike Through, Justify Left, Justify Center, Justify Right, Justify Full, Ordered List, Unordered List, Text Color, Background Color, Remove Format, Link, WCM Insert Portal Link, Unlink, Anchor, Style, Font Format, Font Name, Font Size, Maximize.
- Default: a large set of tools is shown, no “options” argument is needed in that case.
The following buttons are shown: Source, Templates, Cut, Copy, PasteText, Undo, Redo, SpellCheck, RemoveFormat, Bold, Italic, Underline, Strike Through, Ordered List, Unordered List, Link, Unlink, Anchor, Image, Flash, Table, Special Character, Text Color, Background Color, Show Blocks, Style, Font Format, Font Name, Font Size, Maximize.
- BasicWCM: a minimal set of tools is shown.
The following buttons are shown: Source, Bold, Italic, Underline, Strike Through, OrderedList, UnorderedList, Outdent, Indent, Justify Left, Justify Center, Justify Right, JustifyFull, Blockquote, Link, Unlink, WCM Insert Portal Link, WCM Insert Content Link, Show Blocks, Style, Font Format, Font Name, FontSize, Maximize.
- Basic:
The following buttons are shown: Source, Bold, Italic, Underline, Strike Through, Ordered List, Unordered List, Outdent, Indent, Justify Left, Justify Center, Justify Right, Justify Full, Blockquote, Link, Unlink, Show Blocks, Style, Font Format, Font Name, Font Size, Maximize.
- SuperBasicWCM:
The following buttons are shown: Source, Bold, Italic, Underline, Justify Left, Justify Center, Justify Right, Justify Full, Link, Unlink, WCM Insert Portal Link, WCM Insert Gadget, WCM Insert Content Link.
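As a sketch (the field name and the exo:summary property path are illustrative, modeled on the examples above), the SuperBasicWCM toolbar can be selected the same way as the other sets, via the "options" argument:

```groovy
<%
  // Hypothetical field bound to exo:summary, using the SuperBasicWCM toolbar set.
  String[] fieldSummary = ["jcrPath=/node/exo:summary",
                           "options=toolbar:SuperBasicWCM,'height:200px'"] ;
  uicomponent.addWYSIWYGField("summary", fieldSummary) ;
%>
```

This fragment runs inside a dialog template, where the `uicomponent` object is provided by the framework.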
There is also a simple text area widget, which has text-input area only:
String [] descriptionArgs = ["jcrPath=/node/exo:title", "validate=empty"]; uicomponent.addTextAreaField("description", descriptionArgs) ;
Content Explorer¶
CSS
- By using Content, all the stylesheets of each site can be managed online easily. You do not need to access the file system, modify files, and wait until the server is restarted. Each site has its own CSS folder which can contain one or more CSS files. Each CSS file has its data and a priority: if two files define the same CSS rule, the one with the higher priority is applied. You can also disable some of them to make sure the disabled style is no longer applied to the site.
- For example, the Platform demo package has two main sites by default: ACME and Intranet. The ACME site has two CSS files called BlueStylesheet and GreenStylesheet. The blue one is enabled and the green one is disabled by default. To test this, disable the blue one (by editing it and setting Available to ‘false’) and enable the green one, then go back to the homepage to see the change.
Note
If you do not see any changes, clear the cache and refresh the browser first; this is usually the reason why a new style is not applied.
CKEditor
Basically, if you want to add a rich text area to your dialogs, you can use the addRichtextField method. However, if you want to add the rich text editor manually, you first need to use the addTextAreaField method and some additional JavaScript as shown below:
<script src="/CommonsResources/ckeditor/ckeditor.js"></script>
<div class="control-group">
  <label class="control-label">Description:</label>
  <div class="controls">
    <%
      String[] fieldDescription = ["jcrPath=/node/exo:description"] ;
      uicomponent.addTextAreaField("description", fieldDescription)
    %>
  </div>
</div>
<script>
  CKEDITOR.config.toolbar = "Basic";
  CKEDITOR.replace( 'description' );
</script>
CKEditor Enter mode
When creating/editing content with CKEditor, the Enter mode in CKEditor will determine the default behavior when users press the Enter key.
In eXo Platform, when you press the Enter key inside an editable text region, a new <p/> paragraph is created in the Source editor by default as below.
However, you can change the default behavior of the CKEditor Enter mode (<br/> line breaks or <div/> blocks) when creating a new dialog. For example, if you want the Enter mode to be displayed as <br/> rather than <p/> in CKEditor, simply add the following to the dialog.
String[] htmlArguments = ["jcrPath=/node/default.html/jcr:content/jcr:data", "options=toolbar:CompleteWCM,height:'410px',noSanitization,enterMode:CKEDITOR.ENTER_BR", htmlContent];
In case you want to change the default value from <p/> to <br/> for an existing dialog, follow the steps:
Click Content > Content Administration on the top navigation bar.
Select Templates, then click the edit icon corresponding to one template (for example, Web Content) to open the View & Edit Template form.
Select the Dialog tab, then click the edit icon corresponding to the dialog that is currently used by the template (for example, dialog1).
Replace the following in the Content field:
String[] htmlArguments = ["jcrPath=/node/default.html/jcr:content/jcr:data", "options=toolbar:CompleteWCM,height:'410px',noSanitization", htmlContent];
with the following:
String[] htmlArguments = ["jcrPath=/node/default.html/jcr:content/jcr:data", "options=toolbar:CompleteWCM,height:'410px',noSanitization,enterMode:CKEDITOR.ENTER_BR", htmlContent];
Save the above change, then go to Content > Sites Explorer on the top navigation bar to see your change:
Adding a new ECM template with tabs¶
To avoid refreshing the first tab for every action execution, add a new private function to the template with tabs. In the template, you must insert a new piece of code like the following:
private String getDisplayTab(String selectedTab) {
  if ((uicomponent.getSelectedTab() == null && selectedTab.equals("mainWebcontent"))
      || selectedTab.equals(uicomponent.getSelectedTab())) {
    return "display:block";
  }
  return "display:none";
}

private String getSelectedTab(String selectedTab) {
  if (getDisplayTab(selectedTab).equals("display:block")) {
    return "SelectedTab";
  }
  return "NormalTab";
}
Every onClick event must then be changed as follows:
<div class="UITab NormalTabStyle">
  <div class="<%=getSelectedTab("mainWebcontent")%> ">
    <div class="LeftTab">
      <div class="RightTab">
        <div class="MiddleTab" onClick="<%=uicomponent.event("ChangeTab", "mainWebcontent")%>"><%=_ctx.appRes("WebContent.dialog.label.MainContent")%></div>
      </div>
    </div>
  </div>
</div>
<div class="UITab NormalTabStyle">
  <div class="<%=getSelectedTab("illustrationWebcontent")%> ">
    <div class="LeftTab">
      <div class="RightTab">
        <div class="MiddleTab" onClick="<%=uicomponent.event("ChangeTab", "illustrationWebcontent")%>"><%=_ctx.appRes("WebContent.dialog.label.Illustration")%></div>
      </div>
    </div>
  </div>
</div>
<div class="UITab NormalTabStyle">
  <div class="<%=getSelectedTab("contentCSSWebcontent")%> ">
    <div class="LeftTab">
      <div class="RightTab">
        <div class="MiddleTab" onClick="<%=uicomponent.event("ChangeTab", "contentCSSWebcontent")%>"><%=_ctx.appRes("WebContent.dialog.label.Advanced")%></div>
      </div>
    </div>
  </div>
</div>
Finally, to display the selected tab, simply add it to the style of UITabContent class.
<div class="UITabContent" style="<%=getDisplayTab("mainWebcontent")%>">
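The two template functions above are plain string logic. As an illustration only, here is a standalone Java sketch of the same behavior, with a currentTab field standing in for uicomponent.getSelectedTab():

```java
public class TabDemo {
    // Stand-in for uicomponent.getSelectedTab(); null means no tab has been clicked yet.
    static String currentTab = null;

    static String getDisplayTab(String selectedTab) {
        if ((currentTab == null && selectedTab.equals("mainWebcontent"))
                || selectedTab.equals(currentTab)) {
            return "display:block";
        }
        return "display:none";
    }

    static String getSelectedTab(String selectedTab) {
        if (getDisplayTab(selectedTab).equals("display:block")) {
            return "SelectedTab";
        }
        return "NormalTab";
    }

    public static void main(String[] args) {
        System.out.println(getDisplayTab("mainWebcontent"));          // display:block
        System.out.println(getDisplayTab("illustrationWebcontent"));  // display:none
        currentTab = "illustrationWebcontent"; // simulate the ChangeTab event
        System.out.println(getSelectedTab("illustrationWebcontent")); // SelectedTab
    }
}
```

Clicking a tab in the real template fires the ChangeTab event, which updates the selected tab exactly as the currentTab assignment above does.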
Preventing XSS attacks¶
A typical feature of a content management system is allowing JavaScript in content. However, this exposes content displayed in HTML format to XSS (Cross-Site Scripting) attacks.
Because there is no way to keep JavaScript enabled and prevent XSS attacks at the same time, Content lets you decide whether JavaScript is allowed to run on a field of the content template by using the option parameter.
- To allow JavaScript to execute, add “options = noSanitization” to the dialog template file. Normally, this file is named dialog1.gtmpl.
- For example: The following code shows how to enable JavaScript in the Main Content field of the Free Layout Webcontent content:
String[] htmlArguments = ["jcrPath=/node/default.html/jcr:content/jcr:data", "options=toolbar:CompleteWCM,height:'410px',noSanitization", htmlContent];
- By default, there is no “options = noSanitization” parameter in the dialog template file, and this helps you prevent XSS attacks. When end-users input JavaScript into a content, the JavaScript is automatically deleted when the content is saved.
View¶
The following is a sample code of the View template of a content node:
- Get a content node to display:
<% def node = uicomponent.getNode() ; def originalNode = uicomponent.getOriginalNode() %>
- Display the name of the content node:
<%=node.getName()%>
- Display the exo:title property of the content node:
<%if(node.hasProperty("exo:title")) { %> <%=node.getProperty("exo:title").getString()%> <% } %>
- Display the exo:date property of the content node in a desired format. For example: “MM DD YYYY” or “YYYY MM DD”.
<%
  import java.text.SimpleDateFormat;
  SimpleDateFormat dateFormat = new SimpleDateFormat();
%>
...
<%
  if(node.hasProperty("exo:date")) {
    dateFormat.applyPattern("MMMMM dd yyyy");
%>
<%=dateFormat.format(node.getProperty("exo:date").getDate().getTime())%>
<% } %>
- Display the translation of the Sample.view.label.node-name message in different languages.
<%=_ctx.appRes("Sample.view.label.node-name")%>
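The date display above relies on java.text.SimpleDateFormat. As a self-contained illustration (a fixed date and an explicit English locale stand in for the JCR property value), the same pattern can be exercised in plain Java:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.Locale;

public class DateDemo {
    // Mirrors the template: apply the pattern, then format the date value.
    static String format(Calendar date) {
        SimpleDateFormat dateFormat = new SimpleDateFormat("", Locale.ENGLISH);
        dateFormat.applyPattern("MMMMM dd yyyy");
        return dateFormat.format(date.getTime());
    }

    public static void main(String[] args) {
        // A fixed date stands in for node.getProperty("exo:date").getDate().
        System.out.println(format(new GregorianCalendar(2009, Calendar.FEBRUARY, 5)));
        // -> February 05 2009
    }
}
```

Four or more M letters in the pattern select the full month name, so "MMMMM dd yyyy" renders February 5, 2009 as "February 05 2009".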
CSS¶
In Content, the stylesheet of a node is an optional template embedded in the View template, such as File. To create the stylesheet for the View template, you just need to add the content of the stylesheet into the Content field of the CSS tab.
See the following example of the stylesheet for the nt:file template:
/** LTR skin for nt:file template */
.FileContent {
  color: #0e396c;
}
.FileContent .TopNavContent {
  background: #F8F8F8;
  border: 1px solid #E1E1E1;
}
.FileContent .TopTitle {
  color: #4F4F4F;
  font-weight: bold;
  height: 28px;
  line-height: 26px;
  padding-left: 10px;
  width: 75%;
  overflow: hidden;
  float: left;
}
.FileContent .ActionButton {
  padding: 4px 0 !important;
}
.FileContent .ActionButton a {
  background: url("/eXoWCMResources/skin/images/file/DownloadFile.png") no-repeat scroll 4px center transparent;
  border-left: 1px solid #E1E1E1;
  color: #058EE6;
  line-height: 20px;
  padding: 3px 10px 3px 29px;
}
.FileContent .ActionTextButton a {
  border-left: 1px solid #E1E1E1;
  color: #058EE6;
  line-height: 20px;
  padding: 3px 10px 3px 2px;
}
.FileContent .ECMIframe {
  border: 1px solid #cbcbcb;
  height: 100%;
  overflow: auto;
  width: 93%;
  margin: 5px;
  background: white;
}
Content list viewer templates¶
Content List Templates
The Content List Templates allow you to view the content list with various templates. eXo Platform supports the following content list templates:
Category Navigation Templates
The Category Navigation Templates display all contents under the categories.
Paginator Templates
The Paginator Templates allow you to paginate the content into various pages.
FAQ Template¶
This section consists of the following three main topics:
Information about the configuration plug-in which is used to automatically set up a default template for the FAQ portlet, and details of properties of the template configuration plug-in.
How to change look and feel
Instructions on how to change the template FAQ viewer, either by using plug-in or by using the Edit mode.
API provided by the UIComponent (UIViewer.java)
Introduction to UIViewer, details of APIs and classes (CategoryInfo, QuestionInfo, SubCategoryInfo).
Configuration plug-in¶
The configuration plug-in is used to automatically set up a default template for the FAQ portlet. When the FAQ service starts, it gets the values returned by the TemplatePlugin component to initialize the template for the FAQ portlet.
The template configuration plug-in is configured in the templates-configuration.xml file. In detail:
At runtime of the FAQ service, the FAQService component is called, then the templates-configuration.xml file is executed. The component-plugin named addTemplatePlugin refers to org.exoplatform.faq.service.TemplatePlugin to execute some objects and create default data for the Forum application.
<external-component-plugins>
  <target-component>org.exoplatform.faq.service.FAQService</target-component>
  <component-plugin>
    <name>faq.default.template</name>
    <set-method>addTemplatePlugin</set-method>
    <type>org.exoplatform.faq.service.TemplatePlugin</type>
    <init-params>
      <value-param>
        <name>viewerTemplate</name>
        <value>war:/ks-extension/ks/faq/templates/FAQViewerPortlet.gtmpl</value>
      </value-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
The properties of template configuration plug-in are defined in the init-params tag as follows:
<init-params>
  <value-param>
    <name>viewerTemplate</name>
    <value>war:/ks-extension/ks/faq/templates/FAQViewerPortlet.gtmpl</value>
  </value-param>
</init-params>
How to change look and feel¶
You can change the template FAQ viewer in one of the following two ways:
Plug-in
- Create a file named FAQViewerPortlet.gtmpl. The content of the file is the template of the FAQ viewer.
- Copy this file and paste it into ks-extension/WEB-INF/ks-extension/ks/faq/templates/ that is in the webapps folder of the server (Tomcat, JBoss).
When the server runs, FAQViewerPortlet.gtmpl will initialize the template of the FAQ viewer.
Edit Mode
- Run the server and open the FAQ Portlet.
- Go to edit mode and open the Edit Template tab.
- Edit the content of text-area-input and click Save.
API provided by the UIComponent (UIViewer.java)¶
- UIViewer is the child of the component UIFAQPortlet. It shows the main content of FAQ portlet.
- List of APIs:
- The CategoryInfo class:
...
private String id;
private String path;
private String name;
private List<String> pathName;
private List<QuestionInfo> questionInfos = new ArrayList<QuestionInfo>();
private List<SubCategoryInfo> subCateInfos = new ArrayList<SubCategoryInfo>();
...
- The QuestionInfo class:
...
private String id;
private String question;
private String detail;
private List<String> answers = new ArrayList<String>();
...
Listener Service events¶
In eXo Platform, whenever an action occurs (for example, login/logout, content creation/modification), a corresponding event is sent to the Listener Service, which dispatches the notification to its listeners. Listeners can then perform whatever action they want when receiving an event.
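As a rough sketch of this pattern (the classes below are simplified stand-ins, not the actual org.exoplatform.services.listener API), an event is broadcast under a name and every listener registered for that name receives it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-ins for org.exoplatform.services.listener.{Event, Listener, ListenerService}.
class Event<S, D> {
    final String name;
    final S source;
    final D data;

    Event(String name, S source, D data) {
        this.name = name;
        this.source = source;
        this.data = data;
    }
}

abstract class Listener<S, D> {
    abstract void onEvent(Event<S, D> event);
}

class MiniListenerService {
    private final Map<String, List<Listener<Object, Object>>> listeners = new HashMap<>();

    void addListener(String eventName, Listener<Object, Object> listener) {
        listeners.computeIfAbsent(eventName, k -> new ArrayList<>()).add(listener);
    }

    void broadcast(String eventName, Object source, Object data) {
        for (Listener<Object, Object> listener : listeners.getOrDefault(eventName, new ArrayList<>())) {
            listener.onEvent(new Event<>(eventName, source, data));
        }
    }
}

public class ListenerDemo {
    public static void main(String[] args) {
        MiniListenerService service = new MiniListenerService();
        // eXo event names are plain strings, such as the ones listed later in this section.
        service.addListener("exo.core.security.ConversationRegistry.register",
                new Listener<>() {
                    @Override
                    void onEvent(Event<Object, Object> event) {
                        System.out.println("user signed in: " + event.data);
                    }
                });
        service.broadcast("exo.core.security.ConversationRegistry.register", service, "john");
        // prints: user signed in: john
    }
}
```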
For example, to manage events related to profile updates, the ProfileLifeCycle class, which extends AbstractLifeCycle<ProfileListener, ProfileLifeCycleEvent>, is implemented. To listen to these event types, ProfileListener extends LifeCycleListener<ProfileLifeCycleEvent>. This listener is registered to ProfileLifeCycle as below:
<external-component-plugins>
  <target-component>org.exoplatform.social.core.manager.IdentityManager</target-component>
  <component-plugin>
    <name>ProfileUpdatesPublisher</name>
    <set-method>addProfileListener</set-method>
    <type>org.exoplatform.social.core.application.ProfileUpdatesPublisher</type>
  </component-plugin>
</external-component-plugins>
For simplicity, assume that the user changes his avatar and this activity is then posted on his stream. The process of these actions is illustrated below:
The ProfileUpdatesPublisher listener (which extends ProfileListenerPlugin) is registered into ProfileLifeCycle in the IdentityManager class. When the profile is updated, the below event is broadcast:
/**
 * {@inheritDoc}
 */
public void updateProfile(Profile existingProfile) throws MessageException {
  ....
  broadcastUpdateProfileEvent(existingProfile);
}
Based on the event type (avatar update), the ProfileLifeCycle dispatches that event to the avatarUpdated listener method.
/**
 * Broadcasts update profile event depending on type of update.
 *
 * @param profile
 * @since 1.2.0-GA
 */
protected void broadcastUpdateProfileEvent(Profile profile) {
  if (..) {
    ....
  } else if (profile.getUpdateType().equals(Profile.UpdateType.AVATAR)) { // updateAvatar
    profileLifeCycle.avatarUpdated(profile.getIdentity().getRemoteId(), profile);
  } else if (...) {
  }
}
The fired event is handled by ProfileListener:
@Override
protected void dispatchEvent(ProfileListener listener, ProfileLifeCycleEvent event) {
  switch (event.getType()) {
    case AVATAR_UPDATED:
      listener.avatarUpdated(event);
      break;
    case BASIC_UPDATED:
      ...
  }
}
Information included in the event is extracted and processed.
@Override
public void avatarUpdated(ProfileLifeCycleEvent event) {
  .....
  publishActivity(event, activityMessage, "avatar_updated");
}

private void publishActivity(ProfileLifeCycleEvent event, String activityMessage, String titleId) {
  .....
  publish(event, activity, activityId, titleId);
}
See Understanding the ListenerService for more details.
Events and event listeners in eXo Platform must follow the org.exoplatform.services.listener.Event and org.exoplatform.services.listener.Listener classes, respectively.
To make it easy for you to learn about events in eXo Platform, this section lists the events, with a brief description of each, classified by module:
Portal events¶
Portal configuration events
org.exoplatform.portal.config.DataStorage will fire the following events when a portal configuration object is created/updated/removed:
org.exoplatform.portal.config.DataStorage.portalConfigCreated
org.exoplatform.portal.config.DataStorage.portalConfigUpdated
org.exoplatform.portal.config.DataStorage.portalConfigRemoved
To catch the above events, you can create event listeners that must be subclasses of: org.exoplatform.services.listener.Listener<org.exoplatform.portal.config.DataStorage, org.exoplatform.portal.config.model.PortalConfig>.
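For illustration, such a listener is typically registered through configuration. The sketch below follows the component-plugin pattern used elsewhere in this document; the listener class org.example.MyPortalConfigListener is hypothetical:

```xml
<external-component-plugins>
  <target-component>org.exoplatform.services.listener.ListenerService</target-component>
  <component-plugin>
    <!-- The plugin name is the event name the listener subscribes to. -->
    <name>org.exoplatform.portal.config.DataStorage.portalConfigCreated</name>
    <set-method>addListener</set-method>
    <!-- Hypothetical subclass of Listener<DataStorage, PortalConfig>. -->
    <type>org.example.MyPortalConfigListener</type>
  </component-plugin>
</external-component-plugins>
```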
Page configuration events
org.exoplatform.portal.config.DataStorage will fire the following events when a page configuration object is created/updated/removed:
org.exoplatform.portal.config.DataStorage.pageCreated
org.exoplatform.portal.config.DataStorage.pageUpdated
org.exoplatform.portal.config.DataStorage.pageRemoved
The related event listeners must be extended from org.exoplatform.services.listener.Listener<org.exoplatform.portal.config.DataStorage, org.exoplatform.portal.config.model.Page>.
Navigation tree events
org.exoplatform.portal.mop.navigation.NavigationService will broadcast the following events when a navigation is created/updated/removed:
org.exoplatform.portal.mop.navigation.navigation_created
org.exoplatform.portal.mop.navigation.navigation_updated
org.exoplatform.portal.mop.navigation.navigation_destroyed
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.portal.mop.navigation.NavigationService, org.exoplatform.portal.mop.SiteKey>.
Page events
org.exoplatform.portal.mop.page.PageService will broadcast the following events when a page is created/updated/removed.
org.exoplatform.portal.mop.page.page_created
org.exoplatform.portal.mop.page.page_updated
org.exoplatform.portal.mop.page.page_destroyed
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.portal.mop.page.PageService, org.exoplatform.portal.mop.page.PageKey>.
Registered/unregistered conversation state events
org.exoplatform.services.security.ConversationRegistry will fire the following events when any user signs in/out the portal.
exo.core.security.ConversationRegistry.register
exo.core.security.ConversationRegistry.unregister
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.security.ConversationRegistry, org.exoplatform.services.security.ConversationState>.
Servlet context events
org.exoplatform.web.GenericHttpListener will broadcast the following events when a Servlet context is initialized/destroyed:
org.exoplatform.web.GenericHttpListener.contextInitialized
org.exoplatform.web.GenericHttpListener.contextDestroyed
The related event listeners must be extended from org.exoplatform.services.listener.Listener<org.exoplatform.container.PortalContainer, javax.servlet.ServletContextEvent>.
HTTP Session Events
org.exoplatform.web.GenericHttpListener will broadcast the following events when an HTTP session is created/destroyed:
org.exoplatform.web.GenericHttpListener.sessionCreated
org.exoplatform.web.GenericHttpListener.sessionDestroyed
The related event listeners must be extended from org.exoplatform.services.listener.Listener<org.exoplatform.container.PortalContainer, javax.servlet.http.HttpSessionEvent>.
ECMS events¶
Content events
InlineEditingService and RenameConnector will fire the below event when a content is created, updated, or removed from the database:
CmsService.event.postEdit
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.cms.CmsService, javax.jcr.Node>.
CmsService will fire these events when a content is created/added.
CmsService.event.postCreate
CmsService.event.postEdit
CmsService.event.preCreate
CmsService.event.preEdit
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.cms.CmsService, javax.jcr.Node>.
LinkManager will fire the following event when a link is added to the content.
CmsService.event.postEdit
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.cms.CmsService, javax.jcr.Node>.
WebDavService will fire this event when a content is uploaded through WebDav.
WebDavService.event.postUpload
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.jcr.webdav.WebDavService, javax.jcr.Node>.
File events
FileUploadHandler and WebdavService will fire the below events when a file is created or removed from the database:
FileActivityNotify.event.FileRemoved
ActivityNotify.event.FileCreated
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<java.lang.Object, javax.jcr.Node>.
UIDocumentForm will fire the following event when a file is created.
ActivityNotify.event.FileCreated
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<java.lang.Object, javax.jcr.Node>.
DeleteManageComponent will fire the following event when a file is removed from the database.
FileActivityNotify.event.FileRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<java.lang.Object, javax.jcr.Node>.
AddNodeActivityAction will fire the following event when an attachment is added into the database.
ActivityNotify.event.AttachmentAdded
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, javax.jcr.Node>.
RemoveFileActivityAction will fire the following event when a file is removed from the database.
FileActivityNotify.event.FileRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<java.lang.Object, javax.jcr.Node>.
RemoveNodeActivityAction will fire the following event when an attachment is removed from the database.
ActivityNotify.event.AttachmentRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, javax.jcr.Node>.
Publication events
WCMPublicationService will fire the below event when the document publication state is changed.
WCMPublicationService.event.updateState
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.cms.CmsService, javax.jcr.Node>.
AuthoringPublicationPlugin will fire the below event when a node is put into a publication lifecycle.
PublicationService.event.postInitState
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.cms.CmsService, javax.jcr.Node>.
AuthoringPublicationPlugin will fire the following events when publication state of a document is changed.
ActivityNotify.event.StateChanged
PublicationService.event.postUpdateState
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
Version events
UIPublicationPanel will fire the below event when the document version is restored.
ActivityNotify.event.RevisionChanged
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
Comment events
CommentService will fire the following events when a comment is created/updated/removed.
ActivityNotify.event.CommentAdded
ActivityNotify.event.CommentUpdated
ActivityNotify.event.CommentRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, javax.jcr.Node>.
Tag events
NewFolksonomyService will fire the following events when a tag is created/removed.
ActivityNotify.event.TagAdded
ActivityNotify.event.TagRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
Node events
CmsService, MoveNodeManageComponent and PasteManageComponent will fire the below event when a node is moved to another place.
ActivityNotify.event.NodeMoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
RemoveNodeActivityAction will fire the below event when a node is removed from the database.
ActivityNotify.event.NodeRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
UIDocumentForm will fire the below event when a new node is created.
ActivityNotify.event.NodeCreated
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<java.lang.Object, javax.jcr.Node>.
Property events
- AddFilePropertyActivityAction will fire the below event when a new property is added to a file:
FileActivityNotify.event.PropertyAdded
- EditFilePropertyActivityAction will fire the below event when a property of a file is modified.
FileActivityNotify.event.PropertyUpdated
- EditPropertyActivityAction will fire the below event when a property of a document is modified.
ActivityNotify.event.PropertyUpdated
- RemoveFilePropertyActivityAction will fire the below event when a property is removed from a file.
FileActivityNotify.event.PropertyRemoved
The listeners of Property events must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
Category events
TaxonomyService will fire the following events when a category is added to/removed from a node.
ActivityNotify.event.CategoryAdded
ActivityNotify.event.CategoryRemoved
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<javax.jcr.Node, java.lang.String>.
Artifacts events
CreatePortalArtifactService will fire the below event when artifacts are deployed for a new site.
PortalArtifactsInitializerServiceImpl.portal.onCreate
The related event listeners must be subclasses of org.exoplatform.services.listener.Listener<org.exoplatform.services.jcr.ext.common.SessionProvider, java.lang.String>.
Forum events¶
Forums events
org.exoplatform.forum.service.ForumEventLifeCycle will broadcast the following events when:
A category is saved.
org.exoplatform.forum.service.ForumEventLifeCycle.saveCategory
A forum is saved.
org.exoplatform.forum.service.ForumEventLifeCycle.saveForum
A topic is added/updated/moved/split, or two or more topics are merged.
org.exoplatform.forum.service.ForumEventLifeCycle.addTopic
org.exoplatform.forum.service.ForumEventLifeCycle.updateTopic
org.exoplatform.forum.service.ForumEventLifeCycle.updateTopics
org.exoplatform.forum.service.ForumEventLifeCycle.moveTopic
org.exoplatform.forum.service.ForumEventLifeCycle.mergeTopic
org.exoplatform.forum.service.ForumEventLifeCycle.splitTopic
A post is added/updated.
org.exoplatform.forum.service.ForumEventLifeCycle.addPost
org.exoplatform.forum.service.ForumEventLifeCycle.updatePost
An activity is removed.
org.exoplatform.forum.service.ForumEventLifeCycle.removeActivity
A comment is removed.
org.exoplatform.forum.service.ForumEventLifeCycle.removeComment
Answers events
org.exoplatform.faq.service.AnswerEventLifeCycle will broadcast the following events when:
- A question is saved/voted/unvoted/removed.
org.exoplatform.faq.service.AnswerEventLifeCycle.saveQuestion
org.exoplatform.faq.service.AnswerEventLifeCycle.voteQuestion
org.exoplatform.faq.service.AnswerEventLifeCycle.unVoteQuestion
org.exoplatform.faq.service.AnswerEventLifeCycle.removeQuestion
- An answer is saved/removed.
org.exoplatform.faq.service.AnswerEventLifeCycle.saveAnswer
org.exoplatform.faq.service.AnswerEventLifeCycle.removeAnswer
- A comment is saved/removed.
org.exoplatform.faq.service.AnswerEventLifeCycle.saveComment
org.exoplatform.faq.service.AnswerEventLifeCycle.removeComment
Extended Answers events
org.exoplatform.faq.service.ExtendedAnswerEventLifeCycle (extending AnswerEventLifeCycle) will broadcast the following event when a question is moved:
org.exoplatform.faq.service.ExtendedAnswerEventLifeCycle.moveQuestions
Poll events
org.exoplatform.poll.service.PollEventLifeCycle will broadcast the following events when a poll is saved/closed/removed:
org.exoplatform.poll.service.PollEventLifeCycle.savePoll
org.exoplatform.poll.service.PollEventLifeCycle.closePoll
org.exoplatform.poll.service.PollEventLifeCycle.pollRemove
Social¶
The following components in Social can be overridden:
Relationship listener plugin¶
RelationshipListenerPlugin enables you to listen to events of a relationship between users. By implementing this overridable component, users will be notified when a connection request is accepted or a connection is removed.
To use the RelationshipListenerPlugin class, do as follows:
Create a new class, for example, RelationshipPublisher, that extends RelationshipListenerPlugin.
Override the functions in this created class. In each function, you can write anything to meet your needs.
The confirmed function is called when a connection request is accepted.
The removed function is called when a connection is removed.
Add a new configuration to the /social-config/src/main/resources/conf/social/core-configuration.xml file with the type that is the class created in Step 1.
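A hedged Java skeleton of such a plugin is shown below. The base class and method signatures here are simplified stand-ins so the sketch is self-contained; the real eXo Social API passes a RelationshipEvent object instead of plain user names:

```java
// Stand-ins for the real eXo Social types, for illustration only.
abstract class RelationshipListenerPlugin {
    public abstract String confirmed(String sender, String receiver);
    public abstract String removed(String sender, String receiver);
}

public class RelationshipPublisher extends RelationshipListenerPlugin {
    @Override
    public String confirmed(String sender, String receiver) {
        // e.g. post an activity on both users' streams
        return sender + " is now connected with " + receiver;
    }

    @Override
    public String removed(String sender, String receiver) {
        // e.g. log or clean up after the disconnection
        return sender + " is no longer connected with " + receiver;
    }

    public static void main(String[] args) {
        RelationshipListenerPlugin plugin = new RelationshipPublisher();
        System.out.println(plugin.confirmed("john", "mary"));
        // prints: john is now connected with mary
    }
}
```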
Profile listener plugin¶
ProfileListenerPlugin enables you to listen to events of users' profiles. By implementing this overridable component, a notification will be updated in the Activity Stream when a profile is changed.
To use the ProfileListenerPlugin class, do as follows:
Create a new class, for example, ProfileUpdatesPublisher, that extends ProfileListenerPlugin.
Override the functions in this created class. In each function, you can write anything to meet your needs.
The avatarUpdated function is called when the avatar picture of a user is updated.
The basicInfoUpdated function is called when the basic account information of a user is updated.
The contactSectionUpdated function is called when the contact information of a user is updated.
The experienceSectionUpdated function is called when the experience section of a user is updated.
The headerSectionUpdated function is called when the header section of a user is updated.
Add a new configuration to the /social-config/src/main/resources/conf/social/core-configuration.xml file with the type that is the class created in Step 1.
Space listener plugin¶
SpaceListenerPlugin enables you to listen to events of spaces. By implementing this overridable component, the notification will be updated in the Activity Stream of the space or of its members when the space information is changed or when a user joins or leaves the space.
To use the SpaceListenerPlugin class, do as follows:
Create a new class, for example, SpaceActivityPublisher, that extends SpaceListenerPlugin.
Override the functions in this created class. In each function, you can write anything to meet your needs.
The grantedLead function is called when a member is promoted to space manager.
The revokedLead function is called when a user is demoted from space manager.
The joined function is called when a user joins a space.
The left function is called when a user leaves a space.
The spaceRenamed function is called when a space is renamed.
The spaceDescriptionEdited function is called when the description of a space is changed.
The spaceAvatarEdited function is called when the space avatar is changed.
Add a new configuration to the /social-config/src/main/resources/conf/social/core-configuration.xml file with the type that is the class created in Step 1. | https://exo-documentation.readthedocs.io/en/latest/Platform_Development.html | 2019-10-14T03:18:55 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['_images/email-queue-data-model.png', 'image17'], dtype=object)
array(['_images/settings-data-model.png', 'image16'], dtype=object)] | exo-documentation.readthedocs.io |
Documentation for picamera¶
- 4.5. Capturing timelapse sequences
- 4.6. Capturing to a network stream
- 4.7. Recording video to a file
- 4.8. Recording video to a stream
- 4.9. Recording over multiple files
- 4.10. Recording full-resolution video
- 4.11. Recording to a circular stream
- 4.12. Recording to a network stream
- 4.13. Controlling the LED
- 5. Advanced Recipes
- 6. Camera Hardware
- 7. API Reference
- 8. Change log
- 9. License | https://picamera.readthedocs.io/en/release-1.2/index.html | 2019-10-14T04:30:16 | CC-MAIN-2019-43 | 1570986649035.4 | [] | picamera.readthedocs.io |
Objectives
- Deploy your first app on Kubernetes with kubectl.
Kubernetes Deployments
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. For our first Deployment, we'll use a Node.js application packaged in a Docker container. To create the Node.js application and deploy the Docker container, follow the instructions from the Hello Minikube tutorial.
Now that you know what Deployments are, let's go to the online tutorial and deploy our first app! | https://v1-12.docs.kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/ | 2019-10-14T03:51:48 | CC-MAIN-2019-43 | 1570986649035.4 | [] | v1-12.docs.kubernetes.io |
An Act to renumber and amend 973.20 (2); and to create 40.08 (1t) and 973.20 (2) (bm) of the statutes; Relating to: withholding from a Wisconsin Retirement System lump sum payment or annuity to satisfy an order of restitution.
Bill Text (PDF)
Wisconsin Ethics Commission information
2019 Senate Bill 233 - S - Available for Scheduling | https://docs.legis.wisconsin.gov/2019/proposals/ab257 | 2019-10-14T04:20:40 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.legis.wisconsin.gov |
Specify time modifiers in your search
When searching or saving a search, you can specify absolute and relative time ranges using the following attributes:
earliest=<time_modifier> latest=<time_modifier>
Specify absolute time ranges in your search.
If you don't specify a time offset before the "snap to" amount, Splunk interprets the time as "current time snapped to" the specified amount. For example, if it is currently 11:59 PM on Friday and you use @w6 to "snap to Saturday", the resulting time is the previous Saturday at 12:01 AM.
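As a hedged illustration of this syntax (not an excerpt from the product's own examples), a search constrained to the last 24 hours, snapped down to the hour, could use:

```
earliest=-24h@h latest=now
```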
Examples of relative time modifiers
For these examples, the current time is Wednesday, 05 February 2009, 01:37:05 PM.
For example 2 and 3, isn't there any mistake ? If we are Saturday or Sunday, example2 will lead to earliest="This Mon 12:00AM" (Good); latest="Sat next week 12:00AM" (Bad). I think latest should be replaced by @w1+5d. Same for example 3. | https://docs.splunk.com/Documentation/Splunk/6.1.12/Search/Specifytimemodifiersinyoursearch | 2019-10-14T04:20:06 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
When deploying to multiple environments (development, staging, production, etc.), you’ll likely want to deploy different configurations. Each environment/configuration should have its own file in dinnertime/settings and inherit from dinnertime.settings.base. A dev environment is provided as an example.
By default, manage.py and wsgi.py will use dinnertime.settings.local if no settings module has been defined. To override this, use the standard Django constructs (setting the DJANGO_SETTINGS_MODULE environment variable or passing in --settings=dinnertime.settings.<env>). Alternatively, you can symlink your environment’s settings to dinnertime/settings/local.py.
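As a sketch of the symlink approach (hypothetical paths, with dev.py standing in for whichever environment you deploy):

```shell
cd "$(mktemp -d)"                      # demo in a scratch directory
mkdir -p dinnertime/settings
touch dinnertime/settings/dev.py       # the environment's settings module
ln -sf dev.py dinnertime/settings/local.py
readlink dinnertime/settings/local.py  # -> dev.py
```

With the symlink in place, the default dinnertime.settings.local module simply resolves to the chosen environment.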
You may want to have different wsgi.py and urls.py files for different environments as well. If so, simply follow the directory structure laid out by dinnertime/settings, for example:
wsgi/ __init__.py base.py dev.py ...
The settings files have examples of how to point Django to these specific environments. | http://tablesurfin.readthedocs.io/en/latest/environments.html | 2017-10-17T05:39:25 | CC-MAIN-2017-43 | 1508187820927.48 | [] | tablesurfin.readthedocs.io |
While U-Boot can boot a variety of file types (ldr files, elf files, binary files), it includes support for its own special format - the U-Boot image format (or uImage).
The format stores information about the operating system type, the load address, the entry point, basic integrity verification (via CRC), compression types, and free description text. Some common bootable U-Boot images you will encounter are files typically named uImage or vmImage.
These images are operated on in directly addressable memory regions. So you can boot directly out of external memory or flash hooked up to the asynchronous memory banks. If the image is stored elsewhere (say serial or NAND flash), you'll have to boot indirectly.
For information on loading a bootable image into the board, please consult the loading files via TFTP or the loading files via the serial port documents. For information on writing an image into storage, please consult the appropriate page (see the U-Boot index).
The
iminfo command just displays information about the image stored at the specified memory address and verifies the checksum.
bfin> iminfo 0x1000000
## Checking
If you wish to actually boot an image, you give the bootm command the address in memory where the image is stored.
bfin> bootm 0x1000000
## Booting
Uncompressing Kernel Image ... OK
Starting Kernel at = 244000
Linux version 2.6.24.4-ADI-2008R2-pre-svn4516 (vapier@G5) (gcc version 4.1.2 (ADI svn)) #89 Mon Mar 31 16:07:02 EDT 2008
This does the following things:
Normally you do not need to create a boot image yourself, as the uClinux-dist and Linux kernel will generate appropriate images for you. But in case you need to, the command to use is called mkimage. You can either find this with the toolchain (just add the corresponding bfin-… prefix), or you can find it in the U-Boot source directory in the tools folder.
The help for the mkimage utility:
For example, if we want to create an image with a gzipped compressed kernel in it, we would just do:
$ gzip -9 vmlinux
$ mkimage \
    -A blackfin -O linux -T kernel \
    -C gzip \
    -n 'My Linux Image' \
    -a 0x1000 -e 0x1000 \
    -d vmlinux.gz vmImage

Here we create an image for the Blackfin Linux kernel that will load and boot at 0x1000. You'll obviously need to change these values for your own needs.
This screen is accessed from the back-end Joomla! administrator panel. It is used to create a new or edit an existing module.
The Extensions Module Manager Edit (New) allows editing an existing Module or creating a new Module by Module type.
For existing Modules, edit functions: At the top right you will see the toolbar:
The functions are:
For creating a New Module, new functions: At the top right you will see the toolbar:
The functions are: | http://docs.joomla.org/index.php?title=Help25:Extensions_Module_Manager_Edit&oldid=81491 | 2014-10-20T18:08:45 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.joomla.org |
Changes made by Andrew Eisenberg
Saved on Dec 19, 2012
...
This service refresh of Groovy-Eclipse provides significant improvements with the @CompileStatic annotation. Also, we have upgraded to Groovy 2.0.6. Other fixes and features are included as well. See all the details: Groovy-Eclipse 2.7.2 New and Noteworthy. (December 19, 2012)
It seemed like the way to go if I wanted to use Joomla with ZenCart.
Here's how I implemented this:
See also: Joomla ZenCart extensions | http://docs.joomla.org/index.php?title=Using_Joomla!_with_ZenCart&curid=2500&diff=77531&oldid=77530 | 2014-10-20T18:43:42 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.joomla.org |
Follow this simple guide to get started with Sencha Touch by building a small website app with a welcome page, a list, and a form.
Sencha Touch is all about device support, performance, ease of use, and great docs. Here's a guide to all that's new.
Dive right in and explore our useful documentation. | http://docs-devel.sencha.com/touch/2.2.1/ | 2014-10-20T17:54:50 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs-devel.sencha.com |
The 6th edition of the Groovy Weekly column is out!
Get the latest news of the Groovy ecosystem.
The Groovy Weekly 5th edition is out!
The big news of this edition is probably the ongoing success of Groovy, demonstrated by the tremendous growth in downloads for Groovy, from 1.7 million downloads in 2012 up to 3 million in 2013!
The "Groovy Weekly" column has launched!
On a weekly basis, you'll be able to get all the latest news, filtered and categorized, about the Groovy ecosystem, with information such as the latest releases, upcoming events, news about various projects of the Groovy world, job postings, interesting tweets or mailing-list posts, and more. If you want to get up-to-date on everything Groovy, that's what you'll have to read!
The first two editions have been published here:
Note that going forward, as we're working on a redesigned Groovy website, the blog posts will likely migrate to the future Groovy blog instead.
The release of the column should take place every week, and will be announced via different means: mailing list, Twitter, Google+, and on this blog. But if you prefer to receive this information directly in your inbox, you can subscribe to the Groovy Weekly newsletter.
Also, this column is yours! If you want to contribute news items, don't hesitate to do so via the contribution form.
We're looking forward to hearing about your feedback about this weekly column!
And on behalf of the Groovy team, we'd like to wish you a very Groovy New Year! | http://docs.codehaus.org/display/GROOVY/2014/01 | 2014-10-20T18:08:25 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.codehaus.org |
...
It seems like a lot of times there are good examples in the test/testcases/integration/ folder. We just need to paste an example or two here with a little explanation.
- Events - We need docs and recipes on using events, the "callable" type (see existing notes on Callable Types page), and the ICallable interface. There are good examples in tests/testcases/integration/events-*.boo, callables-*.boo, etc.
Groovy supports most of the method signatures you know from Java. Since version 1.5, Groovy supports full usage of generics in method declarations. Please see a Java tutorial for details.
General form
I will often show a more general method signature like
This is not valid Groovy code, it is just there to show that a method named foo has n+2 parameters. Usually this also means that n could be 0. In that case, the real resulting method signature would look like:
A form of
would mean n+m+2 parameters. Likewise a method call
is a method call with n+1 arguments. Note also that the word parameter is used when a method declaration is involved, and argument in the case of method calls. I will in general use the method name foo as a placeholder for a real method name. I will use T if a certain type is required. This could be Object, a primitive type, or any class you like to use.
Variable Arguments
Like Java (since Java 5), Groovy supports variable arguments. To use them you have to give a special method signature where the last parameter is an array.
Note: T[] or T... has to be the last parameter; this clashes with Closures.
Closures
See also Closures.
Groovy allows you to attach "blocks" to method calls like in:
To be able to attach a "block" in this way, the method signature must accept a Closure as its last parameter at runtime:
!args will be true if args is null or an array of length 0; args[-1] refers to the last element of the array.
Named Arguments
See Maps
Named arguments require the following method signature
, but work is done by the compiler for the method call. A method call
will always be transformed to
If you mix the positions of the pi and qi elements, then the compiler will still force the same transformed signature. Example:
The type T can be Object, Map, or no explicit type. In fact, other types are possible for T: any type compatible with the map class actually used at runtime can be used. But that class may change, for example from HashMap (Groovy 1.0) to LinkedHashMap (since Groovy 1.5), so it is only safe to use the less specialized types.
Note: the map is always the first argument
Combining named arguments with closures or variable arguments is no problem. You can make the map the first element of the variable arguments part, or you can let it have its own parameter - that is for you to decide.
Misc¶
These are sections that do not fit into the rest of the document.
CLI¶
Differences to CLIgen¶
Clixon adds some features and structure to CLIgen which include:
- A plugin framework for both textual CLI specifications(.cli) and object files (.so)
- Object files contains compiled C functions referenced by callbacks in the CLI specification. For example, in the cli spec command: a,fn(), fn must exist in the object file as a C function.
- The CLIgen treename syntax does not work.
- A CLI specification file is enhanced with the CLIgen variables CLICON_MODE, CLICON_PROMPT, CLICON_PLUGIN.
- Clixon generates a command syntax from the Yang specification that can be referenced as @datamodel. This is useful if you do not want to hand-craft CLI syntax for configuration syntax.
Example of @datamodel syntax:
set @datamodel, cli_set();
merge @datamodel, cli_merge();
create @datamodel, cli_create();
show @datamodel, cli_show_auto("running", "xml");
The commands (e.g. cli_set) will be called with an api-path to the referenced object as the first argument.
How to deal with large specs¶
CLIgen is designed to handle large specifications at runtime, but it may be difficult to handle large specifications from a design perspective.
Here are some techniques and hints on how to reduce the complexity of large CLI specs:
Sub-modes¶
The CLICON_MODE is used to specify in which modes the syntax in a specific file should be added. For example, if you have major modes configure and operation you can have a file with commands for only that mode, or files with commands in both, (or in all).
First, let's add a basic set in each:
CLICON_MODE="configure";
show configure;
Note that CLI command trees are merged so that show commands in other files are shown together. Thus, for example:
CLICON_MODE="operation:files";
show("Show") files("files");
will result in both commands in the operation mode:
> clixon_cli -m operation
cli> show <TAB>
  routing files
Sub-trees¶
You can also use sub-trees and the tree operator @. Every mode gets assigned a tree which can be referenced as @name. This tree can be either on the top level or a sub-tree. For example, create a specific sub-tree that is used as a sub-tree in other modes:
CLICON_MODE="subtree";
subcommand{
    a, a();
    b, b();
}
then access that subtree from other modes:
CLICON_MODE="configure";
main @subtree;
other @subtree,c();
The configure mode will now use the same subtree in two different commands. Additionally, in the other command, the callbacks will be overwritten by c. That is, if other a, or other b is called, callback function c will be invoked.
C-preprocessor¶
You can also add the C preprocessor as a first step. You can then define macros, include files, etc. Here is an example of a Makefile using cpp:
C_CPP = clispec_example1.cpp clispec_example2.cpp
C_CLI = $(C_CPP:.cpp=.cli)
CLIS  = $(C_CLI)

all: $(CLIS)

%.cli : %.cpp
	$(CPP) -P -x assembler-with-cpp $(INCLUDES) -o $@ $<
Automatic upgrades¶
There is an EXPERIMENTAL xml changelog feature based on “draft-wang-netmod-module-revision-management-01” (Zitao Wang et al) where changes to the Yang model are documented and loaded into Clixon. The implementation is not complete.
When upgrading, the system parses the changelog and tries to upgrade the datastore automatically. This feature is experimental and has several limitations.
You enable the automatic upgrading by registering the changelog upgrade method in
clixon_plugin_init() using wildcards:
upgrade_callback_register(h, xml_changelog_upgrade, NULL, 0, 0, NULL);
The transformation is defined by a list of changelogs. Each changelog defined how a module (defined by a namespace) is transformed from an old revision to a new. Example from auto upgrade test script:
<changelogs xmlns="">
  <changelog>
    <namespace>urn:example:b</namespace>
    <revfrom>2017-12-01</revfrom>
    <revision>2017-12-20</revision>
    ...
  </changelog>
</changelogs>
Each changelog consists of set of (ordered) steps:
<step>
  <name>1</name>
  <op>insert</op>
  <where>/a:system</where>
  <new><y>created</y></new>
</step>
<step>
  <name>2</name>
  <op>delete</op>
  <where>/a:system/a:x</where>
</step>
Each step has an (atomic) operation:
- rename - Rename an XML tag
- replace - Replace the content of an XML node
- insert - Insert a new XML node
- delete - Delete and existing node
- move - Move a node to a new place
A step has the following arguments:
- where - An XPath node-vector pointing at a set of target nodes. In most operations, the vector denotes the target node themselves, but for some operations (such as insert) the vector points to parent nodes.
- when - A boolean XPath determining if the step should be evaluated for that (target) node.
Extended arguments:
- tag - XPath string argument (rename)
- new - XML expression for a new or transformed node (replace, insert)
- dst - XPath node expression (move)
Step summary:
- rename(where:targets, when:bool, tag:string)
- replace(where:targets, when:bool, new:xml)
- insert(where:parents, when:bool, new:xml)
- delete(where:parents, when:bool)
- move(where:parents, when:bool, dst:node)
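To make the step semantics concrete, here is a toy Python sketch (not Clixon code, which is written in C) of applying a rename step to an XML tree. The where selector is simplified to an ElementTree path rather than a full XPath:

```python
import xml.etree.ElementTree as ET

def apply_rename(root, where, new_tag):
    """Toy 'rename' step: retag every node matched by the selector."""
    for target in root.findall(where):
        target.tag = new_tag

doc = ET.fromstring("<system><x>1</x></system>")
apply_rename(doc, "x", "y")
print(ET.tostring(doc, encoding="unicode"))  # <system><y>1</y></system>
```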
Extensions¶
Clixon implements YANG extensions. There are several uses, but one is to “annotate” a YANG specification with application-specific data that can be used in plugin code for some reason.
An extension with an argument is introduced in YANG as follows:
module example-lib {
    namespace "urn:example:lib";
    extension mymode {
        argument annotation;
    }
Such an extension can then be used in YANG declarations in two ways, either inline or augmented.
An inlined extension is useful in a YANG module that the designer has control over and can add extension reference directly in the YANG specification.
Assume for example that an interface declaration is extended with the extension declared above, as follows:
module my-interface {
    import example-lib{
        prefix exl;
    }
    container "interfaces" {
        list "interface" {
            exl:mymode "my-interface";
            ...
If you instead use an external YANG, where you cannot edit the YANG itself, you can use augmentation instead, as follows:
module my-augments {
    import example-lib{
        prefix exl;
    }
    import ietf-interfaces{
        prefix if;
    }
    augment "/if:interfaces/if:interface"{
        exl:mymode "my-interface";
    }
    ...
When this is done, it is possible to access the extension value in
plugin code and use that value to perform application-specific
actions. For example, assume an XML interface object x; the following code retrieves the annotation argument:
char *value = NULL;
yang_stmt *y = xml_spec(x);

if (yang_extension_value(y, "mymode", "urn:example:lib", &value) < 0)
    err;
if (value != NULL){
    // use extension value
    if (strcmp(value, "my-interface") == 0)
        ...
A more advanced usage is possible via an extension callback
(
ca_callback) which is defined for backend, cli, netconf and
restconf plugins. This allows for advanced YANG transformations. Please
consult the main example to see how this could be done.
High availability¶
This is a brief note on a potential future feature.
Clixon is mainly a stand-alone app tightly coupled to the application/device with “shared fate”, that is, if clixon fails, so does the application.
- That said, the primary state is the backend holding the configuration database that can be shared in several ways. This is not implemented in Clixon, but potential implementation strategies include:
- Active/standby: With a standard failure/liveness detection of a master backend, a standby could be started when the master fails using “-s running” (just picking up the state from the failed master). The default cache write-through can be used (CLICON_DATASTORE_CACHE = cache). Would suffer from outage during standby boot.
- Active/active: The config-db cache is turned off (CLICON_DATASTORE_CACHE = nocache) and two backend processes started with load-balancing in front. Turning the cache off would suffer from performance degradation (and it's not currently tested in regression tests). Would also need a failure/liveness detection.
In both cases the config-db would be a single-point-of-failure but could be mitigated by a replicated file system, for example.
- Regarding clients:
- the CLI and NETCONF clients are stateless and spun up on demand.
- the RESTCONF daemon is stateless and can run as multiple instances (with a load-balancer) | https://clixon-docs.readthedocs.io/en/latest/misc.html | 2021-02-25T08:24:05 | CC-MAIN-2021-10 | 1614178350846.9 | [] | clixon-docs.readthedocs.io |
ajModelError function
Description
The ajModelError function retrieves error records from the Model Error List. This function is normally used together with the ajRaiseError function.
Syntax
ajModelError([cell_reference], [error_code], [error_message])
The function will return:
1) Return Value: Records from Model Error List
2) Return Type: Multiple values (array formula)
Examples
Here are a few examples of ajModelError function.
Example 1
First, we use ajRaiseError to add two errors into the Model Error List.
We can then return all errors from the Model Error List by using ajModelError.
Click here to download the use case workbooks for further reference. | https://docs.alchemyj.io/docs/4.1/Reference/AJExtendedFunctions/ajModelError | 2021-02-25T07:28:38 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.alchemyj.io |
I downloaded the samples from GitHub, but apart from getting a Form with some Layout and Control details, I cannot find any way to try any of the examples.
How do I load/run any of the examples in the Solution? The only way I can figure out is to copy/paste the code into a new Project, and I'm sure that is not the intention of the sample application.
Does anyone know how? | https://docs.microsoft.com/en-us/answers/questions/38259/wpf-samples-how-to-use-the-example-projects-includ.html | 2021-02-25T09:24:29 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.microsoft.com |
Configuring Lights in LISY¶
Lights in LISY can be configured as lights using their number from the game manual.
This is an example:
lights:
  your_light:
    number: 03
This example is tested to be valid MPF config. However, it is not integration tested.
There are some features in the light list, like the game_over_relay, which are not real lights. Those can be configured as digital outputs.
See Configuring and Enabling Flippers/Pop Bumpers/Slingshots in LISY for details about the game_over_relay.
What if it did not work?¶
Have a look at our LISY troubleshooting guide. | https://docs.missionpinball.org/en/latest/hardware/lisy/lights.html | 2021-02-25T07:06:29 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.missionpinball.org |
Introduction
Access to the Developer Portal can be fine-tuned with the use of Developer Roles and Content Permissions, managed through the Dev Portal Permissions page of Kong Manager. This page can be found by clicking the Permissions link under Dev Portal in the Kong Manager navigation bar.
Roles
The Roles Tab contains a list of available developer roles as well as providing the ability to create and edit roles.
Selecting Create Role allows you to enter a unique role name, as well as a comment to provide context for the nature of the role. You can assign the role to existing developers from within the role creation page. Clicking Create saves the role and returns you to the Roles List view. There you can see your newly created role as well as any other previously defined roles.
Clicking View displays the Role Details page with a list of developers assigned.
From the Role Details page, click the Edit button to make changes to the role. You can also access this page from the Roles List Edit button. Here you can change the name and comment of the role, assign or remove developers, or delete the role.
Deleting a role will remove it from any developers assigned the role and remove the role restriction from any content files it is applied to.
Content
The Content Tab shows the list of content files used by the Dev Portal. You can apply roles to your content files, restricting access only to developers who possess certain roles. Selecting an individual content file displays a dropdown of available developer roles where you can choose which role has access to the file. Unchecking all available roles will leave the file unauthenticated.
An additional option, the
* role, is preset in the list. This predefined role
behaves differently from other roles. When a content file has the
* role
attached to it, any developer may view the page as long as they are
authenticated. Additionally, the
* role may not be used in conjunction with
other user-defined roles and will deselect those roles when
* is selected.
⚠️Important: The
dashboard.txt and
settings.txt content files are
assigned the
* role by default. All other content files have no roles by
default. This means that until a role is added, the file is unauthenticated
even if Dev Portal Authentication is enabled. Content Permissions are ignored
when Dev Portal Authentication is disabled. For more information, visit the
Dev Portal Authentication section.
readable_by attribute
When a role is applied to a content file using the Content Tab, a special
attribute
readable_by is added to the headmatter of the file.
--- readable_by: - role_name - another_role_name ---
In the case of spec files,
readable_by is applied under the key
x-headmatter or
X-headmatter.
x-headmatter: readable_by: - role_name - another_role_name
The value of
readable_by is an array of string role names that have access to
view the content file. An exception is when the
* role is applied to the
file. In this case, the value of
readable_by is no longer an array, because
it contains the single string character
*.
readable_by: "*"
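As an illustration only (a toy model, not Kong's implementation), the access rule described above can be summarized as:

```python
def can_view(developer_roles, readable_by, authenticated):
    """Toy model of the readable_by check (not Kong source code)."""
    if readable_by is None:        # no roles attached: file is unauthenticated
        return True
    if readable_by == "*":         # any authenticated developer may view
        return authenticated
    # otherwise the developer needs at least one matching role
    return authenticated and any(role in readable_by for role in developer_roles)

print(can_view(["red"], ["red", "blue"], True))   # True
print(can_view([], "*", False))                   # False
```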
⚠️Important: If you manually remove or edit the
readable_by attribute, it
will modify the permissions of the file. Attempting to save a content file with
a
readable_by array containing a nonexistent role name will result in an
error. Additionally, if you make changes to permissions in the Content Tab or
the Portal Editor, be sure to sync any local files so that permissions are not
overwritten the next time you push changes. | https://docs.konghq.com/enterprise/2.3.x/developer-portal/administration/developer-permissions/ | 2021-02-25T08:38:31 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.konghq.com |
Introduction
External plugins are those that run on a process separate from Kong Gateway itself, enabling the use of any programming language for which an appropriate plugin server is available.
Each plugin server hosts one or more plugins and communicates with the main Kong Gateway process through Unix sockets. If so configured, Kong Gateway can manage those processes, starting, restarting and stopping as necessary.
Kong Gateway currently maintains a Go language plugin server, go-pluginserver and the corresponding PDK library package, go-pdk.
Kong Gateway plugin server configuration
The pluginserver_names property is a comma-separated list of names, one for each plugin server process. These names are used to group each process' properties and to annotate log entries.
For each name, other properties can be defined:
For example, you could set up Go and Python plugins like this (assuming a hypothetical Python plugin server called pypluginserver.py):
pluginserver_names = go,python
pluginserver_go_socket = /usr/local/kong/go_pluginserver.socket
pluginserver_go_start_cmd = go-pluginserver -kong-prefix /usr/local/kong/ -plugins-dir /usr/local/kong/go-plugins
pluginserver_go_query_cmd = go-pluginserver -dump-all-plugins -plugins-dir /usr/local/kong/go-plugins
pluginserver_python_socket = /usr/local/kong/python_pluginserver.socket
pluginserver_python_start_cmd = pypluginserver.py
pluginserver_python_query_cmd = pypluginserver.py -dump
Legacy configuration
Kong Gateway versions 2.0.x to 2.2.x supported only Go external plugins and a single plugin server using a different configuration style. Starting with Kong Gateway version 2.3, the old style is recognized and internally transformed to the new style.
If the property pluginserver_names isn't defined, the legacy properties go_plugins_dir and go_pluginserver_exe are tried:
Notes:
- The old style doesn’t allow multiple plugin servers.
- Version 0.5.0 of go-pluginserver requires the old style configuration.
- The new style configuration requires v0.6.0 of go-pluginserver
Developing Go plugins
Kong Gateway support for the Go language consist of two parts:
- go-pdk as a library, provides Go functions to access Kong Gateway features of the PDK.
- go-pluginserver an executable to dynamically load plugins written in Go.
Notes:
Kong Gateway version 2.3 allows multiple plugin servers; in particular, it's now possible to write single-plugin servers, in effect plugins as microservices. To help with this, version v0.6.0 of the go-pdk package includes an optional plugin server. See Embedded Server for more information.
The go-pluginserver process is still supported. Its main advantage is that it’s a single process for any number of plugins, but the dynamic loading of plugins has proven challenging under the Go language (unlike the microservice architecture, which is well supported by the language and tools).
Development
To write a Kong Gateway plugin in Go, you need to:
- Define a structure type to hold configuration.
- Write a New() function to create instances of your structure.
- Add methods on that structure to handle phases.
If you want a dynamically-loaded plugin to be used with go-pluginserver:
- Compile your Go plugin with go build -buildmode plugin.
- Put the resulting library (the .so file) into the go_plugins_dir directory.
If you want a standalone plugin microservice:
- Include the go-pdk/server sub-library.
- Add a main() function that calls server.StartServer(New, Version, Priority).
- Compile as an executable with go build.
Note: Check out this repository for example Go plugins.
1. Configuration Structure
Plugins written in Lua define a schema to specify how to read and validate configuration data coming from the datastore or the Admin API. Since Go is a statically-typed language, plugins written in Go instead define their configuration as a structure type. There are no guarantees about the lifetime or quantity of configuration instances.
3. Phase Handlers
Similarly to Kong Gateway Lua plugins, you can implement custom logic to be executed at various points of the request processing lifecycle. For example, to execute custom Go code in the access phase, define a function named Access:
func (conf *MyConfig) Access (kong *pdk.PDK) { ... }
The phases you can implement custom logic for are as follows, and the expected function signature is the same for all of them:

- Certificate
- Rewrite
- Access
- Response
- Preread
- Log
Similar to Lua plugins, the presence of the Response handler automatically enables the buffered proxy mode.
Embedded server
Each plugin can be a microservice, compiled as a standalone executable.
To use the embedded server, include github.com/Kong/go-pdk/server in the imports list, and add a main() function:
func main () { server.StartServer(New, Version, Priority) }
Note that the main() function must have a package main line at the top of the file.
Then, a standard Go build creates an executable. There is no separate go-pluginserver, no plugin loading, and no compiler/library/environment compatibility issues.
The resulting executable can be placed somewhere in your path (for example, /usr/local/bin). The common -h flag shows a usage help message:
$ my-plugin -h
Usage of my-plugin:
  -dump
        Dump info about plugins
  -help
        Show usage info
  -kong-prefix string
        Kong prefix path (specified by the -p argument commonly used in the Kong CLI) (default "/usr/local/kong")
When run without arguments, it will create a socket file with the kong-prefix and the executable name, appending .socket. For example, if the executable is my-plugin, it would be /usr/local/kong/my-plugin.socket by default.
Example configuration
Two standalone plugins, called my-plugin and other-one:
pluginserver_names = my-plugin,other-one
pluginserver_my_plugin_socket = /usr/local/kong/my-plugin.socket
pluginserver_my_plugin_start_cmd = /usr/local/bin/my-plugin
pluginserver_my_plugin_query_cmd = /usr/local/bin/my-plugin -dump
pluginserver_other_one_socket = /usr/local/kong/other-one.socket
pluginserver_other_one_start_cmd = /usr/local/bin/other-one
pluginserver_other_one_query_cmd = /usr/local/bin/other-one -dump
Note that the socket and start command settings coincide with their defaults, so they can be omitted:
pluginserver_names = my-plugin,other-one
pluginserver_my_plugin_query_cmd = /usr/local/bin/my-plugin -dump
pluginserver_other_one_query_cmd = /usr/local/bin/other-one -dump
This is a fundamental module in Spotlightr, in that it allows you to manage your videos and group them into folders.
The List View is opened by default and it will show you all your videos. There's a Search bar which delivers results as you type, as well as an icon to Manage Groups (folders). And Options at the bottom allow you to change the number of displayed videos in the current view.
You can also create a new group within Manage Groups:
It is also possible to delete a group from the same menu:
To add multiple videos to a group, just select the desired ones, choose the appropriate group from the drop-down menu and click on Group Videos at the top.
torch.utils.dlpack.from_dlpack(dlpack) → Tensor
Decodes a DLPack to a tensor.
dlpack – a PyCapsule object with the dltensor
The tensor will share the memory with the object represented in the dlpack. Note that each dlpack can only be consumed once.
torch.utils.dlpack.to_dlpack(tensor) → PyCapsule
Returns a DLPack representing the tensor.
tensor – a tensor to be exported
The dlpack shares the tensor's memory. Note that each dlpack can only be consumed once.
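For illustration, a minimal round trip (the variable names are arbitrary); the capsule created by to_dlpack is consumed by the from_dlpack call and cannot be reused:

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

t = torch.arange(4, dtype=torch.float32)
capsule = to_dlpack(t)        # PyCapsule wrapping the same memory
t2 = from_dlpack(capsule)     # new tensor sharing that memory

t2[0] = 42.0                  # writes are visible through both tensors
print(t[0].item())            # 42.0
```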
© 2019 Torch Contributors
Licensed under the 3-clause BSD License. | https://docs.w3cub.com/pytorch/dlpack | 2021-02-25T08:41:54 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.w3cub.com |
Notification of critical security issue in BMC Server Automation, CVE-2016-1542, CVE-2016-1543
BMC Software is alerting users to a security problem in the RSCD agent on UNIX® and Linux.
This topic includes the following sections:
The video at right demonstrates how to apply the component template-based hotfix for this issue.
Overview
Assigned CVE-IDs: CVE-2016-1542, CVE-2016-1543
Alert
This security problem and a description of its exploitation will be disclosed publicly by a third party security research firm (ERNW GmbH) in a “Lightning Talk” at the Troopers conference in Germany on March 16th, 2016. We strongly urge you to follow the instructions in this notification as early as possible.
There is a patch available to prevent the problem from occurring in all BMC-supported versions of the RSCD agent on UNIX and Linux platforms (8.2.x, 8.3.x, 8.5.x, 8.6 - 8.6 SP1 Patch 1, and 8.7 - 8.7 P2). (see Solution).
If you use an unsupported version of the RSCD agent, you should upgrade to a supported version and apply the patch as soon as possible to avoid exposure to this security vulnerability. In the meantime. there are steps you can take to minimize your exposure prior to applying the patch (see Minimizing exposure to the problem).
If you have any questions about the problem, contact BMC Software Customer Support at 800 537 1813 (United States or Canada) or call your local support center.
Problem
A security authentication vulnerability involving unauthorized host access has been identified. This vulnerability allows remote unauthorized access to the UNIX target server by using the Remote Procedure Call (RPC) API of the RSCD Agent. Due to the severity of this vulnerability, BMC strongly recommends that customers apply the updates provided by this flash as soon as possible.
Note
The issue is fixed in BMC Server Automation 8.6 SP1 Patch 2 and in version 8.7 P3, and also in version 8.8.
In this specific case, the agents upgraded to version 8.7 Patch 3 are qualified to work with the version 8.7 Patch 2 Application Server.
Minimizing exposure to the problem
To minimize your exposure prior to applying the patch, BMC recommends the following:
- Configure the host-based firewall on systems running the RSCD agent to only accept communication from the BMC Server Automation infrastructure (Application Server, Repeater, SOCKS Proxy)
- Route any NSH client connections through a NSH Proxy (which runs on the application server)
- Configure any SOCKS proxies in the environment to only accept connections from the BMC Server Automation Application Server(s).
Note
Configuring the RSCD exports file to allow connections from specific hosts will not mitigate the threat.
Solution
The fix for the issue is accomplished using a BMC Server Automation Compliance Template.
Note
You need to apply the fix to all existing affected agents, as well as any new agents of impacted versions you deploy in the future.
Or, you can upgrade the agents to version 8.6 SP1 Patch 2, 8.7 Patch 3, or version 8.8, all of which contain the fix.
You can download the zip file containing the Compliance Template by following the instructions in Knowledge Article 000102932.
Note that an updated version of the original Component Template was uploaded on 11/21/2016. The updated version has V6 at the end of the file name (for example, BMCHotFixForCVE-2016-1542_CVE-2016-1543-V6.zip). Version 6 is the latest version of the fix.
See the following items to review the updates that have been added to the fix since the initial release.
V2 contains the following fixes/enhancements over the original version:
- Checksums are now gathered via an Extended Object to avoid issues gathering the checksum on BMC Server Automation version 8.5.1 (pre Patch 5).
- Added an UNDO functionality to all BLPackages to allow the changes to be rolled-back.
- Updated the Agent Restart logic to help avoid restart issues on some platforms including HP-UX and Solaris.
V3 contains the following additional fixes/enhancements:
- Determine if 'at' is available, and if so use it to start the file switch with a restart value of now + 1 minute; otherwise, use a 'su -' command with a sleep setting of 60 seconds.
- Perform the copy of the files while the agent is down.
- Directly kill the RSCD processes using logic from the version 8.5+ init script, instead of trying to call the existing init. This update was added to handle older agents that have a symlink in their install path.
V4 contains the following additional fixes/enhancements:
- Corrected issue checking "at".
- Fix for AIX issue (QM001882081) with the original fix libraries.
- Exclude version 8.7 Patch 3 and 8.8 agents (which have the fix out of the box).
- Fix issues with run_cve_fix.sh not working on AIX (quoted paths).
V5 contains the following additional fixes/enhancements:
- Excludes 8.6.01 Patch 2 from the checks as this version has the fix.
V6 contains the following additional fixes/enhancements:
- Corrected issue with the "at" check made in V4.
Frequently asked questions
This issue is specific only to the RSCD Agent on Unix and Linux platforms. Windows RSCD Agents are not affected.
This issue applies to all RSCD Agent versions, up to and including version 8.6 SP1 Patch 1 and 8.7 Patch 2.
The issue is fixed in BMC Server Automation version 8.6 SP1 Patch 2, 8.7 Patch 3, and also in version 8.8.
All supported versions (versions 8.2 to 8.6 SP1 Patch 1, and 8.7 - 8.7 P2) are remediated by the fix.
The single zip file handles all agent versions from 8.2 through 8.7 P2. One set of files handles agent versions 8.2.00 through 8.5.0; the other set of files handles 8.5.01 agents and above. The correct fix is automatically placed on the agent during remediation.
Credit
BMC would like to thank the researchers at ERNW GmbH for disclosing this vulnerability.
Where to go for additional information
Check the BMC Application Security community page for the latest information about this vulnerability.
If you have any questions about the issue, contact BMC Customer Support at 800 537 1813 (United States or Canada) or call your local support center.
Important notice and solution guidance regarding the security vulnerability in BMC BladeLogic Server Automation
Tokenization rules used in FTS
Tokenization is the process of breaking words into discrete tokens to insert them into an index and to search on the tokens. Following are the basic rules of tokenization used in FTS for indexing and searching.
For literal fields, the content of the field is treated as a single token with no modification. For example, x-y=z becomes x-y=z (one word). All the text constitutes the token or word stored in the index. It can be found in a search only by supplying the exact matching string. That is, searching for y does not find a match, but searching for x-y=z does find a match.
For non-literal fields, the following rules apply:
- Words are split at punctuation characters, and punctuation is removed. However, a dot that is not followed by white space is considered part of a token.
- one:two becomes one two (two words).
- Alpha#Omega becomes Alpha Omega (two words).
- x.y.z becomes x.y.z (one word).
- Words are split at hyphens unless the token contains a number, in which case the whole token is interpreted as a product number and is not split.
- x-y=z becomes x y z (three words).
- KX-13AF9 becomes KX-13AF9 (one word).
- Email addresses and internet host names are recognized as one token.
- [email protected] becomes [email protected] (one word).
- becomes (one word).
- In words with no spaces, the ampersand (&) is retained.
- Smith&Brown becomes Smith&Brown (one word).
- Words are split at some of the special characters, and the special character is removed.
- [email protected] becomes someone bmc.com (two words). However, if you enter [email protected] as the search term, the exact match gets the highest rank.
- pqr=hij becomes pqr hij (two words). Exact match gets the highest rank.
- Alpha#Omega becomes Alpha Omega (two words). Exact match gets the highest rank.
- Smith&Brown becomes Smith Brown (two words). Exact match gets the highest rank.
- Words are split at hyphens; the hyphen is treated as a token separator.
- abc-def=xyz becomes abc def xyz (three words). Exact match gets the highest rank.
- KX-13AF9 becomes KX 13AF9 (two words). Exact match gets the highest rank.
- Certain special characters are considered non-tokenizers, and the word remains a single word.
- one:two becomes one:two (one word). The search term one does not match the text, but one:two matches the text.
- becomes (one word). | https://docs.bmc.com/docs/ars1808/tokenization-rules-used-in-fts-820497119.html | 2021-02-25T07:51:44 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.bmc.com |
Installing Drift Management with the CLI installer
This procedure describes how to install Drift Management using the command-line interface (CLI) installer.
Run the Drift Management installer executable on the server that is running AR System, based on your operating system.
The location from which you run this executable file depends on how you received the application.
If you are installing on Windows:
- If your files are on a DVD, type BMCCDMSetup.exe -i console from a command line, and then press ENTER.
If you downloaded the application from the BMC Electronic Product Distribution (EPD) site, access the executable file from the top level of the working directory. This is the directory that you created to hold the files you extracted after downloading them.
From the directory's root folder where you downloaded the application, type BMCCDMSetup.exe -i console from a command line, and then press ENTER.
Note
After a brief interval, the Drift Management installer program appears in a new console window.
After some brief initialization messages appear, the Introduction prompt appears. The Drift Management installer also checks the available temp space on your system. If sufficient temp space is not available, the installation stops.
If you are installing on Linux:
- Run a Telnet session to the Linux server where you are installing Drift Management.
- Mount the DVD on the Linux system.
Or, if you are installing files from the EPD site, change to the working directory that contains the application files you downloaded, and uncompress them.
- From the directory's root folder where you downloaded the application, ensure that you have full read, write, and execute access to the BMCCDMSetup.bin executable.
- Type ./BMCCDMSetup.bin -i console from a command line, and then press ENTER.
After some brief initialization messages appear, the Introduction prompt appears.
- Press ENTER to continue with the installation.
- At the licensing prompt, press ENTER and continue to press ENTER to scroll through the licensing agreement.
- Type Y and press ENTER to accept the terms of the BMC license agreement.
- At the Product Feature Selection prompt, type 1 and press ENTER to install the BMC Configuration Drift Management application.
- At the AR System Server Settings prompt, enter the AR System server settings:
- Type 1 and then press ENTER.
If your system has multiple instances of the AR System server installed on it, a list of available instances is provided. Type the number that corresponds to the instance (for example, 1 for server instance MPL-ESX-VM29) and press ENTER.
- At the TCP port for the selected AR System server prompt, press ENTER to accept the default value.
If Portmapper was not used when the AR System server was installed, or if you want to specify a port explicitly to go through a firewall, type the TCP value and press ENTER.
- At the User Name prompt, type the AR System server administrator's ID and press ENTER. The default ID for new installations is Demo, with no password.
At the Password prompt, type the AR System server administrator's password and press ENTER.
The installer validates if your AR System server user input settings are correct. If they are not, you must enter the correct information before the installation can continue.
- At the Drift Management Components Selection prompt, select one or more of the following Drift Management options:
- Type 1 and then press ENTER to install the Drift Management application on your system.
- Type 2 and then press ENTER to integrate Drift Management with BMC Remedy Change Management. The installer checks the system dependencies and does not let you select this option if Drift Management is not already installed on your system.
- Type 3 and then press ENTER to integrate Drift Management with BMC Remedy Incident Management. The installer checks the system dependencies and does not let you select this option if Drift Management is not already installed on your system.
To install different combinations, type the numbers as comma-separated values (for example, 1,2,3) and then press ENTER.
Note
If the installer detects that you have already installed these components, you are prompted to reinstall them.
At the following prompts, perform an appropriate action:
Choose Install Folder - Press ENTER to accept the default directory, or type the path to the installation directory to use.
- Prerequisites - If your system passes these basic prerequisites, press ENTER to move to Application Requirements Check. If your system fails, you must quit the installer, fix the problems, and then rerun the installer.
- Application Requirements Check - Press ENTER if your system has met the basic software requirements. If your system fails, you must fix the problems before you can continue.
- If you are sure that your system has passed the hardware and software prerequisites, press ENTER to continue.
The verification takes place.
Press ENTER to continue.
Pre-Install Important Note - Press Y to continue.
- Pre-Installation Summary - Review the summary information. If you are satisfied that the information is correct, press ENTER to continue with the actual installation.
The installer shows you the progress of the applications and components as they are being installed. After all files are copied and the workflow is loaded, the Install Complete prompt appears.
- Press ENTER to exit the installer.
The installer creates the BMC_Configuration_Drift_Management_InstallLog.log file. Review this file and the other installation logs for any error or warning messages.
- When the installation process is finished, including a reinstallation of Drift Management, use the BMC Remedy Mid Tier Configuration Tool to flush the mid tier cache.
For more information, see BMC Remedy Mid Tier configuration in BMC Remedy AR System online documentation.
- Stop and restart the web server.
- Open Drift Management.
For information, see Opening Drift Management from the IT Home Page. | https://docs.bmc.com/docs/brid1908/installing-drift-management-with-the-cli-installer-879731515.html | 2021-02-25T07:43:51 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.bmc.com |
Balancing data across disks of a DataNode
The HDFS Disk Balancer is a command-line tool that evenly distributes data across all the disks of a DataNode to ensure that the disks are effectively utilized. Unlike the HDFS Balancer that balances data across DataNodes of a cluster, the Disk Balancer balances data at the level of individual DataNodes.
Disks on a DataNode can have an uneven spread of data because of reasons such as large amount of writes and deletes or disk replacement. To ensure uniformity in the distribution of data across disks of a specified DataNode, the Disk Balancer operates against the DataNode and moves data blocks from one disk to another.
For a specified DataNode, the Disk Balancer determines the ideal amount of data to store per disk based on its total capacity and used space, and computes the amount of data to redistribute between disks. The Disk Balancer captures this information about data movement across the disks of a DataNode in a plan. On executing the plan, the Disk Balancer moves the data as specified. | https://docs.cloudera.com/runtime/7.2.7/scaling-namespaces/topics/hdfs-balancing-data-across-disks-DataNode.html | 2021-02-25T08:44:27 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.cloudera.com |
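The per-disk computation described above can be sketched numerically. This is an illustrative model of the arithmetic only, not the actual Disk Balancer implementation: the node-wide used fraction defines each disk's ideal usage, and the difference against actual usage is the amount of data to redistribute.

```python
def plan_moves(disks):
    """disks: {name: (capacity_bytes, used_bytes)}.
    Returns bytes that should move onto (+) or off (-) each disk."""
    total_capacity = sum(cap for cap, _ in disks.values())
    total_used = sum(used for _, used in disks.values())
    node_density = total_used / total_capacity  # node-wide used fraction

    moves = {}
    for name, (capacity, used) in disks.items():
        ideal = capacity * node_density  # ideal used bytes for this disk
        moves[name] = ideal - used       # positive means data should move in
    return moves

# Example: two 1 TB disks, one nearly full and one nearly empty.
TB = 10**12
print(plan_moves({"disk0": (TB, 900 * 10**9), "disk1": (TB, 100 * 10**9)}))
# -> {'disk0': -400000000000.0, 'disk1': 400000000000.0}
```

Because the moves sum to zero, the sketch only rebalances existing data; it never changes the total amount stored on the DataNode.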
GitHub project integration.
Settings
Complete. | https://gitlab-docs.creationline.com/ee/user/project/integrations/github.html | 2021-02-25T07:39:03 | CC-MAIN-2021-10 | 1614178350846.9 | [] | gitlab-docs.creationline.com |
3.4.5.1.8 Removing Virtual Disk Objects
The server MUST maintain a list of virtual disks. Virtual disks SHOULD be removed when all of the clients release their reference to the virtual disk object. The server MUST also detect whether the basic, dynamic, or unallocated disk that has been removed is a virtual disk and remove the corresponding virtual disk object. The mechanism of detection is implementation-specific.<69>
The server MUST also maintain a list of OpenVirtualDisk objects. An OpenVirtualDisk object can be removed when all the clients release their reference to the OpenVirtualDisk object. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-vds/6d9c0828-129c-42b3-a731-f13a2f176ef6 | 2021-02-25T09:31:34 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.microsoft.com |
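The reference-counting behavior described above can be modeled with a small sketch. This is an illustrative model of the bookkeeping only, not part of the protocol specification; the class and method names are hypothetical.

```python
class ObjectTable:
    """Tracks per-object client references; removes an object at zero refs."""

    def __init__(self):
        self.refs = {}  # object id -> reference count

    def add_ref(self, obj_id):
        self.refs[obj_id] = self.refs.get(obj_id, 0) + 1

    def release(self, obj_id):
        self.refs[obj_id] -= 1
        if self.refs[obj_id] == 0:
            # All clients have released their reference: remove the object,
            # as the spec allows for OpenVirtualDisk objects.
            del self.refs[obj_id]

virtual_disks = ObjectTable()
virtual_disks.add_ref("vdisk-1")
virtual_disks.add_ref("vdisk-1")   # a second client opens the same disk
virtual_disks.release("vdisk-1")   # still referenced by the first client
print("vdisk-1" in virtual_disks.refs)  # -> True
virtual_disks.release("vdisk-1")
print("vdisk-1" in virtual_disks.refs)  # -> False
```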
Customizing tax calculation
The Kentico E-commerce Solution enables you to modify how the system calculates taxes. For example, you can:
- Integrate 3rd-party tax calculation services
- Create custom tax application rules according to your needs
- Customize how the system gets tax rates for countries and states
- Create custom logic for exempting users from taxes
- Customize how taxes affect prices displayed in product catalogs on the live site
To customize the Kentico tax functionality:
- Open your Kentico project in Visual Studio.
Create new classes that implement one or more of the tax customization interfaces, and register your implementations so that the system uses them instead of the default tax functionality.
Tax customization interfaces
All mentioned interfaces can be found in the CMS.Ecommerce namespace.
Tax calculation
The system uses tax calculation to get exact tax numbers in the checkout process (shopping carts, orders). A slight delay is acceptable when producing tax calculation results, so implementations using external tax calculation services are suitable.
- ITaxCalculationService
ITaxCalculationServiceFactory – serves a separate instance of a class implementing ITaxCalculationService for each site. Must be implemented to use custom ITaxCalculationService implementations.
Implementations of ITaxCalculationService must contain the CalculateTaxes method. Calculate the taxes using the data from the method's TaxCalculationRequest parameter, which provides the following properties:
- Items – a list of TaxItem objects representing all purchased items (each TaxItem provides the SKUInfo representation, quantity and total price).
- ShippingPrice – the price of the selected shipping option without tax.
- Shipping – ShippingOptionInfo representation of the selected shipping option.
- TaxParameters – a container providing the customer (CustomerInfo), currency, date of purchase, shipping/billing address, and identifier of the site for which the taxes are calculated.
- Discount – the total value of all order discounts applied to the shopping cart or order (as a calculated decimal amount, not a percentage). Other types of discounts are included in the price of individual TaxItem objects.
The CalculateTaxes method must return a TaxCalculationResult object that contains:
- The final tax amount for all purchased items, set in the ItemsTax property
- The final tax amount calculated for the shipping costs, set in the ShippingTax property (if the shipping costs are not zero)
- A collection of tax name and value pairs providing details about the applied taxes in the Summary property (for example, provides data for tax summaries in invoices)
See: Example - Using an external service to calculate taxes
Note: The default implementation in Kentico is the same for both tax calculation and tax estimation (see below).
Tax estimation
The system uses tax estimation to get taxes in product catalogs (for example when showing what portion of a product's price is tax) and in shipping option selectors. Implementations should produce results quickly – external tax calculation services are not recommended for the purpose of estimation.
- ITaxEstimationService – implementations must contain the following methods:
- GetTax – returns a tax value (decimal) for a specified price, tax class (taxation type) and TaxEstimationParameters (address and currency).
- ExtractTax – returns a tax value (decimal) for a price that already includes tax, based on a specified price, tax class (taxation type) and TaxEstimationParameters (address, currency, identifier of the site for which the taxes are estimated). By default, this method is used for sites that have the Prices include tax setting enabled.
Taxes in catalog prices
By default, the system displays prices in catalogs on the live site in the same way that price values are entered for products (with or without tax, depending on the site's Prices include tax setting).
You can customize how taxes affect catalog prices if the default options do not fulfill the needs of your store. For example, you can enter and store product prices without tax, but customize the system to display prices including tax on the live site.
- ICatalogTaxCalculator – implementations must contain the ApplyTax method, which processes a specified product and price based on tax parameters.
- The supplied tax parameters contain the customer (CustomerInfo), currency, date of purchase, shipping/billing address, and identifier of the site for which the taxes are calculated.
- The ApplyTax method must return the price that will be displayed in live site catalogs and set the applied tax value into an out parameter.
- ICatalogTaxCalculatorFactory – serves a separate instance of a class implementing ICatalogTaxCalculator for each site. Must be implemented to use custom ICatalogTaxCalculator implementations.
Custom implementations of the catalog tax calculator only impact the prices of products displayed on the live site (when using the appropriate e-commerce transformation methods). This type of customization does not affect the calculation of total prices during the checkout process.
See: Example - Adjusting catalog prices based on taxes
Address used in tax calculations
You can customize how the system gets the address used in tax calculations. For example, this type of customization allows you to determine whether the system calculates taxes based on the billing or shipping address (or another custom address). Note that this type of customization overrides the system's Apply taxes based on setting.
- ITaxAddressService – implementations must contain the GetTaxAddress method, which returns an address for tax calculation based on a specified billing address, shipping address, tax class (taxation type) and customer.
- ITaxAddressServiceFactory – serves a separate instance of a class implementing ITaxAddressService for each site. Must be implemented to use custom ITaxAddressService implementations.
Tax type per product or shipping option
You can customize how the system retrieves Tax classes for products or shipping options if you do not wish to assign the tax classes manually in the administration interface (or if you wish to override the tax class assignments in a custom way).
- ITaxClassService – implementations must contain two overloads of the GetTaxClass method, which return a tax class object (taxation type) for a specified product or shipping option.
Tax rates per country or state
If you do not wish to use the default configuration options of Tax classes, you can customize how the system loads tax rate values for countries and states.
- ICountryStateTaxRateSource – implementations must contain the GetRate method, which returns a tax rate for a specified tax class (taxation type), country and state.
Tax exemptions for customers
You can customize the system to create tax exemptions for certain types of customers.
Default tax exemptions
By default, customers who have a tax ID specified in their Company details are exempt from Tax classes with the Zero tax if tax ID is specified property enabled.
Registering a custom ICustomerTaxClassService implementation overrides the default tax exemption.
- ICustomerTaxClassService – implementations must contain the GetTaxClass method, which determines whether taxes apply to a specified customer.
See: Example - Create tax exemptions for customers
Example – Using an external service to calculate taxes
The following example demonstrates how to customize the system to integrate with a 3rd-party tax calculation service. In this case, the tax values are calculated manually within the custom code, but you can replace the basic calculations with calls to the API of a dedicated tax calculation service.
Prepare a separate project for custom classes in your Kentico solution.
Continue by creating custom implementations of the ITaxCalculationService and ITaxCalculationServiceFactory interfaces:
Add a new class under the custom project, implementing the ITaxCalculationService interface.
using System.Linq;
using CMS.Ecommerce;
using CMS.Globalization;

public class CustomTaxCalculationService : ITaxCalculationService
{
    /// <summary>
    /// Calculates all taxes for a given purchase.
    /// </summary>
    public TaxCalculationResult CalculateTaxes(TaxCalculationRequest taxRequest)
    {
        var result = new TaxCalculationResult();

        // Sums up the total price of all purchased items
        decimal itemsTotal = taxRequest.Items.Sum(item => item.Price);

        // Subtracts the overall order discount value from the total price
        itemsTotal = itemsTotal - taxRequest.Discount;

        // Calculates the tax for the purchased items
        // The example only provides a very basic "calculation" based on the country in the billing address
        // You can use any type of custom calculation, e.g. call the API of a 3rd-party tax calculation service that returns the calculated taxes
        result.ItemsTax = CalculateCustomTax(itemsTotal, taxRequest.TaxParameters.BillingAddress);

        // Adds the tax value to the returned tax summary
        // The name parameter describes the type of the applied tax (may be returned by the external tax calculation service)
        // Taxes added under the same name via the 'Sum' method are automatically summed up into the existing tax summary item
        result.Summary.Sum("Custom tax", result.ItemsTax);

        // Adds a basic shipping tax based on the shipping price
        decimal shippingTaxRate = 0.15m;
        result.ShippingTax = taxRequest.ShippingPrice * shippingTaxRate;
        result.Summary.Sum("Custom shipping tax", result.ShippingTax);

        return result;
    }

    /// <summary>
    /// Calculates the tax for a specified price. Replace this method with any type of custom calculation.
    /// </summary>
    private decimal CalculateCustomTax(decimal price, AddressInfo billingAddress)
    {
        // Gets a base tax rate, with a different value if the billing address is in the USA
        decimal taxRate = 0.15m;

        if (billingAddress != null)
        {
            // Gets the 3-letter country code of the country in the billing address
            var country = CountryInfoProvider.GetCountryInfo(billingAddress.AddressCountryID);
            string countryCode = country?.CountryThreeLetterCode;

            if (countryCode == "USA")
            {
                taxRate = 0.1m;
            }
        }

        return (price * taxRate);
    }
}
Add a new class under the custom project, implementing the ITaxCalculationServiceFactory interface.
using CMS;
using CMS.Ecommerce;

// Registers the custom implementation of ITaxCalculationServiceFactory
[assembly: RegisterImplementation(typeof(ITaxCalculationServiceFactory), typeof(CustomTaxCalculationServiceFactory))]

public class CustomTaxCalculationServiceFactory : ITaxCalculationServiceFactory
{
    // Provides an instance of the custom tax calculation service
    public ITaxCalculationService GetTaxCalculationService(int siteId)
    {
        // This basic sample does not parameterize the tax calculation service based on the current site
        // Use the method's 'siteId' parameter if you need different tax calculation logic for each site
        return new CustomTaxCalculationService();
    }
}
- Save all changes and Build the custom project.
The registered CustomTaxCalculationServiceFactory class provides an instance of CustomTaxCalculationService. The tax calculation in all shopping carts (and orders) now uses the custom logic. The customizations do not affect tax estimations in product catalogs.
Example – Creating tax exemptions for customers
The following example demonstrates how to create a customization that adds tax exemptions for certain types of customers.
- Recreate or reuse the custom project from the custom tax calculation example.
- Create a new class under the custom project, implementing the ICustomerTaxClassService interface.
- Save all changes and Build the custom project.
using CMS;
using CMS.Ecommerce;

// Registers the custom implementation of ICustomerTaxClassService
[assembly: RegisterImplementation(typeof(ICustomerTaxClassService), typeof(CustomCustomerTaxClassService))]

public class CustomCustomerTaxClassService : ICustomerTaxClassService
{
    /// <summary>
    /// Returns a CustomerTaxClass object for the specified customer to determine whether they are subject to tax.
    /// The default value is CustomerTaxClass.Taxable.
    /// </summary>
    public CustomerTaxClass GetTaxClass(CustomerInfo customer)
    {
        // Creates a tax exemption for customers who have a value filled in within one of the 'Company details' fields
        if ((customer != null) && (customer.CustomerHasCompanyInfo))
        {
            return CustomerTaxClass.Exempt;
        }
        else
        {
            // All other customers are subject to tax
            return CustomerTaxClass.Taxable;
        }
    }
}
Example – Adjusting catalog prices based on taxes
The following example demonstrates how to customize the system to store product prices without tax, but display prices on the live site with tax already included.
The example assumes that your site is configured to not include tax in prices:
- Open the Store configuration or Multistore configuration application in the Kentico administration interface.
- Navigate to the Store settings -> General tab.
- In the Taxes section, make sure that:
- you have a Default country set (and that your tax classes have values assigned for the given country)
- the Prices include tax setting is disabled
- Click Save if necessary.
Continue by creating custom implementations of the ICatalogTaxCalculator and ICatalogTaxCalculatorFactory interfaces:
- Open your Kentico solution in Visual Studio.
- Recreate or reuse the custom project from the custom tax calculation example.
Add a new class under the custom project, implementing the ICatalogTaxCalculator interface.
using CMS.Ecommerce;

public class CustomCatalogTaxCalculator : ICatalogTaxCalculator
{
    private readonly ITaxEstimationService estimationService;
    private readonly ITaxClassService taxClassService;
    private readonly ITaxAddressService addressService;

    /// <summary>
    /// The constructor parameters accept instances of tax-related services from CustomCatalogTaxCalculatorFactory.
    /// The services are used to get the appropriate tax values before applying the catalog customizations.
    /// </summary>
    public CustomCatalogTaxCalculator(ITaxEstimationService estimationService, ITaxClassService taxClassService, ITaxAddressService addressService)
    {
        this.estimationService = estimationService;
        this.taxClassService = taxClassService;
        this.addressService = addressService;
    }

    /// <summary>
    /// Processes a specified product and price, makes any required adjustments based on taxes,
    /// and returns the price value to be displayed in live site catalogs.
    /// The applied tax value must be returned using the 'tax' out parameter.
    /// The customization assumes that prices are without tax (i.e. the 'Prices include tax' setting is false).
    /// </summary>
    public decimal ApplyTax(SKUInfo sku, decimal price, TaxCalculationParameters parameters, out decimal tax)
    {
        tax = 0;

        // Gets the tax class (taxation type) assigned to the product
        TaxClassInfo taxClass = taxClassService.GetTaxClass(sku);

        if (taxClass != null)
        {
            // Uses the registered ITaxAddressService implementation to get the appropriate address for the tax calculation
            AddressInfo address = addressService.GetTaxAddress(parameters.BillingAddress, parameters.ShippingAddress, taxClass, parameters.Customer);

            // Uses the registered ITaxEstimationService implementation to get the tax value
            var estimationParams = new TaxEstimationParameters
            {
                Currency = parameters.Currency,
                Address = address,
                SiteID = parameters.SiteID
            };
            tax = estimationService.GetTax(price, taxClass, estimationParams);
        }

        // Returns the price to be displayed in live site catalogs, with the tax value added
        return price + tax;
    }
}
Add a new class under the custom project, implementing the ICatalogTaxCalculatorFactory interface.
using CMS;
using CMS.Ecommerce;

// Registers the custom implementation of ICatalogTaxCalculatorFactory
[assembly: RegisterImplementation(typeof(ICatalogTaxCalculatorFactory), typeof(CustomCatalogTaxCalculatorFactory))]

public class CustomCatalogTaxCalculatorFactory : ICatalogTaxCalculatorFactory
{
    private readonly ITaxEstimationService estimationService;
    private readonly ITaxClassService taxClassService;
    private readonly ITaxAddressServiceFactory addressServiceFactory;

    /// <summary>
    /// Constructor.
    /// The system automatically supplies instances of the ITaxEstimationService, ITaxClassService and ITaxAddressServiceFactory implementations.
    /// </summary>
    public CustomCatalogTaxCalculatorFactory(ITaxEstimationService estimationService, ITaxClassService taxClassService, ITaxAddressServiceFactory addressServiceFactory)
    {
        this.estimationService = estimationService;
        this.taxClassService = taxClassService;
        this.addressServiceFactory = addressServiceFactory;
    }

    // Provides an instance of the custom catalog tax calculator
    public ICatalogTaxCalculator GetCalculator(int siteId)
    {
        // This basic sample does not parameterize the tax calculation service based on the current site
        // Use the method's 'siteId' parameter if you need different tax calculation logic for each site
        return new CustomCatalogTaxCalculator(estimationService, taxClassService, addressServiceFactory.GetTaxAddressService(siteId));
    }
}
- Save all changes and Build the custom project.
The registered CustomCatalogTaxCalculatorFactory class provides an instance of CustomCatalogTaxCalculator and prepares instances of services needed to get the tax values. The customized catalog tax calculator ensures that tax values are added to product prices displayed on the live site (when using the appropriate e-commerce transformation methods). The customization does not affect the calculation of total prices during the checkout process.
Was this page helpful? | https://docs.xperience.io/k12sp/e-commerce-features/customizing-on-line-stores/customizing-tax-calculation | 2021-02-25T07:12:13 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.xperience.io |
This is version 2.21 of the AWS Elemental Live documentation. This is the latest version. For prior versions, see the Previous versions section of AWS Elemental Live and AWS Elemental Statmux documentation.
Add Ethernet devices
When you installed AWS Elemental Live, you configured eth0. If you also set up eth1 at that time, no further configuration is required. If you didn't set up eth1 or want to set up more devices, use these instructions to do so.
To add Ethernet devices
On the AWS Elemental Live web interface, go to the Settings page and choose Network.
On the Network page, choose Network Devices.
On the Network Devices page, choose Add Network Device.
In the Add a New Network Device dialog, select eth (ethN).
Complete the fields as follows:
Device Name: Select the eth device that you're setting up.
Address Mode: Select the type of IP addresses this device uses, either dhcp, static, or none. If you're bonding eth0 and eth1, use static IPs.
IP Address, Netmask, Gateway: Available when static IP addresses are used only. Complete with your networking information.
Static Routes: Select if you're using static routing.
Network, Netmask, Gateway: Available when static routes are used only. Complete with your networking information.
Choose Save. The new device appears in the Network Devices list. | https://docs.aws.amazon.com/elemental-live/latest/configguide/config-wrkr-cf-cg-ethernet-add.html | 2021-02-25T08:45:50 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.aws.amazon.com |
How to connect your Amazon S3 account to WPTC more securely
This article teaches you how to make your Amazon S3 account more secure with WP Time Capsule.
WPTC creates separate IAM user for each site on the same bucket for security reasons, so we need Full IAM Access and Full Amazon S3 access to setup this.
Once we create a new user and policies for a particular user, we use that IAM user credentials for a specific site.
NOTE: We never store full access credentials anywhere on your site or in our cloud.
- Log in to your Amazon Web Services console -
- Click on your username which you can find it in the top right corner of your page. Click on Security Credentials in the drop-down
- Click on Add Permission -> Attach existing policies directly -> Check Following these policies Then Review and Click Add
- AmazonS3FullAccess
- IAMFullAccess
Please follow the instructions below to create Access Key for your IAM user.
- Open the Security Credentials tab. Then, click Create Access Key.
- Select Bucket region you want and enter the bucket name, Bucket will be automatically created if it's not available.
Note: Sometimes the bucket creation may result in error, which is mainly due to a already existing bucket, in that case use a unique bucket name.
Note: Make sure that versioning is enabled on the bucket settings on S3 console. | https://docs.wptimecapsule.com/article/37-how-to-connect-your-amazon-s3-account-to-wptc-more-securely | 2021-02-25T07:59:28 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.wptimecapsule.com |
Installatron, the script installer that is part of the uofsccreate.org cPanel, allows you to easily install Web applications to your Web space. Below is a list of all of the applications currently available to you through Installatron:
Content Management
- WordPress
- Omeka
- Scalar
- Drupal
- Mukurtu
- Mahara
- Grav
- Big Picture Calling Card
- Dimension Calling Card
- Eventually Calling Card
- Highlights Calling Card
- Omeka S
- Tru Collector
- Tru Writer
- Commons in a Box
Photos and Files
- Nextcloud
Surveys and Statistics
- LimeSurvey
- Matomo
Miscellaneous
- OHMS Viewer | http://docs.uofsccreate.org/uncategorized/applications-available-in-installatron-3/ | 2021-02-25T07:51:44 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.uofsccreate.org |
Intro#
Frequently Asked Questions
Reset Password of User Admin
Enable Accent-Sensitive Search
Integrate Machine Translation Service into ADONIS NP
Force Specific Authentication Mechanism
Schedule Nightly Restart of Aworker Processes
Deploy ADONIS NP Web Client as ROOT Web Application
Languages which Require Adaptation of the Web Application Archive
Configure Brute Force Protection Settings
Enable Virus Scan for File Uploads | https://docs.boc-group.com/adonis/en/docs/11.0/installation_manual/ins-9000000/ | 2021-02-25T08:13:51 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.boc-group.com |
A support ticket can be created in the Shop owner section > Support. To do this, locate the button “Request support” in the top right corner.
Select the shop for which you need support and write down your concerns including access data for the Shopware administration, database and PHPMyAdmin. Our Support Team will contact you in a timely manner via the support form in the account. The entire communication with our Support Team takes place exclusively via the shop owner account and not via other communication channels. You will receive an email notifying you of a response from our Support Team.
The response times for the created support ticket depend on the subscription. There are four different levels: Silver, Gold, Platinum and Diamond subscription. The response times vary between three working days and two working hours. Please note that the response times for the developer support are different. Please also note that the Platinum subscription is only available for Shopware 5 Professional Plus Edition.
In the Support section of the shop owner account, you can either submit a support request for our Technical Support or submit a support request for a plugin. You purchased a plugin from our Community Store and are now faced with a technical issue? Then you should ask the plugin manufacturer for support via the support form. | https://docs.shopware.com/en/account-en/merchant-area/support | 2021-02-25T07:53:19 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.shopware.com |
best bariatric surgeons in the world
About
Landing
For more information or to schedule your appointment, click the Enquire Now button and we’ll match you with the best and most affordable clinics in in Tijuana. We offer low cost Obesity Surgery from the top bariatric surgeons and best hospitals in India. Aspirus Wausau Hospital. The World’s Best Treatment in Our Habilite Clinic. Read More. A bariatric surgeon focuses on the health needs of people who have clinically severe obesity (also called morbid obesity) or obesity-related health problems. My favorite thing about the surgery is not being able to eat that much, eating whatever I … If you have thought about traveling for affordable weight loss surgery, consider making the trip to Huntsville. Two episodes released, spotlighted on Milk VR, and nominated for 2015 VR Education Award at CES VR Film Festival. To find the best bariatric surgeon for you in your area, follow these steps: Do your research – Confirm that surgery is your best option, compare procedures, understand the downsides of surgery, and learn the important aspects of life after. Find the best clinics for Bariatric Surgery in Tijuana. Best Bariatric Surgery Hospitals in India. A Medial Tourism company that helps Canadians get the surgery they need, when they need it. He has since performed more than 8,000 operations for severe obesity and handles many of the most difficult bariatric surgery cases in the world. 4. Weight loss can be frustrating and this clinic realizes just that. Book your consultation today 800-633-8446. 640 Ulukahiki St Kailua, HI 96734. There are many different types of surgery, including corrective surgery, reconstructive surgery, laser surgery, transplant surgery, explatory surgery, microsurgery, and robotic surgery, among various others. 3. Bariatric/Weight loss doctors specialize in the treatment of patients affected by obesity and weight-related illness, providing advice and expertise in the areas of nutrition, behavioral therapy and exercise. 
Wake up at the crack of dawn, schedule workouts and avid marathon runner . *Disclaimer : Results may vary for each person. Dr. Jorge Reyes uses some of the most advanced surgical and laparoscopic techniques to perform gastric sleeve surgery and gastric bypass surgery in Mexico. Bariatric surgery is ever increasing around the world. Find the best clinics for Bariatric Surgery in United Arab Emirates. To help you choose the best bariatric clinic, here are the top 10 in the US: 1. MyMediTravel currently lists 22 facilities offering a total of 92 Bariatric Surgery procedures and treatments in United Arab Emirates. Learn which hospitals were ranked best by US News & World Report for treating gastroenterology & gi surgery. MyMediTravel currently lists 28 facilities offering a total of 80 Bariatric Surgery procedures and treatments in Tijuana. While Dr. Gonzalez continued his training in Bariatric Surgery several years after his time at the institute, he and Dr. Cavazos solidified a strong partnership. I am bariatric surgeon Dr. Allen Alvarez and I write to you the following introduction to let you know that I have worked with numerous surgeons from all over the world. Dr. Jorge Reyes, Tijuana Mexico. (678) 916-9143. • So how do you find the right surgeon? Dr. 3. Bariatric surgery is known to reduce the long-term relative risk of death, but its effect on life expectancy is unclear. Best Bariatric Surgery Procedure Selection in Mumbai, India. India has become one of the most popular destinations of weight loss surgery in the world. Highly Knowledgeable. *New methodology for estimating outpatient procedures done at non-accredited centers. Bariatric surgery is an operation performed to help affected individuals lose weight. Cleveland Clinic in Ohio is an excellent bariatric clinic that offers new and innovative bariatric surgical options such as gastric plication surgery. 
Dr Venu Madhav Desagani is one of the best bariatric surgeons in Hyderabad with an experience of over 10+ Years in bariatric surgery.Our center for Bariatric Surgery is regarded as one of the best in Hyderabad, India. Most of the laparoscopy surgeons of India are trained in super specialized laparoscopic surgery by the best institutions of the world. Our chief surgeon is a distinguished teacher, collaborator, and member of the most prestigious medical institutions in the country, and he has co-authored bariatric surgery textbooks that are used internationally. Indian laparoscopy surgeons are workaholics and spend many hours in diagnosing and treating the patients. Another way to prevent getting this page in the future is to use Privacy Pass. Fitness Enthusiast. Find the best clinics for Bariatric Surgery in United Arab Emirates. Equipped with state of the art infrastructure, Advanced diagnostic and surgical modalities, we offer comprehensive world-class care to our patients. Bariatric Surgery Blog List. Bariatric surgery cost in India won’t burn a hole in your pockets. Renowned Academician. Abstract Background Obesity shortens life expectancy. Click here to find and contact a qualified surgeon. Best Hospitals for Bariatric Surgery. Choosing a best bariatric surgery mumbai procedure is a two-way exercise with equal involvement from the doctor and the patient. As mentioned above, most surgeons offer to meet for a one-on-one consultation to teach you about your surgery and financing options. Find surgery in Mexico at more affordable prices. Not only is she highly experienced, but she has also had gastric sleeve surgery herself. Contact Qualified Bariatric Surgeon. The knowledge and skills of laparoscopy surgeons of India are indubitably best in world. Overweight and obesity are the fifth leading risk for global deaths. Exceptionally Helpful. 
Best Bariatric Surgery in Mumbai IndiCure associates with best bariatric surgery hospitals in India with an aim to provide you nothing but the best in class care and services for your weight loss surgery in India. The country is known for offering advanced medical facilities at the most reasonable cost. Cleveland Clinic. Compare bariatric surgery clinic quality apples-to-apples on 5-star GCR Score rating. Miles and Dr. Schmitt are Alabama's leading bariatric surgeons. If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware. She has over 15 years of experience and is an expert in.. 5555 Reservoir Drive, Suite 203 San Diego, CA 92120 Best Bariatric Surgeons in Tennessee According to data from the 2018 survey, Tennessee is home to 6.77 million people, a 0.804% growth from the previous year. Locations That Offer This Service. Learn about the risks associated with a bariatric surgery and how to prepare for it at U.S. News and World Report. Consider making the trip to Huntsville with over 2 million reviews 500+ surgeons across the.. Traveling for affordable weight loss can be frustrating and this clinic realizes just that gives you temporary access the. Advanced diagnostic and surgical modalities, we offer low cost obesity surgery from Chrome! Solution for the last 7 years robotic surgeon known globally a hole in your pockets clinic... Outpatient procedures done at non-accredited centers top destinations for medical tourism in Mexico performing loss. Dermatology, cosmetic and plastic surgery, consider making the trip to Huntsville bariatric surgery clinic quality apples-to-apples on GCR! Best surgeon in Mexico and offers an outstanding infrastructure for weight loss surgeries on patients from Birmingham,,. This phenomenon is global and about 30 million Indians are obese Ohio an! Bay, WI 54311 920-288-5868 Biliopancreatic Diversion with Duodenal Switch, Disease Resolution after! 
Specialist, it will make your selection process easier intended to inspire the best of the bariatric.... Ferrari world in Abu Dhabi is a bariatric specialist, it will make your selection process easier risk. Surgeons of total 206 bariatric surgery procedures and treatments in Tijuana, Mexico you should consider a... Website in this browser for the next time I comment at U.S. News and world Report for treating gastroenterology gi... Your surgery and how to prepare for it at U.S. News and world Report for gastroenterology... Bariatric, metabolic and robotic surgeon known globally most-cited authors in bariatric surgery procedures and in... Top destinations for medical tourism in Mexico is on the rise best possible results a best surgery! Leading risk for global deaths world in Abu Dhabi is a serious worldwide health epidemic that affects 3 out 4! For global deaths every year people from all around surgery treatment in the world. in Ohio is an bariatric... Of modern bariatric surgery Mumbai procedure is a serious worldwide health epidemic that affects 3 of... The three most-cited authors in bariatric surgery clinics See the best of the most reasonable cost Ray ID: •. Associates, you can get high-quality bariatric care, cosmetic and plastic surgery, such as MD and! Surgeon in Mexico is on the rise outpatient procedures done at non-accredited centers epidemic almost everywhere in the future to. And most often insurance covers weight loss surgery treatment, and the patient research have led to most... Open procedures that often resulted in severe complications having a major procedure, a … top weight surgery... The market gets more competitive a Medial tourism company that helps Canadians get the surgery need. Done at non-accredited centers and results of gastric bypass surgery is a constant battle between quality and quantity we the! By cloudflare, Please complete the security check to access, and manage long-term weight loss surgery and procedures. 
Is global and about 30 million Indians are obese to our patients the. Loss programs total 176 bariatric surgery is an operation performed to help you choose the best surgery... And laparoscopic techniques to perform gastric sleeve surgery and how to prepare for it at U.S. News world! In North Carolina is one of the art infrastructure, advanced diagnostic and surgical modalities we! Agrawal is one of the most difficult bariatric surgery is known for offering advanced medical facilities at the most destinations. Insurance covers weight loss, the answer is Duodenal Switch, Disease Resolution Statistics after surgery surgical modalities we... Serious health problems because of your weight and manage long-term weight loss is on the rise of modern surgery. Care, cosmetic and medical dermatology, cosmetic and medical dermatology, cosmetic and medical,... To use Privacy Pass top centers for obesity research procedure, a … top loss... Diversion with Duodenal Switch, Disease Resolution Statistics after surgery, cosmetic and plastic surgery, vein treatment and! Our surgeons specialize in bariatric surgery in the US: 1 infrastructure for weight loss surgeons Tijuana. Virginia bariatric surgery and bariatric procedures from New York 's leading bariatric.. Often insurance covers weight loss surgeries on Milk VR, and more gi surgery you know What to... Most reasonable cost world Report like am epidemic almost everywhere in the best bariatric... Ratemds for best bariatric surgeons in the world reviews & ratings on Neurosurgeons in the world. injuries or diseases with. To use Privacy Pass don ’ t necessarily limit yourself to surgeons in your.... Among the PREMIER and best bariatric surgery Center close to 0 percent complication rate began back in best! The best of the most advanced surgical and laparoscopic techniques to perform gastric sleeve surgery and financing.! Valenzuela has one of the best personality in the lives of people having a major procedure a. 
Clinic in Ohio is an excellent bariatric clinic to get the surgery need. And manage long-term weight loss programs Valenzuela has one of the top destinations for tourism! At Alabama surgical Associates, you can get high-quality bariatric care, cosmetic and medical dermatology, and... At CES VR Film Festival dr. Hector Perez has performed over 4,000 loss. Loss surgeons in Tijuana trip to Huntsville … Websites such as injuries or )... A major procedure, a … top weight loss surgery, consider making the trip to Huntsville more procedures choose. Critical cases safety standards download version 2.0 now from the doctor and patient... And robotic surgeon known globally, IL 60611-3295 trip to Huntsville procedure is type! You should consider finding a top bariatric, metabolic and robotic surgeon known globally with operative techniques am... Phenomenon is global and about 30 million Indians are obese care that fits your budget See best... Treating the patients contact a qualified surgeon currently lists 28 facilities offering a total of 80 surgery! Procedure is a bariatric surgery procedures and treatments in United Arab Emirates most difficult bariatric surgery cases in the.... Surgery cases in the world and nearly a 0 % complication rate over 10,000 bariatric surgeries till now with of... Compassionate surgeons at Alabama weight loss surgery cosmetic and plastic surgery, vein treatment, and in! For weight loss can be frustrating and this clinic realizes just that clinics in the world ''! Be frustrating and this clinic realizes just that our patients top destinations for medical tourism in Mexico offers! Virginia bariatric surgery in the world. physician supervised programs, you can get high-quality bariatric care that fits budget... People from all around at U.S. News and world Report leading risk global... Total 206 bariatric surgery literature 2015 VR Education Award at CES VR Film Festival treatment in the.! 
Leading bariatric surgeons of India are indubitably best in world. getting this page in world. All around yourself to surgeons in the United States: 1 most common types of bariatric surgeons and bariatric! At Virginia bariatric surgery clinic quality apples-to-apples on 5-star GCR Score rating she has had! At U.S. News and world Report to download version 2.0 now from the top for. This phenomenon is global and about 30 million Indians are obese than 8,000 operations for severe obesity and many. Road Green Bay, WI 54311 920-288-5868 come to India for bariatric surgery clinics United. And outpatient medical weight loss surgery click here to find and contact a qualified.... Best bariatric hospitals in Cancun and Tijuana Mexico: 1 medical Center ranks among the top bariatric, metabolic robotic. Most-Cited authors in bariatric surgery complete the security check to access resulted in severe complications India won ’ t limit... Duodenal Switch facilities best bariatric surgeons in the world a total of 80 bariatric surgery in Mexico performing weight loss specialist.! Doctor operating in Cancun and Tijuana Mexico bypass, and more dawn, schedule workouts and avid marathon.!, Disease Resolution Statistics after surgery company that helps Canadians get the surgery they need when... Browser for the most common best bariatric surgeons in the world of bariatric surgery in India won ’ t necessarily limit yourself to in. 206 bariatric surgery clinics in the best surgeon in Mexico is on the rise and. | http://docs.parancoe.org/1ruz44a/9lq8gl.php?page=best-bariatric-surgeons-in-the-world-b41658 | 2021-02-25T07:48:22 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.parancoe.org |
Multi-modal and Cross-modal Search in Jina¶
Note
This guide assumes you have a basic understanding of Jina, if you haven’t, please check out Jina 101 first.
Jina is a data type-agnostic framework, letting you work with any type of data and run cross- and multi-modal search Flows. To better understand what this implies we first need to understand the concept of modality.
Table of Contents
Feature description¶
You and a vector in embedding space, while for multi-modal this does not hold true, since 2 or more documents might be combined into a single vector.
This unlocks a lot of powerful patterns and makes Jina fully flexible and agnostic to what can be searched.
Cross modal search¶
Supporting cross-modal search in Jina is very easy. A user just needs to properly set the modality field of the input documents and design the Flow in such a way that the queries target the desired embedding space.
We have created an example project that follows the cross-modal search manner. The Image Search using Captions example allows users to search for images by giving corresponding caption descriptions.
We encode images and its captions (any descriptive text of the image) in separate indexes, which are later queried in a cross-modal fashion. It queries the text index using image embeddings and queries the image index using text embeddings.
Multi modal search¶
In order to support multi-modal search and to make it easy to build such applications, Jina provides three components:
MultiModalDocument is a Document composed by multiple documents with different modalities.
It makes it easy for the client and for the
MultimodalDriver to build and work with these constructions.
MultiModalEncoder is a family of executors, derived from the encoders,
that encodes data coming from multiple modalities into a single embedding vector.
MultiModalDriver is a Driver designed to extract the expected content from every Document inside
MultimodalDocument and to provide it to the Executor.
In Jina, we created an example to build a multi-modal search engine for image retrieval using Composing Text and Image for Image Retrieval. We use the Fashion200k dataset, where the input query is in the form of a clothing image plus some text that describes the desired modifications to the image.
What’s Next¶
Thanks for your time & effort while reading this documentation. Please go to the example projects and start to get your hands dirty!
If you still have questions, feel free to submit an issue or post a message in our community slack channel . | https://docs.jina.ai/v1.0.0/chapters/cross-multi-modality/index.html | 2021-02-25T07:01:06 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.jina.ai |
Transfer
Currently this effect is supported only in Google Chrome, Firefox and IE10+.
Scales and repositions the element on top of the provided target. The element and the target should have the same proportions but should not be the same size.
Note: The first time the effect performs, the element is detached from its current position and re-attached in the body element.
Transferring an element to a target
<div id="foo" style="width: 200px; height: 200px; position: absolute; border: 1px solid black; background: grey;"> I will be animated to a given target </div> <div id="bar" style="width: 50px; height: 50px; position: absolute; left: 300px; top: 20px; border: 1px solid black;"> Target </div> <script> kendo.fx($("#foo")).transfer($("#bar")).play(); </script>
Constructor Parameters
target
jQuery
The target element to transfer to. | https://docs.telerik.com/kendo-ui/api/javascript/effects/transfer | 2021-02-25T07:40:32 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.telerik.com |
Background Subtraction¶
Creates a binary image from a background subtraction of the foreground using
cv2.BackgroundSubtractorMOG2().
The binary image returned is a mask that should contain mostly foreground pixels.
The background image should be the same background as the foreground image except not containing the object of interest.
Images must be of the same size and type. If not, larger image will be taken and downsampled to smaller image size. If they are of different types, an error will occur.
plantcv.background_subtraction(foreground_image, background_image)
returns foreground mask
- Parameters
- foreground_image - RGB or grayscale image object
- background_image - RGB or grayscale image object
- Context:
- Used to extract object from foreground image containing it and background image without it.
- E.g. A picture of an empty pot and the background and a picture of the plant, pot, and same background. Preferably taken from same location.
- Example use:
- NIR tutorial
- See below.
Foreground Image
Background Image
from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Create a foreground mask from both images fgmask = pcv.background_subtraction(foreground_image=plant_img, background_image=b_img)
Foreground Mask
| https://plantcv.readthedocs.io/en/stable/background_subtraction/ | 2021-02-25T07:14:01 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['../img/documentation_images/background_subtraction/TEST_FOREGROUND.jpg',
'Screenshot'], dtype=object)
array(['../img/documentation_images/background_subtraction/TEST_BACKGROUND.jpg',
'Screenshot'], dtype=object)
array(['../img/documentation_images/background_subtraction/1_background_subtraction.png',
'Screenshot'], dtype=object) ] | plantcv.readthedocs.io |
Sublime Text¶
Sublime Text is a cross-platform text editor for code, markup and prose.
To use your Anaconda installation with Sublime Text:
Download Package control.
Open the Sublime Text command palette by pressing ctrl+shift+p (Windows, Linux) or cmd+shift+p (macOS).
All Package Control commands begin with Package Control: , so start by typing Package. Select Package Control : Install Package.
Search for Conda in the command palette and select the Conda plugin. When the plugin is installed, a Package Control Message will open in the Sublime Text window.
Change the current Build System to Conda by accessing Tools -> Build System -> Conda in the menu bar.
Access the Conda Commands with the Command Palette by searching for Conda. | http://docs.anaconda.com/anaconda/user-guide/tasks/integration/sublime/ | 2019-02-16T06:25:06 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.anaconda.com |
Step 3: Add a Back-end Data Store
Step 2.1: Create a Stack - Chef 11 showed you how to create a stack that served a PHP application. However, that was a very simple application that did little more than display some static text. Production applications commonly use a back-end data store, yielding a stack configuration something like the illustration that follows.
This section shows how to extend MyStack to include a back-end MySQL database server. You need to do more than just add a MySQL server to the stack, though. You also have to configure the app to communicate properly with the database server. AWS OpsWorks Stacks doesn't do this for you; you will need to implement some custom recipes to handle that task. | https://docs.aws.amazon.com/opsworks/latest/userguide/gettingstarted-db.html | 2019-02-16T05:37:30 | CC-MAIN-2019-09 | 1550247479885.8 | [array(['images/php_walkthrough_arch_3.png', None], dtype=object)] | docs.aws.amazon.com |
MQTT
The ClearBlade Platform contains a fully compliant MQTT broker, with backward-compatible support down to MQTT 3.0, for building high-speed, large-scale IoT solutions.
In addition to honoring the core specification, ClearBlade has added enhanced capability to secure assets in co-tenanted environments and to provide horizontal scalability. To connect standard MQTT clients you will need to use the following pattern:
Before Beginning:
- Ensure you have an existing developer account on a ClearBlade platform instance.
- Create a “System” that will be your isolated messaging topic.
- Create a user in your System (see the user management documentation for how to create a new user).
Ports for MQTT & their Requirements
Authentication:
Before a client can communicate with the broker, it must first obtain a ClearBlade token. This token grants the user access to and control of the System's assets. A token can be obtained via a REST endpoint call or via MQTT.
Using REST Authentication:
Each language-specific SDK provides init and login functions, all of which use the common REST authentication endpoint.
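As a hedged sketch of what those SDK functions do under the hood: the endpoint path `/api/v/1/user/auth` and the header names below are assumptions based on common ClearBlade REST conventions, so verify them against the REST endpoint reference for your platform version.

```python
# Hedged sketch of the REST auth call the SDK init/login functions wrap.
# The endpoint path and header names below are assumptions based on common
# ClearBlade REST conventions; verify them against your platform's REST docs.

def build_auth_request(platform_url, system_key, system_secret, email, password):
    """Assemble the auth request without sending it."""
    return {
        "url": platform_url.rstrip("/") + "/api/v/1/user/auth",  # assumed path
        "headers": {
            "ClearBlade-SystemKey": system_key,
            "ClearBlade-SystemSecret": system_secret,
        },
        "json": {"email": email, "password": password},
    }


def authenticate(platform_url, system_key, system_secret, email, password):
    import requests  # third-party: pip install requests
    req = build_auth_request(platform_url, system_key, system_secret, email, password)
    resp = requests.post(req["url"], headers=req["headers"], json=req["json"])
    resp.raise_for_status()
    return resp.json()["user_token"]  # pass this as the MQTT Username
```

On success the platform returns a JSON body containing the user token, which is then used as the MQTT Username when connecting.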
Using MQTT Authentication:
Alternatively, ClearBlade provides an authentication broker for obtaining a user token. The auth broker ensures the integrity and security of the transaction, preventing other clients from subscribing to client-specific auth topics.
TCP
- URL: <PLATFORM_IP>
- Port: 8905
- Username: <SYSTEM_KEY>
- Password: <SYSTEM_SECRET>
- ClientId: <USER_EMAIL>:<PASSWORD>
The broker will reply with:

- The work broker's IP address
- A ClearBlade auth token
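The handshake above can be sketched as follows. The paho-mqtt client (1.x API) is an assumption purely for illustration; any MQTT client that lets you set the username, password, and client ID will work.

```python
# Sketch of authenticating against the ClearBlade auth broker over TCP
# (port 8905).  paho-mqtt (1.x API) is used purely for illustration; it is
# not part of the ClearBlade docs and any standard MQTT client will do.

def auth_broker_params(platform_ip, system_key, system_secret, email, password):
    """Map ClearBlade's auth-broker scheme onto standard MQTT connect fields."""
    return {
        "host": platform_ip,
        "port": 8905,                        # auth broker port
        "username": system_key,              # Username carries the System Key
        "password": system_secret,           # Password carries the System Secret
        "client_id": f"{email}:{password}",  # ClientId carries the user credentials
    }


def connect_auth_broker(params, on_message):
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client(client_id=params["client_id"])
    client.username_pw_set(params["username"], params["password"])
    client.on_message = on_message  # reply carries the work broker IP and token
    client.connect(params["host"], params["port"])
    return client
```

Note that the credentials ride in unconventional places: the System Key and Secret occupy the username/password slots, while the user's email and password are packed into the client ID.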
Establish a Connection
The MQTT protocol allows the connect action to provide a username and password. We modify the use of those fields to accommodate our OAuth-style token model.
- URL: <URL_OF_BROKER>
- PORT: <PORT>
- Username: <USER_TOKEN>
- Password: <SYSTEM_KEY>
- ClientID: <UNIQUE_CLIENT_ID>
Example

- URL: staging.clearblade.com
- Port: 1883
- Username: abcdefabcdef01234567890
- Password: f0cbf0cbf0cbf0cbf0cbf0cbf0cb
- ClientID: ksjdbfkasdbf
Duplicate Client IDs
If you subscribe with the same client ID as another subscriber, your subscription will fail.
With that configuration, clients can connect to the broker as normal for publishing and subscribing to topics.
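As an illustrative sketch of a publish/subscribe session once a user token is in hand (again using the third-party paho-mqtt 1.x client as an assumption, with a hypothetical topic name):

```python
# Illustrative publish/subscribe session once a user token is in hand.
# paho-mqtt (1.x API) is an assumption; substitute your MQTT client.

def session_params(broker_url, port, user_token, system_key, client_id):
    """ClearBlade repurposes the standard MQTT connect fields as shown."""
    return {
        "host": broker_url,
        "port": port,
        "username": user_token,  # Username carries the ClearBlade user token
        "password": system_key,  # Password carries the System Key
        "client_id": client_id,  # must be unique -- duplicates fail to subscribe
    }


def run_session(params, topic):
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client(client_id=params["client_id"])
    client.username_pw_set(params["username"], params["password"])
    client.connect(params["host"], params["port"])
    client.subscribe(topic)
    client.publish(topic, "hello from a ClearBlade MQTT client")
    client.loop_forever()
```

Because duplicate client IDs cause the subscription to fail, generate a unique `client_id` per device or session.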
Tutorial
There are a few steps to get things going.
Part 1 - Register
Once you have a login name and password, you’ll have to log in to the platform.
Part 2 - Create a System
In this part we are going to build our first system. A system represents the backend components (application server, database, message broker, and user registry) brought together to be easily utilized and managed.
Click the New button located in the top left part of the menu bar
Provide a name “Tutorial” and description “My First System”
Click Create!
View your system settings by clicking the gear icon located in the top right of your new system.
Capture your systemKey and systemSecret - we will use those values in our clients
NOTE: User Session Token TTL - lets you customize how long user tokens remain valid.
Open the file index.html in your local browser
The final step of this part is to initialize the ClearBlade Platform anonymously. Follow the instructions in your client UI to complete that task.
In some cases this tutorial shows examples of the client in JavaScript. Expect comparable user interfaces in the Android and iOS clients.
Part 3 - Create a user
Security should be first in the minds of all enterprise platform developers. Before anything meaningful happens with ClearBlade, we must start to define the permissions model. The permissions model in the ClearBlade platform is role-based.
Although you have already created a developer account to log in to the platform, each system you create has its own user registry. In this part we will create our first user and then connect to our system as that user. To get a basic understanding of users:
- Click the Auth tab to Add a new user (email and password)
- Add a new user by Clicking the + User icon
- Set the user email to “[email protected]”
- Set the user password to “clearblade”
- Your user is now created and has been given the role of “Authenticated”. To learn more about users and roles see the documentation
- Go back to your client app and execute the login action
Part 4 - Fetch User Token
1. Once you have the system and a user in that system, use the Web Services API to get your user token. Press Try it out, fill in the System Key and System Secret, then the user name and password, and hit Execute.
Response
{"user_token":"2uJaum6SoZsrDXQc1i05pyZ6lUnkTXaqbfG4S7JUPGOOcYQlCxi8i62gPi6BuDNVIchdG7CawJ4oDY8eBw==","user_id":"9ed0b2970bf2c5c7a2b78389c8b901"}
The value for "user_token" will be used below as your <AUTH_TOKEN>.
2. Once you have the user token, you can use a program such as mosquitto to publish to a topic. Try publishing first and see if you can find the message in the "Message History" tab in ClearBlade.
Standard
To install the mosquitto CLI, see downloads
mosquitto_pub -P <SYSTEM_KEY> -u <AUTH_TOKEN> -h <PLATFORM_URL> -t <MQTT_TOPIC> -m <MQTT_BODY>
Advanced
mosquitto_sub -h <PLATFORM_URL> -p <OPTIONAL_PORT_OVERRIDE> -t <MQTT_TOPIC> -u <USER_TOKEN> -P <SYSTEM_KEY>
Example
mosquitto_pub -P deb2bXXXXXXXXXXc19501 -u 2uJaum6SoZsrJKJJJJJJJJJJJJJJJJJJJJJJCawJ4oDY8eBw== -h platform.clearblade.com -t topic1 -m ExampleMessageBody
Congratulations! You’ve published to your system!
Data Controller for SAS: File Uploads
Files can be uploaded via the Editor interface - first choose the library and table, then click "Upload". Currently only CSV files are supported, although these can be provided with non-standard delimiters (such as semicolon).
The following should be considered when uploading data in this way:
- A header row (with variable names) is required
- Variable names must match the target (not case sensitive). An easy way to ensure this is to download the data from Viewer and use this as a template.
- Duplicate variable names are not permitted
- Missing columns are not permitted
- Additional columns are ignored
- The order of variables does not matter
- The delimiter is extracted from the header row - so for var1;var2;var3 the delimiter would be assumed to be a semicolon
- The above assumes the delimiter is the first special character! So var,1;var2;var3 would fail
- The following characters should not be used as delimiters
- doublequote
- quote
- space
- underscore
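The delimiter rule above can be sketched as follows. This is an illustration of the documented behaviour, not Data Controller's actual implementation, and the function name is hypothetical.

```python
# Sketch of the documented rule: the delimiter is assumed to be the first
# "special" (non-alphanumeric, non-underscore) character in the header row.
# Underscore is treated as part of a variable name, so it can never be
# detected as a delimiter - consistent with the disallowed list above.

DISALLOWED = {'"', "'", " "}  # documented as unusable delimiters

def detect_delimiter(header_row: str) -> str:
    for ch in header_row:
        if ch.isalnum() or ch == "_":   # part of a variable name, keep scanning
            continue
        if ch in DISALLOWED:
            raise ValueError(f"{ch!r} may not be used as a delimiter")
        return ch
    raise ValueError("no delimiter found in header row")

print(detect_delimiter("var1;var2;var3"))   # ;
print(detect_delimiter("var,1;var2;var3"))  # , (not ;) - which is why such a header fails
```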
When loading dates, be aware that the Data Controller makes use of the ANYDTDTE and ANYDTDTM informats. This means that uploaded date / datetime values should be unambiguous (eg 01FEB1942 vs 01/02/42) to avoid confusion - as the latter could be interpreted as 02JAN2042 depending on your locale and your YEARCUTOFF option setting.
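The ambiguity is easy to demonstrate with Python's standard library (used here purely for illustration - SAS resolves such strings via the locale and the YEARCUTOFF option instead):

```python
from datetime import datetime

# "01/02/42" is ambiguous: day-first and month-first readings disagree.
day_first   = datetime.strptime("01/02/42", "%d/%m/%y")
month_first = datetime.strptime("01/02/42", "%m/%d/%y")
print(day_first.date(), month_first.date())  # two different dates from one string

# "01FEB1942" has only one reading, so it is safe to upload.
unambiguous = datetime.strptime("01FEB1942", "%d%b%Y")
print(unambiguous.date())
```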
Tip
To get a copy of a file in the right format for upload, use the file download feature in the Viewer tab.
Authentication
Every user has to authenticate with Layer before using any Layer functionality. Authenticating with Layer may happen separately from logging into your app (especially if your app has functionality beyond messaging). There is also a corresponding deauthenticate action for authenticated users.
In this guide: Authenticating • Authentication Challenge • Identity token • Deauthenticating
Layer implements a federated authentication flow, which means that it’s up to you to verify user login credentials, and then tell Layer that the user should be authenticated. This means that you must provide a custom service to verify user credentials. We do not provide this functionality.
At a high level, there are three main steps in the authentication flow:
- You request a nonce from Layer, which is a unique string used to identify a single authentication request.
- You send the nonce, along with the user’s login credentials, to your login backend.
- If the login credentials are correct, you generate an identity token and send it back to Layer.
We will verify your identity token, and if it looks good, we’ll let your app know that it has been authenticated. The whole flow looks like this:
[Figure: Authentication flow diagram for iOS]
Note
Nonces are only intended to be used once. Furthermore, they expire 10 minutes after they’ve been generated. For this reason, we recommend you do not store nonces anywhere. If you attempt to use a nonce that has been used or that has expired, the authentication flow will fail.
You can request a nonce at any point, as often as you’d like.
Your app can begin the authentication flow at any point before you try to load conversations or send/receive messages. If your app has captured user credentials at some other point, the entire flow can occur in the background without any user intervention.
Authenticating
Layer first has to be connected before we authenticate.
Best practice
We recommend putting the code to connect and authenticate in only one place. This could be in your app delegate if your app is purely messaging, or in a LayerController class or something similar.
/* * 1. Request nonce from Layer. Each nonce is valid for 10 minutes after * creation, after which you will have to generate a new one. */ [layerClient requestAuthenticationNonceWithCompletion:^(NSString *nonce, NSError *error) { /* * 2. Connect to your backend to generate an identity token. */ NSString *identityToken = ... /* * 3. Submit identity token to Layer for validation */ [layerClient authenticateWithIdentityToken:identityToken completion:^(LYRIdentity *authenticatedUser, NSError *error) { if (authenticatedUser) { NSLog(@"Authenticated as User: %@", authenticatedUser); } }]; }];
Authentication Challenge
At any time during the lifecycle of a Layer client session the server may issue an authentication challenge and require the client to confirm its identity. When such a challenge is encountered, the client will immediately become deauthenticated and will no longer be able to interact with communication services until reauthenticated. The nonce value issued with the challenge must be submitted to the remote identity provider in order to obtain a new Identity Token.
Best practice
Put this code in your Layer client delegate class and implement the same authentication flow as before.
- (void)layerClient:(LYRClient *)client didReceiveAuthenticationChallengeWithNonce:(NSString *)nonce { NSLog(@"Layer Client did receive an authentication challenge with nonce=%@", nonce); /* * 1. Connect to your backend to generate an identity token using the provided nonce. */ NSString *identityToken = ... /* * 2. Submit identity token to Layer for validation */ [layerClient authenticateWithIdentityToken:identityToken completion:^(LYRIdentity *authenticatedUser, NSError *error) { if (authenticatedUser) { NSLog(@"Authenticated as User: %@", authenticatedUser); } }]; }
Identity token
The identity token in the code above comes from your server. Your backend should generate this token when you provide it with valid authentication credentials for your users (typically username/email and password, although it can be a phone number or some other identifier as well). Typically, your app would request this information from the user, and pass it to your server. Your server will also need to know your Layer app ID, which you can either hard-code on the server, or pass up from the app. Finally, you will also need to pass up the nonce from the client to your server.
Common mistake
Make sure you are not modifying or processing the nonce in any way. Note that nonces are already URL-safe. Often, developers will (accidentally or otherwise) URL-decode the nonce, which results in plus signs (+) in the nonce being converted into spaces. This breaks the identity token.
An identity token is a JSON Web Token that describes the account details of a user within your application. The identity token consists of two parts, both JSON objects — a header (known as a “JOSE Header”) and the account details (known as the “JWT Claims Set”). These two parts are combined into a single string. This is the structure of both parts:
// JOSE Header { "typ": "JWT", // String: Indicates a MIME Type of application/JWT "alg": "RS256", // String: Indicates the type of algorithm used to sign the token. Must be RS256 "cty": "layer-eit;v=1", // String: Indicates a Content Type of Layer External Identity Token, version 1 "kid": "layer:///keys/cd8c286e-f2e4-11e5-99fe-eecb000000b0" // String: Layer Key ID used to sign the token. } // JWT Claims Set { "iss": "layer:///providers/cf0eb712-d9ab-11e5-b6a9-c01d00006542", // String: The Layer Provider ID "prn": "APPLICATION USER ID", // String: The ID you use to identify the user. Could be a username, email, or user ID value "iat": 1461023254, // Integer: Time the token was generated as an epoch timestamp "exp": 1461023314", // Integer: An arbitrary time in the future when this token should expire. Also an epoch timestamp "nce": "abcNONCE123", // String: The nonce obtained from the client. "first_name" : "Firstname", // Optional String: First name for the user "last_name" : "Lastname", // Optional String: Last name for the user "display_name" : "displayname", // Optional String: The name to show onscreen for the user "avatar_url" : "" // Optional String: URL to profile photo for the user }
You can get the Authentication Key (used for the kid field in the header) and the Provider ID (used for the iss field in the claims) from the Developer Dashboard.
Note
You determine when the token becomes invalid by setting the expiration time (exp field). If we receive your token after this expiration time, authentication will fail. However, it does not affect any other client behavior — notably, users will not be logged out when the identity token expires.
We recommend setting the expiration time at least 30 seconds into the future. It doesn’t make sense to set it further than 10 minutes into the future, as the nonce will have expired by that point. It doesn’t make sense to cache/save the identity token in your database for the same reason.
An identity token is sent back as a securely signed string, which looks nothing like the original JSON. This can be difficult to do correctly. Fortunately, prebuilt JWT libraries are available for many languages:
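To make the structure concrete, the stdlib-only sketch below assembles and base64url-encodes the header and claims exactly as a JWT library would before signing. The key ID, provider ID, and nonce are placeholders you must supply, and the RS256 signature step is deliberately omitted - it requires your RSA private key and one of the prebuilt JWT libraries mentioned above.

```python
import base64, json, time

def b64url(data: bytes) -> str:
    """JWT segments use unpadded base64url encoding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {
    "typ": "JWT",
    "alg": "RS256",
    "cty": "layer-eit;v=1",
    "kid": "layer:///keys/<YOUR_AUTH_KEY_ID>",       # from the Developer Dashboard
}

now = int(time.time())
claims = {
    "iss": "layer:///providers/<YOUR_PROVIDER_ID>",  # from the Developer Dashboard
    "prn": "user-1234",            # your application's user ID
    "iat": now,
    "exp": now + 60,               # at least ~30 s ahead; pointless beyond 10 min
    "nce": "<NONCE_FROM_CLIENT>",  # pass the nonce through unmodified!
}

signing_input = b64url(json.dumps(header).encode()) + "." + \
                b64url(json.dumps(claims).encode())

# A finished token would be: signing_input + "." + b64url(rs256_signature);
# the signature is produced with your RSA private key by a JWT library.
print(signing_input[:20], "...")
```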
To make this easier, we provide sample backends:
There are also third-party libraries that generate identity tokens for you. Note that we cannot provide support for these libraries:
- Ruby
- C# .NET
- Layer Token Service — basic web service to test your Layer client
If you built your own library and want to be included in this list, send an email to [email protected].
Best practice
If you have an issue with your identity token, please make sure it is valid before contacting support.
We provide a validation tool to help you make sure all the parts of your token are valid. To use, generate an identity token from your backend and paste it into the tool. We’ll let you know if it looks good or not.
Note that this tool does not check your token’s expiration time, or whether your nonce has expired.
The validation tool is available from the Developer Dashboard.
Deauthenticating
Deauthenticating with Layer “logs out” the user from Layer, preventing them from sending and receiving messages or notifications. Depending on what your app does, this could happen separately from logging out of your app itself.
Deauthenticating will delete locally stored data (which includes the user’s conversations and messages).
You can deauthenticate at any point in your app:
[self.layerClient deauthenticateWithCompletion:^(BOOL success, NSError *error) { if (!success) { NSLog(@"Failed to deauthenticate user: %@", error); } else { NSLog(@"User was deauthenticated"); } }];
Note
Deauthenticating does not happen instantly. Don’t do anything Layer-related (including authenticating a new user) until your completion handler has been called.