Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Creating an Apigee Edge account
You need an account to use Apigee Edge. You can create your own or an administrator can create one for you.
For an overview of Apigee Edge, see What is Apigee Edge?
Create your own Apigee Edge account
Edge provides different levels of accounts, including a free trial. This section describes how to create a free trial account.
Here's a little context about what you'll create, which may help you make a good naming decision. When you create an Edge account, an organization will be provisioned for you with the username you specify. (Make it a good name, because you won't be able to change it). An organization is a container for your API proxies and the environments you deploy them to (you get two environments by default, "test" and "prod"). To make calls to your API proxies in your Edge cloud free trial account, you and your developers use a URL that includes your organization name and an environment name.
For example:
http://<your_org_name>-<environment>.apigee.net/path/to/resource
You'll also use your organization name in the URL for any calls you make to the Edge management API, as described below in the What's next section.
To create a free trial account and log in:
- Go to the Apigee Edge sign-up page.
- Enter the required info.
- The Username is the name you want for your organization. Use only alphanumeric characters.
You'll have one opportunity to change this name in the steps below before the organization is created.
- Click Create account.
- Important: In the email you receive, click the link to verify your email.
- After you click the email link, a browser window opens to the login page. Log in with the email and password you used to sign up.
- After you log in, you're taken to the account dashboard:
- Click the API Management / Create & Manage APIs box.
- In the Activate dialog that appears, you have one last opportunity to change your organization name.
- If you want to change the organization name, click the create link, then enter the new org name and click Create.
- If you want to keep the organization name shown, click Activate.
- On the accounts dashboard, a message shows that Edge is provisioning your organization.
- When the provisioning is complete, click the API Management box again. You're taken to the Edge management UI.
Congratulations! You have a new Edge account and organization. The email used to set up the account is automatically added to the Organization Administrator role. That means you can do just about everything in your organization.
What's next?
Now that you have a new Edge organization, you can get started building API proxies and infrastructure. You can also start using the Edge management API.
Using the Edge management API is a slightly advanced topic, but we mention it here because it involves the organization name credentials you've created. Your Edge email and password let you make calls to the Edge management API (since you're automatically an Organization Administrator with permissions to do just about anything).
For example, say you want to get a list of all your API proxies. You can make that call against the Edge management API (currently with basic auth). With other API clients, enter your credentials in the appropriate place for basic auth.
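A minimal sketch of such a call, using Python's requests library, might look like the following; the organization name and credentials are placeholders to replace with your own:

import requests

# Placeholders: substitute your organization name and the email/password
# you used to create your Edge account.
ORG = "your_org_name"
EDGE_EMAIL = "you@example.com"
EDGE_PASSWORD = "your_password"

# List all API proxies in the organization via the Edge management API.
resp = requests.get(
    f"https://api.enterprise.apigee.com/v1/organizations/{ORG}/apis",
    auth=(EDGE_EMAIL, EDGE_PASSWORD),  # HTTP basic auth with your Edge credentials
)
resp.raise_for_status()
print(resp.json())  # a JSON array of API proxy names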
Finding your organization name
After you create an organization, you can find its name in a few places:
In the URL of the management UI
In the management UI "Organization" drop-down
Switching to API BaaS and other components
In the management UI, you can switch to API BaaS and other components from the API Management drop-down menu.
Create an Apigee Edge account as an organization administrator
An organization administrator can add a new user account on Apigee Edge. The new user is automatically added to the administrator's organization. For more information, see Managing organization users.
Help or comments?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
Content store types
By default, Alfresco is configured to save files or content items in the File content store and orphaned files in the Deleted content store. Alfresco also provides other content stores, which may be used in place of or in addition to the default stores. This information provides an overview of the File content store and the additional content stores that you can use with Alfresco.
Scheduling State¶
Overview¶
The life of a computation with Dask can be described in the following stages:
- The user authors a graph using some library, perhaps Dask.delayed or dask.dataframe or the
submit/map functions on the client. They submit these tasks to the scheduler.
- The scheduler assimilates these tasks into its graph of all tasks to track, and as their dependencies become available it asks workers to run each of these tasks.
- The worker receives information about how to run the task, communicates with its peer workers to collect dependencies, and then runs the relevant function on the appropriate data. It reports back to the scheduler that it has finished.
- The scheduler reports back to the user that the task has completed. If the user desires, it then fetches the data from the worker through the scheduler.
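As a minimal sketch of that life cycle (assuming dask.distributed is installed), a single task submitted from the client looks like this:

from dask.distributed import Client

client = Client()  # starts a local scheduler and workers for demonstration

def inc(x):
    return x + 1

future = client.submit(inc, 1)  # the client hands the task to the scheduler
result = future.result()        # a worker runs it; the result travels back
assert result == 2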
Most relevant logic is in tracking tasks as they evolve from newly submitted, to waiting for dependencies, to actively running on some worker, to finished in memory, to garbage collected. Tracking this process, and tracking all effects that this task has on other tasks that might depend on it, is the majority of the complexity of the dynamic task scheduler. This section describes the system used to perform this tracking.
For more abstract information about the policies used by the scheduler, see Scheduling Policies.
State Variables¶
We start with a description of the state that the scheduler keeps on each task. Each of the following is a dictionary keyed by task name (described below):
tasks:
{key: task}:
Dictionary mapping key to a serialized task.
A key is the name of a task, generally formed from the name of the function, followed by a hash of the function and arguments, like
'inc-ab31c010444977004d656610d2d421ec'.
The value of this dictionary is the task, which is an unevaluated function and arguments. This is stored in one of two forms:
{'function': inc, 'args': (1,), 'kwargs': {}}; a dictionary with the function, arguments, and keyword arguments (kwargs). However in the scheduler these are stored serialized, as they were sent from the client, so it looks more like
{'function': b'\x80\x04\x95\xcb\...', 'args': b'...', }
{'task': (inc, 1)}: a tuple satisfying the dask graph protocol. This again is stored serialized.
These are the values that will eventually be sent to a worker when the task is ready to run.
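As an illustration of the first, serialized form (a sketch only; the scheduler relies on its own serialization machinery, and cloudpickle is used here for simplicity, assuming it is installed):

import cloudpickle

def inc(x):
    return x + 1

# Roughly what the scheduler holds for a ready-to-run task: the function and
# its arguments, kept serialized as they arrived from the client.
task = {
    'function': cloudpickle.dumps(inc),
    'args': cloudpickle.dumps((1,)),
    'kwargs': cloudpickle.dumps({}),
}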
dependencies and dependents:
{key: {keys}}:
These are dictionaries which show which tasks depend on which others. They contain redundant information. If dependencies[a] == {b, c} then the task with the name of a depends on the results of the two tasks with the names of b and c. There will be complementary entries in dependents such that a in dependents[b] and a in dependents[c], for example dependents[b] == {a, d}. Keeping the information around twice allows for constant-time access in either direction of query, so we can look up a task's out-edges or in-edges efficiently.
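For illustration, the dependents mapping is just the transpose of dependencies and can be derived from it:

dependencies = {'a': {'b', 'c'}, 'b': set(), 'c': set(), 'd': {'b'}}

dependents = {key: set() for key in dependencies}
for key, deps in dependencies.items():
    for dep in deps:
        dependents[dep].add(key)

assert dependents['b'] == {'a', 'd'}
assert dependents['c'] == {'a'}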
waiting and waiting_data:
{key: {keys}}:
These are dictionaries very similar to dependencies and dependents, but they only track keys that are still in play. For example, waiting looks like dependencies, tracking all of the tasks that a certain task requires before it can run. However, as tasks are completed and arrive in memory they are removed from their dependents' sets in waiting, so that when a set becomes empty we know that a key is ready to run and ready to be allocated to a worker.
The waiting_data dictionary, on the other hand, holds all of the dependents of a key that have yet to run and still require that this task stay in memory in service of tasks that may depend on it (its dependents). When a value set in this dictionary becomes empty its task may be garbage collected (unless some client actively desires that this task stay in memory).
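A simplified sketch of this bookkeeping (illustrative only, not the scheduler's actual code): when a dependency arrives in memory, its dependents' waiting sets shrink, and any that become empty are ready to run.

def mark_dependency_in_memory(dep, waiting, dependents):
    """Remove `dep` from the waiting sets of its dependents; return newly-ready keys."""
    ready = []
    for key in dependents.get(dep, ()):
        pending = waiting.get(key)
        if pending is not None:
            pending.discard(dep)
            if not pending:  # nothing left to wait for
                ready.append(key)
    return ready

waiting = {'b': {'a'}, 'c': {'a', 'b'}}
dependents = {'a': {'b', 'c'}}
assert mark_dependency_in_memory('a', waiting, dependents) == ['b']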
task_state:
{key: string}:
The task_state dictionary holds the current state of every key. Current valid states include released, waiting, no-worker, processing, memory, and erred. These states are explained further below.
priority:
{key: tuple}:
The priority dictionary provides each key with a relative ranking. This ranking is generally a tuple of two parts. The first (and dominant) part corresponds to when it was submitted. Generally earlier tasks take precedence. The second part is determined by the client, and is a way to prioritize tasks within a large graph that may be important, such as if they are on the critical path, or good to run in order to release many dependencies. This is explained further in Scheduling Policy.
A key’s priority is only used to break ties, when many keys are being considered for execution. The priority does not determine running order, but does exert some subtle influence that does significantly shape the long term performance of the cluster.
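Because priorities are tuples, ordinary lexicographic tuple comparison is all that is needed to break such ties; for example:

priority = {'x': (0, 1), 'y': (0, 2), 'z': (1, 0)}

# Among several otherwise-equal candidates, the lexicographically smallest
# priority tuple wins: earlier submission first, then the client ranking.
candidates = ['z', 'y', 'x']
best = min(candidates, key=priority.get)
assert best == 'x'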
processing:
{worker: {key: cost}}:
Keys that are currently allocated to a worker. This is keyed by worker address and contains the expected cost in seconds of running that task.
rprocessing:
{key: worker}:
The reverse of the processing dictionary: it maps every key that is currently running to the worker currently running it. This is redundant with processing and is just here for faster indexed querying.
who_has:
{key: {worker}}:
For keys that are in memory this shows on which workers they currently reside.
has_what:
{worker: {key}}:
This is the transpose of
who_has, showing all keys that currently reside on each worker.
released:
{keys}
The set of keys that are known, but released from memory. These have typically run to completion and are no longer necessary.
unrunnable:
{key}
The set unrunnable contains keys that are not currently able to run, probably because they have a user-defined restriction (described below) that is not met by any available worker. These keys are waiting for an appropriate worker to join the network before computing.
host_restrictions:
{key: {hostnames}}:
A set of hostnames per key of where that key can be run. Usually this is empty unless a key has been specifically restricted to only run on certain hosts. These restrictions don’t include a worker port. Any worker on that hostname is deemed valid.
worker_restrictions:
{key: {worker addresses}}:
A set of complete host:port worker addresses per key of where that key can be run. Usually this is empty unless a key has been specifically restricted to only run on certain workers.
loose_restrictions:
{key}:
Set of keys for which we are allowed to violate restrictions (see above) if no valid workers are present and the task would otherwise go into the unrunnable set.
resource_restrictions:
{key: {resource: quantity}}:
Abstract resources required by each key (for example {'GPU': 1}), restricting it to run only on workers that advertise those resources.
exceptions and tracebacks:
{key: Exception/Traceback}:
Dictionaries mapping keys to remote exceptions and tracebacks. When tasks fail we store their exceptions and tracebacks (serialized from the worker) here so that users may gather the exceptions to see the error.
exceptions_blame:
{key: key}:
If a task fails then we mark all of its dependent tasks as failed as well. This dictionary lets any failed task see which task was the origin of its failure.
suspicious_tasks:
{key: int}
Number of times a task has been involved in a worker failure. Some tasks may cause workers to fail (such as
sys.exit(0)). When a worker fails all of the tasks on that worker are reassigned to others. This combination of behaviors can cause a bad task to catastrophically destroy all workers on the cluster, one after another. Whenever a worker fails we mark each task currently running on that worker as suspicious. If a task is involved in three failures (or some other fixed constant) then we mark the task as failed.
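A sketch of that counting logic (illustrative; the threshold mirrors the scheduler's allowed_failures setting, which defaults to 3):

ALLOWED_FAILURES = 3

def record_worker_failure(keys_on_dead_worker, suspicious_tasks):
    """Mark tasks that were running on a failed worker and return any that
    have now crossed the failure threshold."""
    newly_failed = []
    for key in keys_on_dead_worker:
        suspicious_tasks[key] = suspicious_tasks.get(key, 0) + 1
        if suspicious_tasks[key] >= ALLOWED_FAILURES:
            newly_failed.append(key)
    return newly_failed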
who_wants:
{key: {client}}:
When a client submits a graph to the scheduler it also specifies which output keys it desires. Those keys are tracked here where each desired key knows which clients want it. These keys will not be released from memory and, when they complete, messages will be sent to all of these clients that the task is ready.
wants_what:
{client: {key}}:
The transpose of
who_wants.
nbytes:
{key: int}:
The number of bytes, as determined by
sizeof, of the result of each finished task. This number is used for diagnostics and to help prioritize work.
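The sizeof referred to here is Dask's own helper; as a small example (assuming dask is installed):

from dask.sizeof import sizeof

nbytes = {}
nbytes['x'] = sizeof([1, 2, 3])         # a small Python list
nbytes['y'] = sizeof(b'a' * 1_000_000)  # roughly one megabyte of bytes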
Example Event and Response¶
Whenever an event happens, like when a client sends up more tasks, or when a worker finishes a task, the scheduler changes the state above. For example when a worker reports that a task has finished we perform actions like the following:
Task `key` finished by `worker`:
task_state[key] = 'memory'
who_has[key].add(worker)
has_what[worker].add(key)
nbytes[key] = nbytes
processing[worker].remove(key)
del rprocessing[key]

if key in who_wants:
    send_done_message_to_clients(who_wants[key])

for dep in dependencies[key]:
    waiting_data[dep].remove(key)

for dep in dependents[key]:
    waiting[dep].remove(key)

for task in ready_tasks():
    worker = best_worker(task)
    send_task_to_worker(task, worker)
State Transitions¶
The code presented in the section above is just for demonstration. In practice
writing this code for every possible event is highly error prone, resulting in
hard-to-track-down bugs. Instead the scheduler moves tasks between a fixed
set of states, notably
'released', 'waiting', 'no-worker', 'processing',
'memory', 'error'. Transitions between common pairs of states are well
defined and, if no path exists between a pair, the graph of transitions can be
traversed to find a valid sequence of transitions. Along with these
transitions come consistent logging and optional runtime checks that are useful
in testing.
Tasks fall into the following states with the following allowed transitions; a small sketch of this transition graph follows the list:
- Released: known but not actively computing or in memory
- Waiting: On track to be computed, waiting on dependencies to arrive in memory
- No-worker (ready, rare): Ready to be computed, but no appropriate worker exists
- Processing: Actively being computed by one or more workers
- Memory: In memory on one or more workers
- Erred: Task has computed and erred
- Forgotten (not actually a state): Task is no longer needed by any client and so it is removed from state
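Here is that sketch (simplified; the real scheduler implements more edges than shown):

# Each edge is a (start, finish) pair for which a transition_* method exists.
allowed_transitions = {
    ('released', 'waiting'),
    ('waiting', 'processing'),
    ('waiting', 'no-worker'),
    ('no-worker', 'waiting'),
    ('processing', 'memory'),
    ('processing', 'erred'),
    ('memory', 'released'),
    ('released', 'forgotten'),
}

def can_transition(start, finish):
    return (start, finish) in allowed_transitions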
Every transition between states is a separate method in the scheduler. These
task transition functions are prefixed with
transition and then have the
name of the start and finish task state like the following.
def transition_released_waiting(self, key):

def transition_processing_memory(self, key):

def transition_processing_erred(self, key):
These functions each have three effects.
- They perform the necessary transformations on the scheduler state (the 20 dicts/lists/sets) to move one key between states.
- They return a dictionary of recommended {key: state} transitions to enact directly afterwards on other keys. For example after we transition a key into memory we may find that many waiting keys are now ready to transition from waiting to a ready state.
- Optionally they include a set of validation checks that can be turned on for testing.
Rather than call these functions directly we call the central function
transition:
def transition(self, key, final_state):
    """ Transition key to the suggested state """
This transition function finds the appropriate path from the current to the final state. It also serves as a central point for logging and diagnostics.
Often we want to enact several transitions at once or want to continually
respond to new transitions recommended by initial transitions until we reach a
steady state. For that we use the
transitions function (note the plural
s).
def transitions(self, recommendations):
    recommendations = recommendations.copy()
    while recommendations:
        key, finish = recommendations.popitem()
        new = self.transition(key, finish)
        recommendations.update(new)
This function runs
transition, takes the recommendations and runs them as
well, repeating until no further task-transitions are recommended.
Stimuli¶
Transitions occur from stimuli, which are state-changing messages to the scheduler from workers or clients. The scheduler responds to the following stimuli:
- Workers
- Task finished: A task has completed on a worker and is now in memory
- Task erred: A task ran and erred on a worker
- Task missing data: A task tried to run but was unable to find necessary data on other workers
- Worker added: A new worker was added to the network
- Worker removed: An existing worker left the network
- Clients
- Update graph: The client sends more tasks to the scheduler
- Release keys: The client no longer desires the result of certain keys
Stimuli functions are prepended with the text
stimulus, and take a variety
of keyword arguments from the message as in the following examples:
def stimulus_task_finished(self, key=None, worker=None, nbytes=None,
                           type=None, compute_start=None, compute_stop=None,
                           transfer_start=None, transfer_stop=None):

def stimulus_task_erred(self, key=None, worker=None,
                        exception=None, traceback=None):
These functions change some non-essential administrative state and then call transition functions.
Note that there are several other non-state-changing messages that we receive from the workers and clients, such as messages requesting information about the current state of the scheduler. These are not considered stimuli.
API¶
- class
distributed.scheduler.
Scheduler(center=None, loop=None, delete_interval=500, synchronize_worker_interval=60000, services=None, allowed_failures=3, extensions=[<class 'distributed.channels.ChannelScheduler'>, <class 'distributed.publish.PublishExtension'>, <class 'distributed.stealing.WorkStealing'>, <class 'distributed.recreate_exceptions.ReplayExceptionScheduler'>, <class 'distributed.queues.QueueExtension'>, <class 'distributed.variable.VariableExtension'>], validate=False, scheduler_file=None, security=None, *
- processing:
{worker: {key: cost}}:
- Set of keys currently in execution on each worker and their expected duration
- rprocessing:
{key: worker}:
- The worker currently executing a particular task
- who_has:
{key: {worker}}:
- Where each key lives. The current state of distributed memory.
- has_what:
{worker: {key}}:
- What worker has what keys. The transpose of who_has.
- released:
{keys}
- Set of keys that are known, but released from memory
- unrunnable:
{key}
- Keys that we are unable to run
- host_restrictions:
{key: {hostnames}}:
- A set of hostnames per key of where that key can be run. Usually this is empty unless a key has been specifically restricted to only run on certain hosts.
- worker_restrictions:
{key: {workers}}:
- Like host_restrictions except that these include specific host:port worker names
- loose_restrictions:
{key}:
- Set of keys for which we are allow to violate restrictions (see above) if not valid workers are present.
- resource_restrictions:
{key: {str: Number}}:
- Resources required by a task, such as {'GPU': 1}, restricting it to run only on workers that advertise those resources.
- ncores:
{worker: int}:
- Number of cores owned by each worker
- idle:
{worker}:
- Set of workers that are not fully utilized
- worker_info:
{worker: {str: data}}:
- Information about each worker
- host_info:
{hostname: dict}:
- Information about each worker host
- worker_bytes:
{worker: int}:
- Number of bytes in memory on each worker
- occupancy:
{worker: time}
- Expected runtime for all tasks currently processing on a worker
- services:
{str: port}:
- Other services running on this scheduler, like HTTP
- loop:
IOLoop:
- The running Tornado IOLoop
- comms:
[Comm]:
- A list of Comms from which we both accept stimuli and report results
- task_duration:
{key-prefix: time}
- Time we expect certain functions to take, e.g.
{'sum': 0.25}
- coroutines:
[Futures]:
- A list of active futures that control operation
add_client(*args, **kwargs)[source]¶
Add client to network
We listen to all future messages from this Comm.
add_keys(comm=None, worker=None, keys=())[source]¶
Learn that a worker has certain keys
This should not be used in practice and is mostly here for legacy reasons.
add_plugin(plugin)[source]¶
Add external plugin to scheduler
See
add_worker(comm=None, address=None, keys=(), ncores=None, name=None, resolve_address=True, nbytes=None, now=None, resources=None, host_info=None, **info)[source]¶
Add a new worker to the cluster
close(*args, **kwargs)[source]¶
Send cleanup signal to all coroutines then wait until finished
See also
close_worker(*args, **kwargs)
feed(*args, **kwargs)[source]¶
Provides a data Comm to external requester
Caution: this runs arbitrary Python code on the scheduler. This should eventually be phased out. It is mostly used by diagnostics.
get_comm_cost(key, worker)[source]¶
Get the estimated communication cost (in s.) to compute key on the given worker.
get_task_duration(key, default=0.5)[source]¶
Get the estimated computation cost of the given key (not including any communication cost).
get_worker_service_addr(worker, service_name)[source]¶
Get the (host, port) address of the named service on the worker. Returns None if the service doesn’t exist.
handle_client(*args, **kwargs)[source]¶
Listen and respond to messages from clients
This runs once per Client Comm or Queue.
See also
Scheduler.worker_stream
- The equivalent function for workers
handle_worker(*args, **kwargs)[source]¶
Listen to responses from a single worker
This is the main loop for scheduler-worker interaction
See also
Scheduler.handle_client
- Equivalent coroutine for clients
rebalance(*args, **kwargs)[source]¶
Rebalance keys so that each worker stores roughly equal bytes
Policy
This orders the workers by what fraction of bytes of the existing keys they have. It walks down this list from most-to-least. At each worker it sends the largest results it can find and sends them to the least occupied worker until either the sender or the recipient are at the average expected load.
reevaluate_occupancy(*args, **kwargs).
remove_worker(comm=None, address=None, safe=False, close=True)[source]¶
Remove worker from cluster
We do this when a worker reports that it plans to leave or when it appears to be unresponsive. This may send its tasks back to a released state.
replicate(*args, **kwargs)[source]¶
Replicate data throughout cluster
This performs a tree copy of the data throughout the network individually on each piece of data.
See also
report(msg, client=None)[source]¶
Publish updates to all listening Queues and Comms
If the message contains a key then we only send the message to those comms that care about the key.
run_function(stream, function, args=(), kwargs={})[source]¶
Run a function within this process
See also
Client.run_on_scheduler
start(addr_or_port=8786, start_queues=True)[source]¶
Clear out old state and restart all running coroutines
start_ipython(comm=None)[source]¶
Start an IPython kernel
Returns Jupyter connection info dictionary.
stimulus_missing_data(cause=None, key=None, worker=None, ensure=True, **kwargs)[source]¶
Mark that certain keys have gone missing. Recover.
transition(key, finish, *args, **kwargs)[source]¶
Transition a key from its current state to the finish state
See also
Scheduler.transitions
- transitive version of this function
Examples
>>> self.transition('x', 'waiting')
{'x': 'processing'}
transitions(recommendations)[source]¶
Process transitions until none are left
This includes feedback from previous transitions and continues until we reach a steady state
update_data(comm=None, who_has=None, nbytes=None, client=None)[source]¶
Learn that new data has entered the network from an external source
See also
Scheduler.mark_key_in_memory
update_graph(client=None, tasks=None, keys=None, dependencies=None, restrictions=None, priority=None, loose_restrictions=None, resources=None)[source]¶
Add new computations to the internal dask graph
This happens whenever the Client calls submit, map, get, or compute.
valid_workers(key)[source]¶
Return set of currently valid worker addresses for key
If all workers are valid then this returns
True. This checks the following state:
- worker_restrictions
- host_restrictions
- resource_restrictions
worker_objective(key, worker)[source]¶
Objective function to determine which worker should get the key
Minimize expected start time. If there is a tie, then break it with data storage.
workers_list(workers)[source]¶
List of qualifying workers
Takes a list of worker addresses or hostnames. Returns a list of all worker addresses that match
workers_to_close(memory_ratio=2)[source]¶
Find workers that we can close with low cost
This returns a list of workers that are good candidates to retire. These workers are idle (not running anything) and are storing relatively little data relative to their peers. If all workers are idle then we still maintain enough workers to have enough RAM to store our data, with a comfortable buffer.
This is for use with systems like
distributed.deploy.adaptive.
distributed.scheduler.
decide_worker(dependencies, occupancy, who_has, valid_workers, loose_restrictions, objective, key)[source]¶
Decide which worker should take task
>>> dependencies = {'c': {'b'}, 'b': {'a'}}
>>> occupancy = {'alice:8000': 0, 'bob:8000': 0}
>>> who_has = {'a': {'alice:8000'}}
>>> nbytes = {'a': 100}
>>> ncores = {'alice:8000': 1, 'bob:8000': 1}
>>> valid_workers = True
>>> loose_restrictions = set()
We choose the worker that has the data on which ‘b’ depends (alice has ‘a’)
>>> decide_worker(dependencies, occupancy, who_has, has_what,
...               valid_workers, loose_restrictions, nbytes, ncores, 'b')
'alice:8000'
If both Alice and Bob have dependencies then we choose the less-busy worker
>>> who_has = {'a': {'alice:8000', 'bob:8000'}}
>>> has_what = {'alice:8000': {'a'}, 'bob:8000': {'a'}}
>>> decide_worker(dependencies, who_has, has_what,
...               valid_workers, loose_restrictions, nbytes, ncores, 'b')
'bob:8000'
Optionally provide valid workers of where jobs are allowed to occur
>>> valid_workers = {'alice:8000', 'charlie:8000'}
>>> decide_worker(dependencies, who_has, has_what,
...               valid_workers, loose_restrictions, nbytes, ncores, 'b')
'alice:8000'
If the task requires data communication, then we choose to minimize the number of bytes sent between workers. This takes precedence over worker occupancy.
>>> dependencies = {'c': {'a', 'b'}}
>>> who_has = {'a': {'alice:8000'}, 'b': {'bob:8000'}}
>>> has_what = {'alice:8000': {'a'}, 'bob:8000': {'b'}}
>>> nbytes = {'a': 1, 'b': 1000}
>>> decide_worker(dependencies, who_has, has_what,
...               {}, set(), nbytes, ncores, 'c')
'bob:8000'
Uploading and Importing Lists
Kickbox offers several simple ways to import your email list data into Kickbox for verification.
Uploading a List
Kickbox can verify email lists uploaded as Comma Separated Value files (.csv), Microsoft Excel files (.xls, .xlsx), or OpenDocument Spreadsheet files (.ods).
To ensure Kickbox can successfully verify your file, it should meet the following criteria:
- The list should contain one (and only one) email address in each row.
- The first row in your list can (optionally) contain field names.
- Email addresses should appear in the same column in each row.
- Your list should have no more than 1 million addresses.
It's perfectly acceptable if your list contains additional columns such as names and addresses, but we recommend a list not exceed 25 columns. Kickbox will preserve this data and simply append its verification results to the end of each row.
To upload your list, log in to Kickbox, navigate to the Verify page, and click on Add List in the top right. Drag and drop your email list file, or click Select Lists to select from your hard drive. You can also import files from services such as Dropbox, Google Drive, etc. by selecting the From Cloud option.
Importing from your ESP
If you use AWeber, Campaign Monitor, Constant Contact, Drip, Eloqua, MailChimp, MailUp, or VerticalResponse, Kickbox can import your mailing lists for verification.
Once logged in to Kickbox, navigate to the Verify page and click on Add List in the top right corner. Click on Add Integration and select your ESP. You will be prompted to enter your username and password for your provider; this log in information is sent directly to your email service provider and is never shared with Kickbox. Your list will be imported into Kickbox, and all that's left is to click Start List.
What's next:
- Downloading and Exporting Results
- Terminology
Custom charts In previous releases, you could configure custom reports. Custom chart creation is now deprecated. Multiple data set functionality is now found in the report designer. See Using multiple datasets in a report. Data collection based on scripts or formulas is now a function of Performance Analytics. See Performance Analytics data collection and cleanup. | https://docs.servicenow.com/bundle/jakarta-performance-analytics-and-reporting/page/use/reporting/reference/r_CustomChartsPlugin.html | 2018-01-16T17:47:04 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.servicenow.com |
Prerequisites
Complete the Getting Started Setting up a Local Repository procedure.
To finish setting up your repository, complete the following:
Steps
Obtain the tarball for the repository you would like to create.
Copy the repository tarballs to the web server directory and untar.
Browse to the web server directory you created.
cd /var/www/html/
Untar the repository tarballs to the following locations: where <web.server>, <web.server.directory>, <OS>, <version>, and <latest.version> represent the name, home directory, operating system type, version, and most recent release version, respectively.
Untar Locations for a Local Repository - No Internet Access
Confirm you can browse to the newly created local repositories.
URLs for a Local Repository - No Internet Access
where <web.server> = FQDN of the web server host.
Optional: If you have multiple repositories configured in your environment, deploy the following plug-in on all the nodes in your cluster.
Install the plug-in.
yum install yum-plugin-priorities
Edit the
/etc/yum/pluginconf.d/priorities.conf file to add the following:
[main]
enabled=1
gpgcheck=0
More Information
Obtaining the Repositories | https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-installation-ppc/content/setting_up_a_local_repository_with_no_internet_access.html | 2018-01-16T17:20:31 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.hortonworks.com |
Knowledge management in non-scoped HR
An important benefit of the HR system is the Human Resources knowledge base, which can contain policies, benefits, holiday schedules, and so on, in the non-scoped version of HR. Organizations may have different HR policies and benefits packages based on location or business unit. For example, holiday schedules and medical benefits may vary by country, or different policies may be in effect.
About this task
When you open an active HR case record, you can preview an article and, if it is relevant, attach it to the case. You can also scroll to the Attached Knowledge related list to add or view knowledge articles related to that record.
Procedure
Navigate to HR - Case Management > Case Management and open one of the modules, such as Assigned to me or Open. Open the HR case. When you attach an article, its text is copied to Additional comments. After you save the case, it appears also in the Attached Knowledge related list.
Manage a stack
Use the Stack Details page to view details and status for a stack and to perform life cycle operations on a stack.
Before you begin
Role required: sn_cmp.cloud_service_user
Procedure
Use one of the following methods to open the Stack Details page:
- Click Stacks on the toolbar. Use the Search text box to search for the stack.
- On the home page, click a stack in the Usage, Recent Stacks, or Stack Health section.
- On the home page, click Manage Stacks. By default, all stacks in all service categories are listed. In the Stacks list, select a stack category, and then click a stack Name.
The Stack Details page displays the following information:
1. Breadcrumb navigation.
2. Name of the stack. Click the dependency button to view the dependency map for the stack.
3. Status and detailed information.
4. List of all resources in the stack. Click a resource to view details in the Properties section. Color codes indicate status of the stack: Green: On/Active. Yellow: Turned off or Processing. Red: Terminated or Error.
5. Some properties are set in the stack request form and others are set by policy.
6. System-generated tags that identify data for reporting in the Cloud Management user dashboards.
7. Activities associated with the stack. Click a tab to view: change requests that are associated with the stack (see Track a change request); incidents that were raised for the stack (see Submit an incident for a stack and Track an incident); cloud events that are associated with the stack (see View cloud events).
8. Perform an operation on the stack. See Perform a life cycle operation on a stack or resource.
9. Text box used to search the Cloud User Portal for stacks, resources based on types, stack requests, change requests, incidents, keys, and catalog items.
Table 1. Stack properties
- Name: Name that you specified when requesting the stack.
- Service Catalog item: Service Catalog item that was used when requesting the stack.
- Stack Status: Current status of the stack: Active (provisioned stacks), Terminated (deprovisioned stacks), Unmanaged (discovered stacks), Error (errored stacks), Processing (stacks being processed).
- Owner Group: User Group that was selected while requesting the stack.
- Cloud Account: Cloud account that the stack is associated with.
- Owned by: Username of the requester.
- Created/Updated: Date and time that the stack was requested.
- Terminated on: Date and time that the stack was terminated (unprovisioned).
Package fileutil
Package fileutil implements utility functions related to files and paths.
Package files
dir_unix.go fileutil.go lock.go lock_flock.go lock_unix.go preallocate.go preallocate_darwin.go purge.go sync_darwin.go
Constants
const (
    // PrivateFileMode grants owner to read/write a file.
    PrivateFileMode = 0600
    // PrivateDirMode grants owner to make/remove files inside the directory.
    PrivateDirMode = 0700
)
Variables
var (
    ErrLocked = errors.New("fileutil: file already locked")
)
func CreateDirAll ¶
func CreateDirAll(dir string) error
CreateDirAll is similar to TouchDirAll but returns error if the deepest directory was not empty.
func Exist ¶
func Exist(name string) bool
func Fdatasync ¶
func Fdatasync(f *os.File) error
Fdatasync on darwin platform invokes fcntl(F_FULLFSYNC) for actual persistence on physical drive media.
func Fsync ¶
func Fsync(f *os.File) error
Fsync on HFS/OSX flushes the data on to the physical drive but the drive may not write it to the persistent media for quite sometime and it may be written in out-of-order sequence. Using F_FULLFSYNC ensures that the physical drive's buffer will also get flushed to the media.
func IsDirWriteable ¶
func IsDirWriteable(dir string) error
IsDirWriteable checks if dir is writable by writing and removing a file to dir. It returns nil if dir is writable.
func OpenDir ¶
func OpenDir(path string) (*os.File, error)
OpenDir opens a directory for syncing.
func Preallocate ¶
func Preallocate(f *os.File, sizeInBytes int64, extendFile bool) error
Preallocate tries to allocate the space for given file. This operation is only supported on linux by a few filesystems (btrfs, ext4, etc.). If the operation is unsupported, no error will be returned. Otherwise, the error encountered will be returned.
func PurgeFile ¶
func PurgeFile(dirname string, suffix string, max uint, interval time.Duration, stop <-chan struct{}) <-chan error
func ReadDir ¶
func ReadDir(dirpath string) ([]string, error)
ReadDir returns the filenames in the given directory in sorted order.
func TouchDirAll ¶
func TouchDirAll(dir string) error
TouchDirAll is similar to os.MkdirAll. It creates directories with 0700 permission if any directory does not exists. TouchDirAll also ensures the given directory is writable.
func ZeroToEnd ¶
func ZeroToEnd(f *os.File) error
ZeroToEnd zeros a file starting from SEEK_CUR to its SEEK_END. May temporarily shorten the length of the file.
type LockedFile ¶
type LockedFile struct{ *os.File }
func LockFile ¶
func LockFile(path string, flag int, perm os.FileMode) (*LockedFile, error)
func TryLockFile ¶
func TryLockFile(path string, flag int, perm os.FileMode) (*LockedFile, error) | http://docs.activestate.com/activego/1.8/pkg/github.com/coreos/etcd/pkg/fileutil/ | 2018-02-18T01:34:40 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.activestate.com |
Advanced Restore/Recover/Retrieve Options (General)
Use this dialog box to access additional restore/recover/retrieve options. Note that all the options described in this help may not be available and only the options displayed in the dialog box are applicable to the component installed on the client...
Skip Errors and Continue
For index-based agents, this advanced restore/recover/retrieve option enables a restore/recover/retrieve job to continue despite media errors. This option also provides an output file that lists the full path names of the files that failed to restore/recover/retrieve.
Use Exact Index
Specifies whether to use the index associated with the data protection operation performed at a specific time or the latest index. By default, the index associated with the most recent data protection operation when you Browse the data is used.
Recover all protected mails
For the Exchange Mailbox Archiver Agent, specifies whether to recover all messages that were backed up or archived in the selected mailboxes or folders from the latest data or point-in-time through the oldest available index.
Disaster recovery/Media recovery (to another machine)
For Lotus Notes Database, specifies whether to restore and replay transactions when the active extent of the transaction log is lost. This option is also used when you perform a cross-machine restore that includes transaction logs.
Do not select this option unless you are performing either disaster recovery or a cross-machine restore to a new partition.
Impersonate (Windows) User
Specifies whether to submit Windows logon information of another user for the current restore/recover operation.
Restoring Options
Select one of the options to restore archived messages
Restore as Data
This option is selected by default and it allows you to restore the stubbed messages as data.
Restore as Stubs
Select this option if stubbing is enabled for the archived data, and you wish to restore a backed up stub as stub.
- Leave message body in the stub
Select this checkbox if you wish the message content to be displayed in the restored stub.
- Add recall link to stub body
- Select this checkbox if you wish you the recall link to be embedded in the restored stub.
Restore as Backed Up
Backups may contain a combination of both data and stubs. Select this option to restore the data as it was backed up. | http://docs.snapprotect.com/netapp/v11/article?p=en-us/universl/restore/file_system/advrest.htm | 2018-02-18T01:06:13 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.snapprotect.com |
Welcome to Castor Documentation
For more information about Castor and to create an account, please visit
Getting Started
Everything you need to know about Castor, from subscribing to presenting your Dashboard can be found in the Getting Started with Castor section of this guide.
Custom Data Sources
Custom Data Sources unleash the power of Castor by allowing you to aggregate multiple data sources in a single dashboard. Learn more about Data Security, Data Styling and everything about external Data in Castor in the Data Sources section of this documentation.
Webhooks
Webhooks are a great way of sending data directly from your server, app or service directly to your Castor dashboard. To learn more about the Webhooks specification Castor uses, read the Webhooks section of this guide. | https://docs.getcastor.com/ | 2018-02-18T00:45:17 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.getcastor.com |
Embed
If you do not see this category listed, you may need to expand the number of rows by using the "Show rows" drop-down menu in the lower right-hand corner of the report.
I want to add this report to my dashboard
Documentation for Shedbuilt GNU/Linux is provided under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). Except where otherwise noted, source code that appears in the documentation may be extracted under the MIT License (MIT).
All Shedbuilt packaging files are provided under the MIT License (MIT). Source code and binaries referenced by or included in Shedbuilt System Images and packaging may be governed by other licenses. Please review individuals packages in our repository for specific licensing terms.
Shedbuilt GNU/Linux is committed to strict observation of licensing terms and the defense of your freedom to use, inspect, modify and share community-maintained software. | https://docs.shedbuilt.net/license | 2021-02-25T04:19:52 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.shedbuilt.net |
Introduction
HelloID's Servicedesk allows you to perform tasks within your organization, via interactive forms.
The Unlock Account form lets you unlock locked Microsoft Active Directory accounts.
You will only have this form on your Servicedesk tab if your IT department has given you the necessary permissions.
Unlock account
- Go to the Servicedesk.
- Select the Unlock account tile.
- Select a user from the drop down menu. If you don't see the user you want to unlock, contact your IT department.
- Select the Unlock account button to confirm. The user's account has now been unlocked. | https://docs.helloid.com/hc/en-us/articles/360022223534-Unlock-Account-form | 2021-02-25T04:55:43 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.helloid.com |
- Unpacking the Imply package
tar -xzf imply-2021.01.1.tar.gz cd imply-2021.01.1
Edit the supervisor config to only start the Pivot (
imply-uiservice).user for this, you can create any API user you want.
All done, you can now use your on-prem Pivot with your cloud Druid cluster.
Crate wgpu_types
Version 0.7.0
See all wgpu_types's items
This library describes the API surface of WebGPU that is agnostic of the backend.
Information about an adapter.
Represents the backends that wgpu will use.
Describes a single binding inside a bind group.
Describes the blend state of a pipeline.
View of a buffer which can be used to copy to/from a texture.
Describes the biasing setting for the depth target.
Describes the depth/stencil state in a render pipeline.
Describes a [Device].
Argument buffer layout for dispatch_indirect commands.
Argument buffer layout for draw_indexed_indirect commands.
Argument buffer layout for draw_indirect commands.
Extent of a texture related operation.
Features that are not guaranteed to be supported.
Represents the sets of limits an adapter/device supports.
Describes the multi-sampling state of a render pipeline.
Origin of a copy to/from a texture.
Flags for which pipeline data should be recorded.
Describes the state of primitive assembly and rasterization in a render pipeline.
A range of push constant memory to pass to a shader stage.
Describes how to create a QuerySet.
Describes a [RenderBundle].
Options for requesting adapter.
Flags controlling the shader processing.
Describes the shader stages that a binding will be visible from.
Describes stencil state in a render pipeline.
State of the stencil operation (fixed-pipeline stage).
Describes a [SwapChain].
View of a texture which can be used to copy to/from a buffer/texture.
Layout of a texture in a buffer's memory.
Describes a [Texture].
Feature flags for a texture format.
Features supported by a given texture format
Information about a texture format.
Different ways that you can use a texture.
Vertex inputs (attributes) to shaders.
How edges should be handled in texture addressing.
Backends supported by wgpu.
Specific type of a binding.
Alpha blend factor.
Alpha blend operation.
Specific type of a buffer binding.
Comparison function used for depth and stencil operations.
Type of faces to be culled.
Supported physical device types.
Texel mixing mode when sampling between texels.
Winding order which classifies the "front" face.
Format of indices used with pipeline.
Rate that determines when vertex data is advanced.
Operation to perform on the stencil value.
Specific type of a sample in a texture binding.
Maximum queries in a query set
Size of a single piece of query data.
Vertex buffer strides have to be aligned to this number.
Integral type used for buffer offsets.
Integral type used for buffer slice sizes.
Integral type used for dynamic bind group offsets.
Integral type used for binding locations in shaders. | https://docs.rs/wgpu-types/0.7.0/wgpu_types/ | 2021-02-25T04:57:40 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.rs |
Congratulations on your newly-minted Shedbuilt setup! Right away you have access to a trove of powerful, command-line utilities that would be the envy of any mid-1990s Linux user. That said, I expect that you have much more in mind for your little ARM SBC so here are some tips to get your project off on the right foot.
You can build and install software packaged by the community using the included
shedmake utility. The quickest route to obtaining pre-packaged software is to:
1. Browse a category repository (utils, games, etc.) in /var/shedmake/repos/remote and select an interesting package.
2. Use shedmake to install the package.
For example, let's say you'd like to install the Zork interpreter
frotz which is available in the Shedbuilt Games repository installed to
/var/shedmake/repos/remote/games. Simply pass the name of the package to the
install action and
shedmake will handle the rest:
sudo shedmake install frotz
When shedmake is instructed to install a package that has not yet been compiled, it will attempt to fetch a compatible binary. If none is advertised, it will instead fetch and build the source. Some packages have 'dependencies' for compilation and/or installation, requiring you to install other packages first. If required packages have not been installed, shedmake will alert you and the install will fail. To have shedmake install dependencies automatically, supply the option --install-dependencies (or -i) to the install action.
Check out the Shedmake Reference for more details concerning its capabilities and usage or peruse our Fun and Games section for more suggested packages.
Shedbuilt GNU/Linux is designed to facilitate the creation of new software packages. Check out our Packaging documentation to learn how to use
shedmake to build software and distribute it to others in the Shedbuilt community!
Each Shedbuilt system image includes the all tools you need to compile a complete GNU/Linux system from scratch on your device. If you're interested in how operating systems are put together, or want to experiment with creating your own remix system images, we encourage you to explore our extensive Bootstrapping docs. | https://docs.shedbuilt.net/installation/next-steps | 2021-02-25T05:33:55 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.shedbuilt.net |
Wait Step
Overview
The Wait Step is used to delay the next step execution with the specified time period.
Usages
- Create a polling loop while waiting for the Application Under Test to perform a long operation, usually in conjunction with a Goto Step.
Onboarding section
Includes important steps for an easy start with the module:
- Assign Account — opens the page for adding a new eBay Account
- Ebay Marketplace — allows you to download/update information about the eBay marketplace
- Selling Profiles — easy access to create new Selling Profile
- Selling List — easy access to new Selling List creation
Product image - this is the main product image and will be used by Shopify in catalog and related products.
Recommended image sizes: minimum 1000 x 1000px, maximum 4000x4000px
Scene - this will be used as a base layer or background for your build; you can pick a color or use an image.
Recommended image size: minimum 1600x900 or 16:9 aspect ratio; recommended format: PNG.
Thumbnail image - this is an option icon; you can pick a color or use an image.
Recommended image size: 400x400 or 1:1 aspect ratio; recommended format: PNG.
Layer image - layer image will be placed over your scene.
Layer image should be exactly the same size as your scene and the same size as your other layers.
Here you can download all image assets for bicycle tutorial - download | https://docs.appclay.com/product-configurator/image-guide | 2021-02-25T04:16:42 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.appclay.com |
Integration with the Helm Operator¶
You can release charts to your cluster via "GitOps", by combining Flux and the Helm Operator.
The essential mechanism is this: the declaration of a Helm release is represented by a custom resource, specifying the chart and its values. If you put such a resource in your git repo as a file, Flux will apply it to the cluster, and once it's in the cluster, the Helm Operator will make sure the release exists by installing or upgrading it.
Upgrading images in a
HelmRelease using Flux¶
If the chart you're using in a
HelmRelease lets you specify the
particular images to run, you will usually be able to update them with
Flux, the same way you can with Deployments and so on.
Note
For automation to work, the repository and tag should be
defined (either as a whole string, or under separate keys), as Flux
determines image updates based on what it reads in the
.spec.values
of the
HelmRelease.
Automated image detection¶
Flux interprets certain commonly used structures in the
values
section of a
HelmRelease as referring to images, at least an
image key needs to be specified. The following are understood
(showing just the
values section):
values: image: repo/image:version
values: image: repo/image tag: version
values: registry: docker.io image: repo/image tag: version
values: image: repository: repo/image tag: version
values: image: registry: docker.io repository: repo/image tag: version
These can appear at the top level (immediately under
values:), or in
a subsection (under a key, itself under
values:). Other values
may be mixed in arbitrarily. Here's an example of a values section
that specifies two images:
values: persistent: true # image that will be labeled "chart-image" image: repo/image1:version subsystem: # image that will be labeled "subsystem" image: repository: repo/image2 tag: version imagePullPolicy: IfNotPresent port: 4040
Annotations¶
If Flux does not automatically detect your image, it is possible to map the image paths by alias with YAML dot notation annotations. An alias overrules a detected image.
The following annotations are available, and
repository.fluxcd.io
is required for any of these to take effect.
Note
Note: Glob patterns following
glob: are sensitive to spaces
The following example
HelmRelease specifies two images:
metadata:
  annotations:
    # image and tag
    repository.fluxcd.io/app: appImage
    tag.fluxcd.io/app: appTag
    filter.fluxcd.io/app: 'glob:*'
    # nested image with registry and tag
    registry.fluxcd.io/submarine: sub.marinesystem.reg
    repository.fluxcd.io/submarine: sub.marinesystem.img
    tag.fluxcd.io/submarine: sub.marinesystem.tag
spec:
  values:
    # image and tag
    appImage: repo/image1
    appTag: version
    sub:
      marinesystem:
        # nested image with registry and tag
        reg: domain.com
        img: repo/image2
        tag: version
Filters¶
You can use the same annotations in
the
HelmRelease as you would for a Deployment or other workload,
to control updates and automation. For the purpose of specifying
filters, the container name is either
chart-image (if at the top
level), the key under which the image is given (e.g.,
"subsystem"
from the example above), or the alias you are using in your
annotations.
Top level image example:
kind: HelmRelease
metadata:
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: semver:~4.0
spec:
  values:
    image:
      repository: bitnami/mongodb
      tag: 4.0.3
Sub-section images example:
kind: HelmRelease
metadata:
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/prometheus: semver:~2.3
    filter.fluxcd.io/alertmanager: glob:v0.15.*
    filter.fluxcd.io/nats: regex:^0.6.*
spec:
  values:
    prometheus:
      image: prom/prometheus:v2.3.1
    alertmanager:
      image: prom/alertmanager:v0.15.0
    nats:
      image:
        repository: nats-streaming
        tag: 0.6.0
Content-Type: "text/html"
Iris can be used to parse and render markdown contents as HTML to the web client.
Markdown(contents []byte, options ...Markdown) (int, error)
It accepts the raw []byte contents and optionally a Markdown options structure which contains a single Sanitize bool field. If Sanitize is set to true then it takes a []byte that contains an HTML fragment or document and applies the UGCPolicy. The UGCPolicy is a policy aimed at user generated content that is a result of HTML WYSIWYG tools and Markdown conversions. This is expected to be a fairly rich document where as much markup as possible should be retained. Markdown permits raw HTML, so we are basically providing a policy to sanitise HTML5 documents safely but with the least intrusion on the formatting expectations of the user.
func handler(ctx iris.Context) {
    response := []byte(`# Hello Dynamic Markdown -- Iris`)
    ctx.Markdown(response)
}
Result
<h1>Hello Dynamic Markdown – Iris</h1>
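A sanitised variant might look like the sketch below. It assumes the Markdown options struct is exported from the iris package; check the exact type path for the Iris version you use.
func sanitizedHandler(ctx iris.Context) {
    response := []byte(`# Hello Dynamic Markdown -- Iris`)
    // Apply the UGCPolicy to the generated HTML before it is written to the client.
    ctx.Markdown(response, iris.Markdown{Sanitize: true})
}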
3.1.4.6.2 Dismissing Recurring Calendar Objects
To dismiss a reminder for a Recurring Calendar object if there is a future instance (including exceptions) with a pending reminder (in other words, the reminder is not disabled individually on all future instances), the client MUST set the value of the PidLidReminderSignalTime property (section 2.2.1.2) to the start time of that instance minus the value of the PidLidReminderDelta property (section 2.2.1.3).
If no more instances (including exceptions) have a pending reminder, it is recommended that the client avoid setting the PidLidReminderSet property (section 2.2.1.1) to FALSE, and the client MUST set the PidLidReminderSignalTime property to the PtypTime ([MS-OXCDATA] section 2.11.1) value Low:0xA3DD4000 High:0x0CB34557 (4501/01/01 00:00:00.000).
It is recommended that the client avoid setting the PidLidReminderSet property to FALSE when dismissing reminders for Recurring Calendar objects, even when no more instances require a reminder to signal. This is to preserve the user's intent to signal reminders, in case the recurrence is extended at a later date, to include instances in the future.
Dismissing a reminder for a Recurring Calendar object never causes an instance to become an exception.
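As a rough illustration of the arithmetic described above, the signal time is simply the start of the next instance that still has a pending reminder minus the reminder lead time. The helper below is a hypothetical sketch; the property names refer to the MAPI properties defined in this specification, but the accessor API is not a real library.
// Hypothetical sketch; not a real MAPI or Exchange API.
static DateTime ComputeReminderSignalTime(DateTime nextInstanceStart, int reminderDeltaMinutes)
{
    // PidLidReminderSignalTime = start of the next instance with a pending
    // reminder, minus PidLidReminderDelta (expressed in minutes).
    return nextInstanceStart.AddMinutes(-reminderDeltaMinutes);
}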
1.6 Applicability Statement
This protocol is designed for flowing user and group data across the user profile store and external directory services (DS). It is applicable when the protocol client is acting as a broker between directory services and the user profile store.
This protocol was designed with the intention of supporting a scale point of approximately:
2 million users
an average of 100 member groups per user profile, up to a total of 1 million member groups
10 million group memberships
This protocol does not specify how the data should be stored in the external directory services, how the protocol client should connect to the external directory services, or what synchronization logic should be used by the protocol client when flowing data between the user profile store and the external directory services.
The Job details page enables you to view status and other information about specific protection job tasks that are running, that are queued, or that have completed. You can use this information to monitor protection job progress and to troubleshoot job failures.
The command buttons enable you to perform the following tasks:
The Job tasks list displays in a table all the tasks associated with a specific job and the properties related to each task.
This preference file is also applicable to VVol datastore creation. The 9.6 virtual appliance for VSC, VASA Provider, and SRA supports the following protocols:
VSC can create a datastore on either an NFS volume or a LUN:
If a storage capability profile is not specified during provisioning, you can later use the Storage Mapping page to map a datastore to a storage capability profile.
Learn the different ways to use the comment widget, from submitting and editing comments, interacting with other users, blocking other users, etc.
Here for the first time? See how Vuukle can help to increase your page views, revenue and understand your users.
How our products are priced and our own terms and privacy policies.
Learn how to create and set up your profile and manage your comments.
Git.
Simply navigate to the Admin > Extensions page in your instance of BuildMaster and click on the GitLab extension to install it.
If your instance doesn't have internet access, you can manually install the GitLab extension after downloading the GitLab Extension Package.
To connect to a self-managed instance of GitLab Enterprise, make sure the API URL of the credentials is configured to the API URL for your GitLab Enterprise installation.
SSH connections are not supported using the built-in GitLab integration.
Business Policy information
Business policies are the payment, shipping and return details you specify for buyers in your listings.
Payment policies are where you specify how buyers can pay you - through PayPal, for instance.
Shipping policies are where you specify your dispatch time, the delivery services you offer, and P&P costs.
Returns policies are where you specify whether or not you accept returns. If you do accept returns, include how long a buyer has to return an item and who pays for return postage.
The PrestaBay module supports selecting a Business Policy inside a Selling Profile. Editing Business Policies is currently not supported by the module.
In order to create a new Business Policy or modify an existing one, please open your eBay account and navigate to the Business Policy section.
In a Selling Profile, for the "Payment", "Shipping" and "Return Policy" sections you can select one of the already created Business Policies.
If you decide to use a Business Policy for one of the sections, please note that you need to use it for the other sections as well.
eBay does not allow a mixed selection. For example, if you select a Business Policy (BP) in the Payment section, you must select a BP in the Shipping section and also in the Return Policy section. If you try to mix them, eBay can respond with an error like "Internal Error to Application".
Download Imply 2021.02 from imply.io/get-started and unpack the release archive.
tar -xzf imply-2021.02.tar.gz
cd imply-2021.02
Next, start up Imply, which includes Druid, Pivot, and ZooKeeper. You can use the included supervise program to start everything with a single command:
bin/supervise -c conf/supervise/quickstart.conf
You should see a log message printed out for each service that starts up. You can view detailed logs
for any service by looking in the
var/sv/ directory using another terminal.
Later on, if you'd like to stop the services, CTRL-C the supervise program in your terminal. If you
want a clean start after stopping the services, remove the
var/ directory.
Congratulations, now it's time to load data!
6. Configure the time column parsing. Druid uses a timestamp column to partition your data. This page allows you to
identify which column should be used as the primary time column and how the timestamp is formatted. In this case, the
loader should have automatically detected the
timestamp column and chosen the
iso format.
Click Next.
In the dialog that comes up, make sure that
wikipedia is the selected Source and that Auto-fill dimensions and measures is selected.
Continue by clicking Next: Create data cube.
From here you can configure the various aspects of your data cube, including defining and customizing the cube's dimensions and measures. The data cube creation flow can intelligently inspect the columns in your data source and determine possible dimensions and measures automatically. We enabled this when we selected Auto-fill dimensions and measures on the previous screen and you can see that the cube's settings have been largely pre-populated. In our case, the suggestions are appropriate so we can continue by clicking on the Save button in the top-right corner.
Pivot's data cubes are highly configurable and give you the flexibility to represent your dataset, as well as derived and custom columns, in many different ways. The documentation on dimensions and measures is a good starting point for learning how to configure a data cube.
Here, you can explore a dataset by filtering and splitting it across any dimension. For each filtered split of your data, you will see the aggregate value of your selected measures. For example, on the wikipedia dataset, you can see the most frequently edited pages by splitting on Page: drag Page to the Show bar. You can also query the data directly in the SQL section. If you are in the visualization view, you can navigate to this screen by selecting SQL from the hamburger menu in the top-left corner of the page. Once there, try running the following query, which will return the most edited Wikipedia pages:
SELECT page, COUNT(*) AS Edits
FROM wikipedia
WHERE "__time" BETWEEN TIMESTAMP '2016-06-27 00:00:00' AND TIMESTAMP '2016-06-28 00:00:00'
GROUP BY page
ORDER BY Edits DESC
LIMIT 5
You should see the five most-edited pages returned as the result.
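If you prefer to issue the query programmatically rather than through Pivot, Druid also exposes a SQL HTTP API. The sketch below assumes a default local Druid router on port 8888; the host and port may differ in your setup.
curl -X POST -H 'Content-Type: application/json' \
  http://localhost:8888/druid/v2/sql \
  -d '{"query": "SELECT page, COUNT(*) AS Edits FROM wikipedia GROUP BY page ORDER BY Edits DESC LIMIT 5"}'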
Version: latest
Create a cluster
After login you'll see the console overview page. Since you haven't created a cluster, the overview will be empty.
In the upper right area click Create New Cluster. After you've assigned a name and created the cluster, a new entry appears in the overview:
For this Getting Started Tutorial, close the dialog box that pops up immediately after creating a new cluster.
The cluster is now being set up. During this phase, its state is Creating. After one or two minutes the cluster is ready for use and changes its state to Healthy:
After the cluster has been created, you can jump into the cluster detail page by clicking on the cluster name.
Workspaces¶
This section describes how to view and configure workspaces. Analogous to a namespace, a workspace is a container which organizes other items. In GeoServer, a workspace is often used to group similar layers together. Layers may be referred to by their workspace name, colon, layer name (for example
topp:states). Two different layers can have the same name as long as they belong to different workspaces (for example
sf:states and
topp:states).
Edit a Workspace¶
To view or edit a workspace, click the workspace name. A workspace configuration page will be displayed.
A workspace is defined by a name and a Namespace URI (Uniform Resource Identifier). The workspace name is limited to ten characters and may not contain spaces. A URI is similar to a URL, except URIs do not need to point to an actual location on the web, and only need to be a unique identifier. For a Workspace URI, we recommend using a URL associated with your project, with perhaps a different trailing identifier. For example, is the URI for the “topp” workspace.
The Security tab allows to set data access rules at workspace level.
To create/edit workspace's data access rules simply check/uncheck checkboxes according to the desired role. The Grant access to any role checkbox grants every role all access modes.
Root Directory for REST PathMapper¶
This parameter is used by the RESTful API as the Root Directory for uploaded files, following the structure:
${rootDirectory}/workspace/store[/<file>]
Note
This parameter is visible only when the Enabled parameter of the Settings section is checked.
Add a Workspace¶
The buttons for adding and removing a workspace can be found at the top of the Workspaces view page.
To add a workspace, select the Add new workspace button. You will be prompted to enter the workspace name and URI.
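If you prefer scripting this step, the same thing can typically be done through GeoServer's REST API. In the sketch below the credentials, host, and workspace name are placeholders.
curl -u admin:geoserver -X POST -H "Content-Type: application/json" \
  -d '{"workspace": {"name": "myworkspace"}}' \
  http://localhost:8080/geoserver/rest/workspaces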
Remove a Workspace¶
To remove a workspace, select it by clicking the checkbox next to the workspace. Multiple workspaces can be selected, or all can be selected by clicking the checkbox in the header. Click the Remove selected workspaces(s) button. You will be asked to confirm or cancel the removal. Clicking OK removes the selected workspace(s).
Isolated Workspaces¶
Isolated workspaces content is only visible and queryable in the context of a virtual service bound to the isolated workspace. This means that isolated workspaces content will not show up in global capabilities documents and global services cannot query isolated workspaces contents. It is worth mentioning that those restrictions don’t apply to the REST API.
A workspace can be made isolated by checking the Isolated Workspace checkbox when creating or editing a workspace.
An isolated workspace will be able to reuse a namespace already used by another workspace, but its resources (layers, styles, etc.) can only be retrieved when using that workspace's virtual services and will only show up in those virtual services' capabilities documents.
It is only possible to create two or more workspaces with the same namespace in GeoServer if only one of them is non isolated, i.e. isolated workspaces have no restrictions in namespaces usage but two non isolated workspaces can’t use the same namespace.
The following situation will be valid:
-
Prefix: st1 Namespace: Isolated: false
-
Prefix: st2 Namespace: Isolated: true
-
Prefix: st3 Namespace: Isolated: true
But not the following one:
-
Prefix: st1 Namespace: Isolated: false
-
Prefix: st2 Namespace: Isolated: false
-
Prefix: st3 Namespace: Isolated: true
At most only one non isolated workspace can use a certain namespace.
Consider the following image which shows two workspaces (st1 and st2) that use the same namespace () and several layers contained by them:
In the example above st2 is the isolated workspace. Consider the following WFS GetFeature requests:
-
-
-
-
-
-
The first request targets the WFS global service and requests layer2; this request will use layer2 contained by workspace st1. The second request targets the st2 workspace WFS virtual service, so layer2 belonging to workspace st2 will be used. Requests three and four will use layer2 belonging to workspaces st1 and st2, respectively. The last two requests will fail, saying that the feature type was not found, because isolated workspaces content is not visible globally.
The rule of thumb is that resources (layers, styles, etc …) belonging to an isolated workspace can only be retrieved when using that workspaces virtual services and will only show up in those virtual services capabilities documents. | https://docs.geoserver.org/latest/en/user/data/webadmin/workspaces.html | 2021-02-25T04:49:46 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['../../_images/data_workspaces_security_edit.png',
'../../_images/data_workspaces_security_edit.png'], dtype=object)] | docs.geoserver.org |
Make sure you have finished the hardware video part 3 before you attempt to set up your wallet.
The Lntxbot is a "Custodial Bitcoin Wallet". This means you are trusting those who run the software behind this wallet with your satoshis and you are not in control of your private keys. Be careful and don't keep too many satoshis on there.
Install the mobile application "Telegram" on your phone and after you have created your account, go into the search bar and type
Lntxbot or click here.
You should now be able to talk to the Lntxbot. If you type
/help you will get a list of commands that are available.
You can inspect all the different commands that are available for you on this mobile Lightning Wallet.
In order for this Lightning Wallet to work with our ATM, we need to fund it and have some satoshis on there. Type
/invoice <amount> into the message box and replace
<amount> by a certain amount of Satoshis you want to fund it with.
Now, you will have to pay this invoice with another wallet in order for your Lntxbot to receive them and later be available at your ATM.
After you payed this invoice check your balance with
/balance to make sure it all worked out.
We will now connect the Lntxbot.
Next, we will generate a QR code with our Lntxbot credentials. Go to the message box in Lntxbot and type
/lightningatm. This will generate a QR code with the credentials that we need.
We'll now have to put our ATM into the "credentials scanning" mode. This can be done by pushing the button 3 times.
After you pushed the button three times your display should say
Please scan your wallet credentials. Now take your mobile phone with the Lntxbot and show the previously generated QR code with your credentials to the camera.
It will now scan your credentials and safe it to the configuration file of the ATM. If you've been successful, your screen will say
Success!! and show you the current balance of your Lntxbot.
If you every wanted to renew your API credentials just send the command
/api_refresh to the Lntxbot. This will revoke the current credentials and replace them with new ones.
Let's make a first proper transaction now!
Insert some coins into the coin acceptor of the ATM and see how the balance increases on the display (give the ATM some time between coins for coin recognition).
When you've inserted enough, press the button once (your balance on the Lntxbot needs to be big enough to cover the requested satoshis).
The ATM will now create a QR code and display it on the screen with a note that says
Scan to receive. Take your mobile Lightning Wallet and scan this QR code to receive the satoshis.
Join the Telegram group here:
How To
Q: How can I alter the height of the RadRibbonBar control?
A: Since we have different items placed in the RadRibbonBar control (for instance Quick access Toolbar, ContextualTabs, Groups, Buttons whose image size may vary from small to large as described here) the current design of the RadRibbonBar control does not allow altering of its height. Another reason for that is a very possible distortion of the control's layout. For example you might have issues with the layout of the buttons and/or the groups in the tabs. Therefore we would not recommend that you change the height of the control.
Q: How can I collapse RadRibbonBar control?
A: Since Q2 2012 you can use the EnableMinimizing property of the control in combination with the Minimized property.
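A minimal markup sketch of that combination might look like the following; property placement is illustrative only, so check the control's API reference for your exact version.
<telerik:RadRibbonBar
    ...
</telerik:RadRibbonBar>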
This is the standard exception that is thrown by the engine or the base model if a requested interaction on the SUT cannot be performed, e.g. because the required image for a click could not be found.
The exception stops the execution of the current test step and sets the report to Aborted. All following steps (except CleanupStep) are set to NotExecuted. A screenshot will be provided automatically.
Namespace: Progile.ATE.Common
Object
  Exception
    TrioExecutionException
      TestStepAbortedException
public TestStepAbortedException (string Reason)
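For example, a custom base-model method might abort the current test step like the sketch below; the SUT-specific details are illustrative only.
public void ClickOkButton(bool okButtonVisible)
{
    if (!okButtonVisible)
    {
        // Aborts the step, sets the report to Aborted and captures a screenshot automatically.
        throw new TestStepAbortedException("Image of the 'OK' button could not be found");
    }
    // ... perform the click ...
}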
These instructions are for upgrading from OpenClinica 3.0.x to version 3.1 or to a 3.1.x maintenance release on Linux systems running the essential software dependencies: Java 6, Tomcat 6, and PostgreSQL 8.4. When you see v.x or v.x.x or similar in the instructions, use the version number you are upgrading to, e.g. 3.1 or 3.1.2.
The instructions apply only if you followed the upgrade instructions to get to 3.0.x; if you did not, you might need to do other things in order to upgrade your system.
If you are running an earlier version of OpenClinica and/or its related software dependencies, make sure you upgrade these to the required versions before upgrading beyond OpenClinica 3.0.x. Please note that these instructions do not cover upgrading related software dependencies for 3.1.x. To upgrade these components, you'll need to follow the instructions provided by those technologies. Make sure you back up everything so your existing files are not lost.
If you are upgrading from OpenClinica 2.5.x, you need to first upgrade to 3.0.x, and you will need to upgrade the software dependencies so they meets the requirements for OpenClinica 3.0.x. Perform a full backup of your 2.5.x instance, perform a fresh install of 3.0.x, and then restore your 2.5.x data. For information about backing up your 2.5.x instance, see the 3.0.x upgrade documentation that is included with the downloaded files for 3.0.x.
Follow this process to upgrade OpenClinica. In each step, click the link to view detailed instructions for the step:
Before upgrading, follow this process to back up the database and files needed by OpenClinica:
Run upgrade for OpenClinica Web Services only if you are currently using Web Services
Now, you'll need to make database updates for your database: either PostgreSQL or Oracle. The instructions apply to upgrading OpenClinica on Linux and Windows systems:
Web Services should have the same version as OpenClinica application.
Approved for publication by Benjamin Baumann (bbaumann), Principal. Signed on 2014-03-25 12:07PM
Not valid unless obtained from the OpenClinica document management system on the day of use.
The Get-ADUser cmdlet gets a specified user object or performs a search to retrieve multiple user objects.
The Identity parameter specifies the Active Directory user to get. You can identify a user by its distinguished name (DN), GUID, security identifier (SID), or Security Accounts Manager (SAM) account name. To search for and retrieve more than one user, use the Filter or LDAPFilter parameters; for details about the filter syntax, see about_ActiveDirectory_Filter. If you have existing LDAP query strings, you can use the LDAPFilter parameter.
-------------------------- EXAMPLE 1 --------------------------
C:\PS>Get-ADUser -Filter * -SearchBase "OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM"
Description
Get all users under the container 'OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM'.
-------------------------- EXAMPLE 2 --------------------------
C:\PS>Get-ADUser -Filter 'Name -like "*SvcAccount"' | FT Name,SamAccountName -A

Name             SamAccountName
----             --------------
SQL01 SvcAccount SQL01
SQL02 SvcAccount SQL02
IIS01 SvcAccount IIS01
Description
Get all users that have a name that ends with 'SvcAccount'.
-------------------------- EXAMPLE 3 --------------------------
C:\PS>Get-ADUser GlenJohn -Properties *

Surname           : John
Name              : Glen John
UserPrincipalName :
GivenName         : Glen
Enabled           : False
SamAccountName    : GlenJohn
DistinguishedName : CN=Glen John,OU=NorthAmerica,OU=Sales,OU=UserAccounts,DC=FABRIKAM,DC=COM
Description
Get all properties of the user with samAccountName 'GlenJohn'.
-------------------------- EXAMPLE 4 --------------------------
C:\PS>Get-ADUser -Filter {Name -eq "GlenJohn"} -SearchBase "DC=AppNC" -Properties mail -Server lds.Fabrikam.com:50000
Description
Get the user with name 'GlenJohn' on the AD LDS instance.
Required Parameters>, see about_ActiveDirectory_ObjectModel. Parameters. |Get-Member
Specifies the number of objects to include in one page for an Active Directory Domain Services query.
The default is 256 objects per page.
The following example shows how to set this parameter.
-ResultPageSize 500 .
The following example shows how to set this parameter to search under an OU.
-SearchBase "ou=mfg,dc=noam,dc=corp,dc=contoso,dc=com".
The following example shows how to set this parameter to an empty string. -SearchBase "".
The following example shows how to set this parameter to a subtree search.
-SearchScope Subtree.
Printing EFS Files
An EFS file is as transparent to a printer or other output device as it is to the monitor. If you can read an encrypted file on screen, it prints in plaintext. If you cannot read it on screen, you cannot print it.
This transparency requires that the same physical controls imposed on the computer also be imposed on the printer. The printer itself and the cabling to it must be secured so that an attacker cannot tap into them. If you print with a print server, the server must also be secured.
During printing, Windows 2000 copies the print job onto a spool (.spl) file that resides on the local print provider. In local printing, the local print provider on the local computer is used. In client/server printing, this is bypassed, and the .spl file resides on the local print provider of the server.
By default, .spl files are stored in the SystemRoot \System32\Spool\Printers folder. If that folder is unencrypted (as it generally is), the encryption that was in the original file is lost. You can avoid this by encrypting the folder, but this would slow processing by causing every .spl file to be encrypted. A better way is to create a special printer for encrypted files. This printer might use the same print hardware device, but with different print instructions. It should be local and unshared, and it should bypass the default folder by using one of the two following techniques:
- Select the Print directly to the printer check box on the Advanced page of the printer's Properties dialog box. The print job is not spooled, and no .spl file is created.
Note
Unspooled print jobs cannot be scheduled or prioritized.
- Create an encrypted folder and specify that .spl files are to be routed to it. The procedure is described in "Network Printing" in the Microsoft ® Windows ® 2000 Server Resource Kit Server Operations Guide .
By default, when the print job is complete, the .spl file is deleted. You can override the default by selecting the Keep printed documents check box on the Advanced page. If you select this option, you can resubmit a document to the printer from the printer queue instead of from the program. This is not recommended because the security risk does not outweigh the benefit. Even though the .spl files are encrypted, it is not a good practice to leave multiple copies of sensitive data in different folders.
LoRaWAN (ABP)
ABP stands for Activation By Personalisation. It means that the encryption keys are configured manually on the device, so it can start sending frames to the Gateway without needing a 'handshake' procedure to exchange the keys (such as the one performed during an OTAA join procedure). The extracted example is truncated; it begins by preparing the ABP parameters:
import binascii
import struct
# ABP authentication params
dev_addr = struct.unpack(">l", binascii.unhexlify('00000005'))[0]
nwk_swkey = binascii.unhexlify('2B7E151628AED2A6ABF7158809CF4F3C')
app_swkey = binascii.unhexlify(...)  # truncated in the source extract
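A complete minimal ABP sketch is shown below for orientation only. The device address and both keys are placeholders that must be replaced with the values registered for your device, and the exact LoRa constructor arguments depend on your region and firmware.
from network import LoRa
import socket
import binascii
import struct

# Initialise LoRa in LORAWAN mode (a region argument may be required on newer firmware).
lora = LoRa(mode=LoRa.LORAWAN)

# ABP authentication params (placeholders).
dev_addr = struct.unpack(">l", binascii.unhexlify('00000005'))[0]
nwk_swkey = binascii.unhexlify('2B7E151628AED2A6ABF7158809CF4F3C')
app_swkey = binascii.unhexlify('2B7E151628AED2A6ABF7158809CF4F3C')

# Join the network using ABP (no over-the-air handshake needed).
lora.join(activation=LoRa.ABP, auth=(dev_addr, nwk_swkey, app_swkey))

# Open a raw LoRa socket, pick a data rate, and send a frame.
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)
s.send(bytes([0x01, 0x02, 0x03]))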
Attribute Input Types
When viewed from the Admin (the password-protected back office of your store where orders, catalog, content, and configurations are managed), attributes are the fields that you complete when you create a product. The input type that is assigned to an attribute (a characteristic or property of a product; anything that describes a product, such as color, size, weight, and price) determines the type of data that can be entered and the format of the field or input control. From the standpoint of the customer, attributes provide information about the product, and are the options and data entry fields that must be completed to purchase a product.
When you start up your Maltego client, you are first greeted by the Home page.
The Home page includes the Start Page on the left and the Transform Hub on the right.
Start Page
The Start Page includes links to our social media accounts and sometimes important notifications. We generally use Twitter to post notifications about new features and we use YouTube to post any new video tutorials that we do. Any critical notifications will be posted directly on this page.
Any upcoming public trainings will be advertised on the start page.
Transform Hub
On the right-hand side of the Home page you will find the Transform Hub.
The Transform Hub allows you to install transforms that are provided by 3rd party transform vendors as well as additional transforms that are provided by Paterva. Each of the transform packages on the Transform Hub are referred to as Transform Hub Items.
Memory sharing is a proprietary ESXi technique that can help achieve greater memory density on a host.
Memory sharing relies on the observation that several virtual machines might be running instances of the same guest operating system, have the same applications or components loaded, or contain common data. In such cases, a host uses a proprietary Transparent Page Sharing (TPS) technique to eliminate redundant copies of memory pages. With memory sharing, a workload running in virtual machines often consumes less memory than it would when running on physical machines. As a result, higher levels of overcommitment can be supported efficiently. The amount of memory saved by memory sharing depends on the workload: nearly identical machines might free up more memory, while a more diverse workload might result in a significantly lower percentage of memory savings.
Due to security concerns, inter-virtual machine transparent page sharing is disabled by default and page sharing is being restricted to intra-virtual machine memory sharing. This means page sharing does not occur across virtual machines and only occurs inside of a virtual machine. See Sharing Memory Across Virtual Machines for more information.
Django Channels¶
Channels is a project that takes Django and extends its abilities beyond HTTP - to handle WebSockets, chat protocols, IoT protocols, and more. It’s built on a Python specification called ASGI.
It does this by taking the core of Django and layering a fully asynchronous layer underneath, running Django itself in a synchronous mode but handling connections and sockets asynchronously, and giving you the choice to write in either style.
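For example, a synchronous-style WebSocket consumer can be written as in the minimal sketch below (routing and project settings are omitted):
from channels.generic.websocket import WebsocketConsumer

class EchoConsumer(WebsocketConsumer):
    def connect(self):
        # Accept the incoming WebSocket connection.
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        # Echo whatever the client sent back to it.
        self.send(text_data=text_data)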
To get started understanding how Channels works, read our Introduction, which will walk through how things work. If you’re upgrading from Channels 1, take a look at What’s new in Channels 2? to get an overview of the changes; things are substantially different.
If you would like complete code examples to read alongside the documentation or experiment on, the channels-examples repository contains well-commented example Channels projects.
Warning
This is documentation for the 2.x series of Channels. If you are looking
for documentation for the legacy Channels 1, you can select
1.x from the
versions selector in the bottom-left corner.
Projects¶
Channels is comprised of several packages:
- Channels, the Django integration layer
- Daphne, the HTTP and Websocket termination server
- asgiref, the base ASGI library
- channels_redis, the Redis channel layer backend (optional)
This documentation covers the system as a whole; individual release notes and instructions can be found in the individual repositories.
Topics¶
- Introduction
- Installation
- Tutorial
- Consumers
- Routing
- Database Access
- Channel Layers
- Sessions
- Authentication
- Security
- Testing
- Worker and Background Tasks
- Deploying
- What’s new in Channels 2?
Reference¶
- ASGI
- Channel Layer Specification
- Community Projects
- Contributing
- Support
- Release Notes
- 1.0.0 Release Notes
- 1.0.1 Release Notes
- 1.0.2 Release Notes
- 1.0.3 Release Notes
- 1.1.0 Release Notes
- 1.1.1 Release Notes
- 1.1.2 Release Notes
- 1.1.3 Release Notes
- 1.1.4 Release Notes
- 1.1.5 Release Notes
- 1.1.6 Release Notes
- 2.0.0 Release Notes
- 2.0.1 Release Notes
- 2.0.2 Release Notes
- 2.1.0 Release Notes
- 2.1.1 Release Notes
- 2.1.2 Release Notes
- 2.1.3 Release Notes
- 2.1.4 Release Notes
- 2.1.5 Release Notes
- 2.1.6 Release Notes
Changelog¶
:py:class:`Program`s.
The OfficeScan agent generates logs when it detects viruses and malware and sends the logs to the server.
Logs > Agents > Security Risks
Date and time of virus/malware detection
Endpoint
Security threat
Infection source
Infected file or object
File path
Infection channel
Scan type that detected the virus/malware
Scan results
For more information on scan results, see Virus/Malware Scan Results.
IP address
MAC address
Log details (Click View to see the details.)
The CSV file contains the following information:
All information in the logs
User name logged on to the endpoint at the time of detection
This page describes how ClustrixDB architecture was designed for Consistency, Fault Tolerance, and Availability.
Consistency
Many distributed databases have embraced eventual consistency over strong consistency to achieve scalability. However, eventual consistency comes with a cost of increased complexity for the application developer, who must develop for anomalies that may arise with inconsistent data, see Concurrency Control.
ClustrixDB takes the following approach to consistency:
- Synchronous replication within the cluster. All nodes participating in a write must provide an acknowledgment before a write is complete. Writes are performed in parallel.
- The Paxos protocol is used for distributed transaction resolution.
- ClustrixDB supports the Read Committed and Repeatable Read (Snapshot) isolation levels, with limited support for Serializable.
- Multi-Version Concurrency Control (MVCC) allows for lockless reads and ensures that writes will not block reads.
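ClustrixDB is MySQL-compatible, so the isolation level for a session can typically be chosen with standard SQL before starting a transaction. The statements below are a sketch; the default level and exact support depend on your version.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42;
COMMIT;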
Fault Tolerance
ClustrixDB provides fault tolerance by maintaining multiple copies of data across the cluster. By default, ClustrixDB can accommodate a single node failure and automatically recover with no loss of data. The degree of fault tolerance (nResiliency) is configurable and ClustrixDB can be set up to handle multiple node failures and zone failure.
For more information, including how to adjust fault tolerance in ClustrixDB, see Understanding Fault Tolerance, MAX_FAILURES, and Zones.
Availability
In order to understand ClustrixDB's availability modes and failure cases, it is necessary to understand our group membership protocol.
Group Membership and Quorum
ClustrixDB uses a distributed group membership protocol. The protocol maintains two fundamental sets:
- The static set of all nodes known to the cluster
- The set of nodes that can currently communicate with each other.
The cluster cannot form unless more than half the nodes in the static membership are able to communicate with each other (a quorum).
For example, if a six-node cluster experiences a network partition resulting in two sets of three nodes, ClustrixDB will be unable to form a cluster.
However, if more than half the nodes are able to communicate, ClustrixDB will form a cluster.
For performance reasons, MAX_FAILURES defaults to 1 to provide for the loss of one node or one zone.
Partial Availability
In the above example, ClustrixDB formed a cluster because a quorum of nodes remained. However, such a cluster could offer only partial availability because the cluster may not have access to the complete dataset.
In the following example, ClustrixDB was configured to maintain two replicas. However, both nodes holding replicas for A are unable to participate in the cluster (due to some failure). When a transaction attempts to access data on slice A, the database will generate an error that will surface to the application.
Availability Requirements
{"_id":"59a436cfef7eec00196fa80c",-08-28T15:29:19.617Z","changelog":[],"body":"##Support for Amazon Web Services Spot Instances\n\nSeven Bridges has introduced support for Spot instances on the Amazon Web Services (AWS) deploy of the Cancer Genomics Cloud. Spot instance support can be selected as a default for projects and and an option for each task execution. By selecting a spot instance execution costs can be dramatically reduced. Our testing indicates an execution cost savings of over 75% on common workflows.\n\nDue to the nature of how AWS handles Spot instances, they can be interrupted while tasks are running. If a Spot instance is interrupted, Seven Bridges’ job retry functionality will automatically restart interrupted and remaining unfinished jobs on an On-Demand instance to prevent further interruptions. Such an interruption may impact the cost savings from using a Spot instance and can result in a longer overall runtime, but the reliability of task execution is unaffected. SBFS to make project files available on a local file system and thus as accessible as any other locally available file. This eliminates the need for downloading complete files to a local machine, which is especially useful when working with large files exceeding the size of a local disk. With SBFS, parts of a file are accessible without necessitating a complete file download and users can perform interactive analyses on a local machine (or server instance) without needing to bring their tool to the CGC. \n\nSBFS is available for Linux and macOS operating systems, and beta version is available for download from the new Data tools page.\n\nLearn more from [SBFS documentation](doc:about-sbfs).","slug":"release-note-082817","title":"Release note 08.28.17"} | https://docs.cancergenomicscloud.org/blog/release-note-082817 | 2019-01-16T06:41:53 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.cancergenomicscloud.org |
5.1. Building Iroha¶
In this guide we will learn how to install all dependencies, required to build Iroha and how to build it.
5.1.1. Preparing the Environment¶
In order to successfully build Iroha, we need to configure the environment. There are several ways to do it and we will describe all of them.
Currently, we support Unix-like systems (we are basically targeting popular Linux distros and macOS). If you happen to have Windows or you don’t want to spend time installing all dependencies you might want to consider using Docker environment. Also, Windows users might consider using WSL
Hint
Having troubles? Check FAQ section or communicate to us directly, in case you were stuck on something. We don’t expect this to happen, but some issues with an environment are possible.
5.1.1.1. Docker¶
Note
You don’t need Docker to run Iroha, it is just one of the possible choices.
First of all, you need to install
docker and
docker-compose. You can
read how to install it on a
Docker’s website
Note
Please, use the latest available docker daemon and docker-compose.
Then you should clone the Iroha repository to the directory of your choice.
git clone -b master --depth=1
Hint
--depth=1 option allows us to download only latest commit and
save some time and bandwidth. If you want to get a full commit history, you
can omit this option.
After it, you need to run the development environment. Run the
scripts/run-iroha-dev.sh script:
bash scripts/run-iroha-dev.sh
Hint
Please make sure that Docker is running before executing the script.
macOS users could find a Docker icon in system tray, Linux user could use
systemctl start docker
After you execute this script, following things happen:
1. The script checks if you don’t have containers with Iroha already running.
Successful completion finishes with the new container shell.
2. The script will download
hyperledger/iroha:develop-build and
postgres images.
hyperledger/iroha:develop-build image contains all development dependencies and is
based on top of
ubuntu:16.04.
postgres image is required for starting
and running Iroha.
3. Two containers are created and launched.
4. The user is attached to the interactive environment for development and
testing with
iroha folder mounted from the host machine. Iroha folder
is mounted to
/opt/iroha in Docker container.
Now your are ready to build Iroha! Please go to Building Iroha section.
5.1.1.2. Linux¶
5.1.1.2.1. Boost¶
Iroha requires Boost of at least 1.65 version.
To install Boost libraries (
libboost-all-dev), use current release from Boost webpage. The only
dependencies are thread, system and filesystem, so use
./bootstrap.sh --with-libraries=thread,system,filesystem when you are building
the project.
5.1.1.2.2. Other Dependencies¶
To build Iroha, you need following packages:
build-essential
automake
libtool
libssl-dev
zlib1g-dev
libc6-dbg
golang
git
tar
gzip
ca-certificates
wget
curl
file
unzip
python
cmake
Use this code to install dependencies on Debian-based Linux distro.
apt-get update; \
apt-get -y --no-install-recommends install \
  build-essential automake libtool \
  libssl-dev zlib1g-dev \
  libc6-dbg golang \
  git tar gzip ca-certificates \
  wget curl file unzip \
  python cmake
Note
If you are willing to actively develop Iroha and to build shared libraries, please consider installing the latest release of CMake.
5.1.1.3. macOS¶
If you want to build it from scratch and actively develop it, please use this code to install all dependencies with Homebrew.
xcode-select --install
brew install cmake boost postgres grpc autoconf automake libtool golang soci
Hint
To install the Homebrew itself please run
ruby -e "$(curl -fsSL)"
5.1.2. Build Process¶
5.1.2.1. Cloning the Repository¶
Clone the Iroha repository to the directory of your choice.
git clone -b master
cd iroha
Hint
If you have installed the prerequisites with Docker, you don’t need
to clone Iroha again, because when you run
run-iroha-dev.sh it attaches
to Iroha source code folder. Feel free to edit source code files with your
host environment and build it within docker container.
5.1.2.2. Building Iroha¶
To build Iroha, use those commands
mkdir build; cd build; cmake ..; make -j$(nproc)
Alternatively, you can use these shorthand parameters (they are not documented though)
cmake -H. -Bbuild; cmake --build build -- -j$(nproc)
Note
On macOS
$(nproc) variable does not work. Check the number of
logical cores with
sysctl -n hw.ncpu and put it explicitly in the command
above, e.g.
cmake --build build -- -j4
5.1.2.3. CMake Parameters¶
We use CMake to build platform-dependent build files. It has numerous flags for configuring the final build. Note that besides the listed parameters, cmake's own variables can be useful as well. Also, since this page may become outdated (or may not be complete), you can browse custom flags via cmake -L, cmake-gui, or ccmake.
Hint
You can specify parameters at the cmake configuring stage (e.g cmake -DTESTING=ON).
5.1.2.4. Running Tests (optional)¶
After building Iroha, it is a good idea to run tests to check the operability of the daemon. You can run tests with this code:
cmake --build build --target test
Alternatively, you can run following command in the
build folder
cd build
ctest . --output-on-failure
Note
Some of the tests will fail without PostgreSQL storage running,
so if you are not using
scripts/run-iroha-dev.sh script please run Docker
container or create a local connection with following parameters:
docker run --name some-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  -d postgres:9.5
Getting Started¶
Installation¶
You will need the following
- Python 2.7 or 3.4+
- DataRobot account
- pip
Installing for Cloud DataRobot¶
If you are using the cloud version of DataRobot, the easiest way to get the latest version of the package is:
pip install datarobot
Note
If you are not running in a Python virtualenv, you probably want to use
pip install --user datarobot.
Installing for an On-Site Deploy¶
If you are using an on-site deploy of DataRobot, the latest version of the package is not the most appropriate for you. Contact your CFDS for guidance on the appropriate version range.
pip install "datarobot>=$(MIN_VERSION),<$(EXCLUDE_VERSION)"
For some particular installation of DataRobot, the correct value of $(MIN_VERSION) could be 2.0 with an $(EXCLUDE_VERSION) of 2.3. This ensures that all the features the client expects to be present on the backend will always be correct.
Note
If you are not running in a Python virtualenv, you probably want to use
pip install --user "datarobot>=$(MIN_VERSION),<$(MAX_VERSION).
Configuration.
Common Issues¶
This section has examples of cases that can cause issues with using the DataRobot client, as well as known fixes.
InsecurePlatformWarning¶
On versions of Python earlier than 2.7.9 you might have InsecurePlatformWarning in your output. To prevent this without updating your Python version you should install pyOpenSSL package:
pip install pyopenssl ndg-httpsclient pyasn1
AttributeError: ‘EntryPoint’ object has no attribute ‘resolve’¶
Some earlier versions of setuptools will cause an error on importing DataRobot. The recommended fix is upgrading setuptools. If you are unable to upgrade setuptools, pinning trafaret to version <=7.4 will correct this issue.
>>> import datarobot as dr ... File "/home/clark/.local/lib/python2.7/site-packages/trafaret/__init__.py", line 1550, in load_contrib trafaret_class = entrypoint.resolve() AttributeError: 'EntryPoint' object has no attribute 'resolve'
To prevent this upgrade your setuptools:
pip install --upgrade setuptools
ConnectTimeout¶
If you have a slow connection to your DataRobot installation, you may see a traceback like
ConnectTimeout: HTTPSConnectionPool(host='my-datarobot.com', port=443): Max retries exceeded with url: /api/v2/projects/ (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f130fc76150>, 'Connection to my-datarobot.com timed out. (connect timeout=6.05)'))
You can configure a larger connect timeout (the amount of time to wait on each request attempting
to connect to the DataRobot server before giving up) using a connect_timeout value in either
a configuration file or via a direct call to
datarobot.Client.
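For example, a connection with a longer connect timeout can be set up directly in code; the endpoint and token below are placeholders.
import datarobot as dr

# Use a 30-second connect timeout instead of the default.
dr.Client(
    endpoint="https://app.datarobot.com/api/v2",
    token="YOUR_API_TOKEN",
    connect_timeout=30,
)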
Get-OrganizationRelationship
Use the Get-OrganizationRelationship cmdlet to retrieve settings for an organization relationship that has been created for federated sharing with other federated Exchange organizations or for hybrid deployments with Exchange Online.
For information about the parameter sets in the Syntax section below, see Exchange cmdlet syntax.
Syntax
Get-OrganizationRelationship [[-Identity] <OrganizationRelationshipIdParameter>] [-DomainController <Fqdn>] [<CommonParameters>]
-------------------------- Example 1 --------------------------
Get-OrganizationRelationship -Identity Contoso
This example retrieves the organization relationship settings for Contoso using the Identity parameter.
-------------------------- Example 2 --------------------------
Get-OrganizationRelationship -DomainController 'mail.contoso.com'
This example retrieves the organization relationship settings by using the FQDN of the domain controller.
The Identity parameter specifies the identity of the organizational relationship. You can use the following values:
Canonical name
GUID
Tip: Use Custom Libraries in Windows 7 to Ensure You Backup All your Data
Follow Our Daily Tips
RSS | Twitter | Blog | Facebook
Tell Us Your Tips
Share your tips and tweaks.
Do you store important data files outside your user profile? Custom libraries can help you ensure that those files are always backed up. Create a new library, call it Backup, and make sure it’s selected in your current backup settings. Add locations to the new library that you want to ensure are backed up. The files themselves remain in their original location, but as long as they’re on a local drive they’ll be backed up. If you remove a folder from the Backup library, it will no longer be backed up. Any new folder you add here, even if it’s outside your user profile, will automatically be included in your next backup.
From the Microsoft Press book Windows 7 Inside Out by Ed Bott, Carl Siechert, and Craig Stinson.
Looking for More Tips?
For more tips on Windows 7 and other Microsoft technologies, visit the TechNet Magazine Tips library.
Getting Started
Midtrans mobile SDK enables merchants to accept online payments natively in their mobile apps. We provide a drop-in user interface for making transactions on all payment types supported by Midtrans. Watch the video for the default SDK example.
There are four parties involved in the payment process:
- Merchant Server: The merchant backend implementation
- Customers
- Midtrans Backend (Payment Processor)
- Midtrans Mobile SDK
Transaction flow
- Checkout: Customer clicks the Checkout button on the Host application and the app makes a request to the Merchant Server
- Token request: Merchant Server makes a request to the Midtrans Backend for payment processing.
- Charge response: Mobile SDK receives the response from Midtrans Backend and triggers the handler on Mobile App with success/failure/pending status
- Charge notification: Midtrans Backend sends a notification to the Merchant backend confirming the completion of transaction.
Security Aspects
- There are 2 separate keys CLIENT_KEY and SERVER_KEY (available on MAP)
- CLIENT_KEY is used for tokenizing the credit card. It can only be used from the Client(mobile device)
- SERVER_KEY is used for acquiring the token from the Midtrans server. It is not to be used from the device, all API requests that use the SERVER_KEY need to be made from the Merchant Server.
- We use strong encryption for making connections to Merchant server, please make sure it has valid https Certificate.
The following are configurable parameters of SDK that can be used while performing transaction -
- Merchant server Endpoint / Base URL : URL of server to which transaction data will be sent. This will also be referred to as a merchant server.
- Transaction details - contains payment information like amount, order Id, payment method etc.
- Midtrans Client Key - token that is specified by merchant server to enable the transaction using
credit card. Available on the MAP
Prerequisites
- Create a merchant account in MAP
- In MAP, setup your merchant account settings, in particular the Notification URL.
- Setup your merchant server. A server side implementation is required for Midtrans mobile SDK to work. You can check the server implementation reference, and walk through the API’s that you may need for implementation on your backend server.
- Minimum requirements:
- Android SDK: Android 4.0 Ice Cream Sandwich, API Level 14
- iOS SDK: iOS 7 and Xcode 8.0
Merchant Server Implementation
A sample project that implements a merchant server is available: sample merchant server.
SDK Changes
There are some changes required on the merchant server to make use of the latest SDK.
- 1.0.x SDK needs the merchant server to redirect the request to the /pay endpoint in Snap.
- 1.1.x SDK needs the merchant server to redirect the request to the /transactions endpoint in Snap.
Merchants need to implement the following web services for the SDK to work correctly. Note that these endpoints must accept and return Content-Type JSON:
- Content-Type: application/json
- Accept: application/json
Mandatory
Create Token (Checkout): This API proxies the request to the Snap Backend. This process generates the token necessary for the secure communication between the Mobile SDK and Midtrans. A server-side sketch follows the endpoint notes below.
Endpoint:
POST on /charge
- Add header in the request.
- Authorization: Basic Base64(SERVER_KEY:)
- Request will be sent to checkout / create transaction endpoint at Snap Backend.
- Note that the above request does not go to api.midtrans.com but to app.midtrans.com
- Sandbox Endpoint:
- Production Endpoint:
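Since the docs above only describe the contract, here is a minimal, illustrative sketch (plain Java with HttpURLConnection; the class and constant names are our own, and the Snap URL is a placeholder you should replace with the sandbox or production endpoint above) of how a merchant backend might forward the checkout body it receives on /charge to Snap with the Basic auth header:
Illustrative checkout proxy (sketch)
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SnapCheckoutProxy {
    // Placeholders: substitute your Snap checkout endpoint and server key.
    private static final String SNAP_CHECKOUT_URL = "<SNAP_CHECKOUT_ENDPOINT>";
    private static final String SERVER_KEY = "<YOUR_SERVER_KEY>";

    // Forwards the JSON body received from the mobile SDK and returns Snap's JSON response (which contains the token).
    public static String forwardCheckout(String requestBodyJson) throws IOException {
        String basicAuth = Base64.getEncoder().encodeToString((SERVER_KEY + ":").getBytes(StandardCharsets.UTF_8));
        HttpURLConnection conn = (HttpURLConnection) new URL(SNAP_CHECKOUT_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setRequestProperty("Authorization", "Basic " + basicAuth);
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(requestBodyJson.getBytes(StandardCharsets.UTF_8));
        }
        InputStream stream = conn.getResponseCode() < 400 ? conn.getInputStream() : conn.getErrorStream();
        StringBuilder response = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
        }
        return response.toString();
    }
}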
Optional
We also call a few additional endpoints if 1-Click or 2-Clicks is enabled:
- To Store Credit Card Tokens: We explicitly invoke POST on /users/<userid>/tokens to allow the backend to store the generated credit card token after a successful credit card charge.
- To Retrieve Credit Card Tokens: We explicitly invoke GET on /users/<user_id>/tokens to retrieve the saved card list.
The UserId refers to a generated UUID to associate card collection to each unique user. The mobile SDK generates this UUID during the initialization.
Store Credit Card Token
Endpoint:
POST on /users/<userid>/tokens
Request Body
[ { "status_code": "200", "cardhash": "481111-1114", "token_id": "481111ROMUdhBGMQhjVtEPNcsGee1114" }, { "status_code": "200", "cardhash": "481111-1114", "token_id": "481111ROMUdhBGMQhjVtEPNcsGee1114" } ]
Response Code : 200 Response Body
Card is saved
List Credit Cards
Endpoint:
GET on /users/<userid>/tokens
Request Body
None
Response Body
[ { "token_id": "481111ROMUdhBGMQhjVtEPNcsGee1114", "cardhash": "481111-1114" }, { "token_id": "481111ROMUdhBGMQhjVtEPNcsGee1114", "cardhash": "481111-1114" } ]
Supported Payment Methods
Credit/Debit Cards
Support for making payments via credit cards and/or debit cards. We support Visa, Mastercard, American Express and JCB. We also support additional features like two clicks, one click, installment, and pre-authorization.
Bank Transfer
Support payment using BCA Virtual Account, Permata Virtual Account, BNI Virtual Account, and Mandiri Bill Payment.
BCA Virtual Account
BCA Virtual Account is a virtual payment method offered by Bank BCA. Users can pay using their BCA Bank account. Payment can be made through all of Bank BCA’s channels (KlikBCA, m-BCA, and ATM).
Permata Virtual Account
Permata Virtual Account is a virtual payment method facilitated by Bank Permata. Users can pay using any Indonesian Bank account. Payment can be made through ATM Bersama, Prima, or Alto ATM networks.
BNI Virtual Account
BNI Virtual Account is a virtual payment method facilitated by Bank BNI. Users can pay using any Indonesian Bank account. Payment can be made through ATM Bersama, Prima, or Alto ATM networks.
Mandiri Bill Payment
Mandiri Bill is a virtual payment method offered by Bank Mandiri. Users can pay using their Mandiri bank account. Payment can be made through all of Bank Mandiri’s channels (Internet Banking, SMS Banking & ATM).
GO-PAY
GO-PAY is an e-Wallet payment method by GO-JEK. Users will pay using the GO-JEK app.
The user flow varies when using a tablet compared to a smartphone.
When users make a purchase using GO-PAY on a tablet
- Users see a QR code on their tablet.
- Users open the GO-JEK app on their phone.
- Users tap the Scan QR function on the GO-JEK app.
Note : The Scan QR button won’t appear if your GO-PAY balance is less than Rp10,000.
- Users point their camera to the QR Code.
- Users check their payment details on the GO-JEK app and then tap Pay.
- The transaction is complete and the users’ GO-PAY balance is deducted.
When users make a purchase on their smartphone
- Users are automatically redirected to the GO-JEK app when making purchases on their smartphone.
- Users finish the payment on the GO-JEK app.
- The transaction is complete and their GO-PAY balance is deducted.
To enable GO-PAY in Midtrans Mobile SDK, all you need to do is enable GO-PAY payment method on your Merchant Dashboard (MAP). If you want to use it explicitly by applying direct payment, please take a look at this sample code.
KlikBCA
Internet banking direct payment method by Bank BCA. User will be redirected to the KlikBCA website for payment.
BCA Klikpay
Internet banking direct payment method by Bank BCA. User will be redirected to the BCA KlikPay website for payment.
Mandiri Clickpay
Internet banking direct payment method by Bank Mandiri. User will be redirected to the Mandiri Clickpay website for payment.
CIMB Clicks
Internet banking direct payment method by Bank CIMB. User will be redirected to the CIMB Clicks website for payment.
Danamon Online Banking
Internet banking direct payment method by Bank Danamon. User will be redirected to the Danamon Online Banking website for payment.
ePay BRI
Internet banking direct payment method by Bank BRI. User will be redirected to the BRI ePay website for payment.
LINE Pay / Mandiri e-cash
E-Wallet payment method by Bank Mandiri. User will pay through their LINE PAY / Mandiri e-cash account.
Indomaret - Payment via convenience Stores
Convenience store payment method by Indomaret. User will pay through the physical Indomaret convenience store.
Akulaku
Akulaku is an installment payment method from Akulaku. It uses a WebView to handle the payment.
Transaction status
For synchronous payment methods, the latest status will be provided in the response of the payment step.
For asynchronous payment methods, your merchant server needs to implement the Midtrans /status endpoint for your mobile application to work correctly, since the Midtrans SDK does not have your server_key (a client sketch follows the request line below). Note that the endpoint must accept and return Content-Type JSON:
Headers :
- Content-Type: application/json
- Accept: application/json
- Authorization : Basic Base64(SERVER_KEY:)
Request
GET MIDTRANS_API_BASE_URL/v2/{order_id & transaction_id}/status
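As a rough, non-authoritative sketch of that call (plain Java; the base URL and server key below are placeholders, and the class name is our own), the status check is just an authenticated GET with the same Basic Base64(SERVER_KEY:) header:
Illustrative status check (sketch)
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TransactionStatusClient {
    // Placeholders: substitute the Midtrans API base URL and your server key.
    private static final String MIDTRANS_API_BASE_URL = "<MIDTRANS_API_BASE_URL>";
    private static final String SERVER_KEY = "<YOUR_SERVER_KEY>";

    // Returns the raw JSON status for an order_id or transaction_id.
    public static String getStatus(String orderIdOrTransactionId) throws IOException {
        String auth = Base64.getEncoder().encodeToString((SERVER_KEY + ":").getBytes(StandardCharsets.UTF_8));
        HttpURLConnection conn = (HttpURLConnection) new URL(MIDTRANS_API_BASE_URL + "/v2/" + orderIdOrTransactionId + "/status").openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }
}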
Android SDK
This SDK provides a UI to collect the required information from the user and execute the transaction.
A sample project that implements the SDK is available: sample project.
Latest Version
Latest Released version logs on Github release page.
Download Demo App
If you want to have a look at the SDK in a more convenient manner, you can download the latest version of our demo app from the Google Play Store. This app uses the latest version of the SDK.
Installation
Add the following to your build.gradle to install the SDK.
Midtrans Bintray repository
repositories { jcenter() // Add the midtrans repository into the list of repositories maven { url "" } maven { url "" } }
Sample SDK Sandbox Dependencies
dependencies { // For using the Midtrans Sandbox implementation 'com.midtrans:uikit:1.21.2-SANDBOX' // change the number to the latest version }
Sample SDK Production Dependencies
dependencies { // For using the Midtrans Production implementation 'com.midtrans:uikit:1.21.2' // change the number to the latest version }
You need to add Midtrans SDK inside your app’s module
build.gradle. Make sure to use the proper environment (SANDBOX / PRODUCTION)
Midtrans SDK Initialization
SdkUIFlowBuilder.init(CONTEXT, CLIENT_KEY, BASE_URL, CALLBACK).buildSDK();
Then you need to initialize it in your activity or application class; a callback-wiring sketch follows the notes below.
Note:
- CONTEXT: Application/activity context
- CLIENT_KEY: Your midtrans client key (provided in MAP)
- BASE_URL: Your merchant server URL
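If you want to react to the payment result in code, here is a minimal sketch of wiring a callback into the builder. The TransactionFinishedCallback and TransactionResult names are assumptions about the UIKit callback types for this SDK line, so verify them against the release you actually use.
Illustrative initialization with a result callback (sketch)
// Sketch only: callback types assumed from the 1.x UIKit API; verify against your SDK version.
SdkUIFlowBuilder.init(CONTEXT, CLIENT_KEY, BASE_URL, new TransactionFinishedCallback() {
    @Override
    public void onTransactionFinished(TransactionResult result) {
        // Inspect the result here: success, pending, failed, or canceled by the user.
    }
}).buildSDK();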
Differentiate Sandbox and Production in one app (Optional)
Differentiate Sandbox and Production Flavors
android { ... // Define Merchant BASE URL and CLIENT KEY for each flavors productFlavors { sandbox { buildConfigField "String", "BASE_URL", "\"\"" buildConfigField "String", "CLIENT_KEY", "\"VT-CLIENT-sandbox-client-key\"" } production { buildConfigField "String", "BASE_URL", "\"\"" buildConfigField "String", "CLIENT_KEY", "\"VT-CLIENT-production-client-key\"" } } ... } // Define Midtrans SDK dependencies for each flavors dependencies { ... sandboxImplementation 'com.midtrans:uikit:1.21.2-SANDBOX' // change the version to latest one productionImplementation 'com.midtrans:uikit:1.21.2' // change the version to latest one ... }
You can support two payment environments in your app by defining two flavors in your
build.gradle.
Initialize Midtrans SDK using provided base URL and client key in BuildConfig
SdkUIFlowBuilder.init(CONTEXT, BuildConfig.CLIENT_KEY, BuildConfig.BASE_URL, CALLBACK).buildSDK();
Initialize your SDK using merchant BASE_URL and CLIENT_KEY provided by BuildConfig data.
Adding external card scanner (Optional)
Scan card SDK Dependencies
//..other dependencies implementation ('com.midtrans:scancard:1.21.1'){ exclude module: 'uikit' }
We provide a plugin to integrate card.io to allow customers to read/scan the credit card/debit card information using their mobile phone camera.
You can add an external card scanner by adding the ScanCard library implementation provided by the Midtrans scan card library into your app's dependencies in build.gradle.
initialize SDK with ScanCard library
SdkUIFlowBuilder.init(...) // initialization for using external scancard .setExternalScanner(new ScanCard()) .buildSDK();
Then, when initializing the SDK, you can call setExternalScanner(new ScanCard()) on SdkUIFlowBuilder.
Adding Customer Detail (Optional)
Create UserDetail object
UserDetail userDetail = LocalDataHandler.readObject("user_details", UserDetail.class); if (userDetail == null) { userDetail = new UserDetail(); userDetail.setUserFullName("Budi Utomo"); userDetail.setEmail("[email protected]"); userDetail.setPhoneNumber("08123456789"); // set user ID as identifier of saved card (can be anything as long as unique), // randomly generated by SDK if not supplied userDetail.setUserId("budi-6789"); ArrayList<UserAddress> userAddresses = new ArrayList<>(); UserAddress userAddress = new UserAddress(); userAddress.setAddress("Jalan Andalas Gang Sebelah No. 1"); userAddress.setCity("Jakarta"); userAddress.setAddressType(com.midtrans.sdk.corekit.core.Constants.ADDRESS_TYPE_BOTH); userAddress.setZipcode("12345"); userAddress.setCountry("IDN"); userAddresses.add(userAddress); userDetail.setUserAddresses(userAddresses); LocalDataHandler.saveObject("user_details", userDetail); }
By default, the SDK expects you to supply it with customer details. Customer details usually consist of first and last name, email, phone number, billing address, and shipping address. For addresses, you can set billing and shipping as one address. If you want to send customer details to your merchant server during checkout, please create a UserDetail object and supply it to the SDK. During checkout, the SDK will automatically look for the UserDetail instance and put it in the JSON request as CustomerDetail.
If you don’t want to supply customer detail at all, you can use UIKitCustomSetting to skip this process.
Installation using a wizard (or setup assistant)
Pre-condition :
- You need to have an account in Midtrans, and keep the server key and client key. If you don't have one, please create an account from this link.
- Create a new project in Android Studio (named Jual-in), select API level 19 as the minimum SDK and add a 'Basic Activity'. Leave it as is for now.
- Set up a server that will function as a merchant server. In this case, we use localhost and have already done the server integration with Devsupport AI.
Integration
Please follow below steps in order to connect with us.
- You need to download Devsupport AI application.
- Install and open the App, you should see a screen below:
- You can either open the folder containing the Android source code, drag-and-drop the source folder, or just choose one from “Recent Projects”.
- In this case, please choose the previously created project, named Jual-in. Proceed by clicking the Integrate icon.
- Type/Search for “Midtrans” (without quotes) when asked for the product you would like to integrate.
- Click on “Midtrans android integration”
- Enter your Sandbox Client Key and Server Key, then Proceed. If you enter the wrong key combination, the app will let you know the keys are incorrect.
- We will be able to see the changes that Devsupport AI will apply to the Android project (Jual-in). Proceed by clicking Apply changes.
- If everything went smooth, you will see this summary.
- Close Devsupport AI and open the Android project in Android Studio. Android Studio will then download all the required dependencies previously written by Devsupport AI. If not, please trigger a project sync.
- Now we have two new methods: initListener() and midpay(). All we need to do is call the midpay() method with its required parameters (email, phone number, name, and transaction amount). For example, we can trigger the method every time we click the FAB (Floating Action Button).
- Run the project.
NOTE: Before running the project, please make sure that the host app (in this case Jual-in) doesn't use an action bar in its theme, because it will clash with the toolbar used in the Midtrans SDK. For example, we can use this config in styles.xml:
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
Let Us Know Your Thoughts
Is our wizard (or setup assistant, using Devsupport AI) helpful? Do you have any feedback? Please share your feedback here.
Prepare Transaction Details
TRANSACTION_ID and
TOTAL_AMOUNT are required to create a transaction request for each payment.
Create Transaction Request object
TransactionRequest transactionRequest = new TransactionRequest(TRANSACTION_ID, TOTAL_AMOUNT);
Item Details
Item details are required for Mandiri Bill and BCA KlikPay payments; they are optional for other payment methods.
The ItemDetails class holds information about an item purchased by the user. TransactionRequest takes an array list of item details.
Item Details Object
ItemDetails itemDetails1 = new ItemDetails(ITEM_ID_1, ITEM_PRICE_1, ITEM_QUANTITY_1, ITEM_NAME_1); ItemDetails itemDetails2 = new ItemDetails(ITEM_ID_2, ITEM_PRICE_2, ITEM_QUANTITY_2, ITEM_NAME_2); // Create array list and add above item details in it and then set it to transaction request. ArrayList<ItemDetails> itemDetailsList = new ArrayList<>(); itemDetailsList.add(itemDetails1); itemDetailsList.add(itemDetails2); // Set item details into the transaction request. transactionRequest.setItemDetails(itemDetailsList);
Note:
- This goes with the assumption that you have created the transactionRequest object using the required parameters.
- ITEM_NAME maximum character length is 50.
Bill Info
Bill Info is optional for Mandiri Bill payment only.
The BillInfoModel class holds billing information that will be shown in the billing details.
Bill Info object
BillInfoModel billInfoModel = new BillInfoModel(BILL_INFO_KEY, BILL_INFO_VALUE); // Set the bill info on transaction details transactionRequest.setBillInfoModel(billInfoModel);
Set Transaction Request into SDK Instance
After creating the transaction request with the optional fields above, you must set it into the SDK instance.
Set Transaction Request into SDK instance
MidtransSDK.getInstance().setTransactionRequest(transactionRequest);
Starting Payment
Starting Payment
CreditCard creditCardOptions = new CreditCard(); // Set to true if you want to save card to Snap creditCardOptions.setSaveCard(false); // Set to true to save card token as `one click` token creditCardOptions.setSecure(false); // Set bank name when using MIGS channel creditCardOptions.setBank(BankType.BANK_NAME); // Set MIGS channel (ONLY for BCA, BRI and Maybank Acquiring bank) creditCardOptions.setChannel(CreditCard.MIGS); // Set Credit Card Options transactionRequest.setCreditCard(creditCardOptions); // Set transaction request into SDK instance MidtransSDK.getInstance().setTransactionRequest(transactionRequest);
Start payment method screen
The default mode for the Android SDK is to show the payment method screen. This screen will show all of your available payment methods.
You can enable/disable payment methods via Snap Preferences in MAP.
Start payment method select screen
MidtransSDK.getInstance().startPaymentUiFlow(ACTIVITY_CONTEXT);
Start payment by using snap token
We provide an SDK method to allow you to make a payment using a snap token without initializing a transaction request first. You just need to pass the snap token as an argument of the startPaymentUiFlow method.
Start payment by using snap token
MidtransSDK.getInstance().startPaymentUiFlow(ACTIVITY_CONTEXT, SNAP_TOKEN);
Acquiring Bank
We provide an acquiring bank option that can be used when making a payment using credit card:
bca(channel: MIGS)
bri(channel: MIGS)
maybank(channel: MIGS)
bni
mandiri
cimb
danamon
mega
If you are using
bca,
bri, or
maybank as your acquiring bank (MIGS channel), you have to define its channel explicitly.
Acquiring Bank Configuration
CreditCard creditCardOptions = new CreditCard(); //... // Set bank name when using MIGS channel. for example bank BRI creditCardOptions.setBank(BankType.BRI); // Set MIGS channel (ONLY for BCA, BRI and Maybank Acquiring bank) creditCardOptions.setChannel(CreditCard.MIGS); //...
iOS SDK
Latest Version
Latest released version logs on Github release page.
Integration
Prerequisite
Please install Cocoapods version 1.0.0. You can find the installation guide here
Installation
Installation - Podfile
def shared_pods pod 'MidtransCoreKit' pod 'MidtransKit' end target 'MyBeautifulApp' do shared_pods end
- Navigate to your project’s root directory and run
pod initto create a Podfile.
- Open up the Podfile and add MidtransKit to your project’s target.
Installation - install command
pod install --verbose
- Save the file and run
pod installto install MidtransKit.
- Cocoapods will download and install MidtransKit and also create a .xcworkspace project.
Integration
Integration
//AppDelegate.m #import <MidtransKit/MidtransKit.h> [CONFIG setClientKey:@"VT-CLIENT-sandbox-client-key" environment:MidtransServerEnvironmentSandbox merchantServerURL:@""];
Once you have completed installation of MidtransKit, configure it with your
clientKey,
merchant server URL and
server environment in your
AppDelegate.h
Important: if you use Swift as your project language, you will need to add
-ObjC to your Xcode project.
Navigate to your .xcodeproj file in Xcode, choose your app's main target in Targets, and in the Build Settings tab, search for Other Linker Flags, double click and add -ObjC.
Enable Card Scanner (using Card.IO library)
Podfile for CardIO
platform :ios, '7.0' def shared_pods pod 'MidtransKit' pod 'MidtransKit/CardIO' end target 'MyBeautifulApp' do shared_pods end
We provide a plugin to integrate card.io to allow customers to read/scan the credit card/debit card information using their mobile phone camera. If you want to support this external card scanner, follow these steps
- Update your
Podfilejust like on the example code
Update Cocoapods
pod update --verbose
- Then update your pods
Starting Payment
How to start payment by using snap token on iOS
From iOS SDK v1.12.0, we provide an SDK method to allow you to make a payment using a snap token without initializing a transaction request first. You just need to pass the snap token as an argument of the requestTransacationWithCurrentToken: method.
Charging using SNAP token
[[MidtransMerchantClient shared] requestTransacationWithCurrentToken:{{string token}} completion:^(MidtransTransactionTokenResponse * _Nullable regenerateToken, NSError * _Nullable error) { MidtransUIPaymentViewController *paymentVC = [[MidtransUIPaymentViewController alloc] initWithToken:token]; paymentVC.paymentDelegate = self; [self.navigationController presentViewController:paymentVC animated:YES completion:nil]; }];
Making Transactions
Generate
TransactionTokenResponse object
Generate TransactionTokenResponse
//ViewController.m MidtransItemDetail *itemDetail = [[MidtransItemDetail alloc] initWithItemID:@"item_id" name:@"item_name" price:item_price quantity:item_quantity]; MidtransCustomerDetails *customerDetail = [[MidtransCustomerDetails alloc] initWithFirstName:@"user_firstname" lastName:@"user_lastname" email:@"user_email" phone:@"user_phone" shippingAddress:ship_address billingAddress:bill_address]; MidtransTransactionDetails *transactionDetail = [[MidtransTransactionDetails alloc] initWithOrderID:@"order_id" andGrossAmount:items_gross_amount]; [[MidtransMerchantClient shared] requestTransactionTokenWithTransactionDetails:transactionDetail itemDetails:self.itemDetails customerDetails:customerDetail completion:^(MidtransTransactionTokenResponse *token, NSError *error) { if (token) { } else { } }];
To create this object, you need to prepare the required objects, such as the item details object.
Present the
MidtransUIPaymentViewController
Present payment page
MidtransUIPaymentViewController *vc = [[MidtransUIPaymentViewController alloc] initWithToken:token]; [self presentViewController:vc animated:YES completion:nil];
We provide a MidtransUIPaymentViewController to handle the whole payment. Use the generated TransactionTokenResponse as the required parameter.
Get Notified
Conform with MidtransUIPaymentViewControllerDelegate
//ViewController.m #import <MidtransKit/MidtransKit.h> @interface ViewController () <MidtransUIPaymentViewControllerDelegate> //other code
The SDK will deliver the transaction response to the host app via a callback/delegate. To be able to do so, please follow these steps:
- Set your view controller to conform with
MidtransUIPaymentViewControllerDelegate
Set the delegate
//ViewController.m MidtransUIPaymentViewController *vc = [[MidtransUIPaymentViewController alloc] initWithToken:token]; //set the delegate vc.paymentDelegate = self;
- Set the delegate of
MidtransUIPaymentViewController
Implement the functions of delegate
//ViewController.m #pragma mark - MidtransUIPaymentViewControllerDelegate - (void)paymentViewController:(MidtransUIPaymentViewController *)viewController paymentSuccess:(MidtransTransactionResult *)result { NSLog(@"success: %@", result); } - (void)paymentViewController:(MidtransUIPaymentViewController *)viewController paymentFailed:(NSError *)error { [self showAlertError:error]; } - (void)paymentViewController:(MidtransUIPaymentViewController *)viewController paymentPending:(MidtransTransactionResult *)result { NSLog(@"pending: %@", result); } - (void)paymentViewController_paymentCanceled:(MidtransUIPaymentViewController *)viewController { NSLog(@"canceled"); }
- Implement the
MidtransUIPaymentViewControllerDelegatefunctions
Custom Acquiring Bank
Config Acquiring Bank
/* available acquiring banks MTAcquiringBankBCA, MTAcquiringBankBRI, MTAcquiringBankCIMB, MTAcquiringBankMandiri, MTAcquiringBankBNI, MTAcquiringBankMaybank */ CC_CONFIG.acquiringBank = MTAcquiringBankMaybank; //ex. maybank
We already support these banks
- BCA
- BRI
- CIMB
- Mandiri
- BNI
- Maybank
Features
Two Clicks Payment
The two clicks feature allows you to capture the customer's card number, expiry date, email and phone number as a TWO_CLICKS_TOKEN. For successive payments by the same customer, the TWO_CLICKS_TOKEN can be utilized to pre-fill the details. The customer just needs to fill in the CVV number to finish the payment. Please see the configuration in the example code to enable two clicks mode.
CreditCard creditCardOptions = new CreditCard(); // Set to true if you want to save card as two click payment creditCardOptions.setSaveCard(true); // Set secure option to enable or disable 3DS secure creditCardOptions.setSecure(true);
CC_CONFIG.paymentType = MTCreditCardPaymentTypeTwoclick; CC_CONFIG.saveCardEnabled = YES;
To support the two clicks configuration, merchants can use either the default token storage on the Midtrans backend or their own server to store their customers' credentials.
Using Default Token Storage
By default this SDK will use the Midtrans token storage to save customer credentials, so you don't need to set up anything on your backend.
Store Token on Merchant Server
Please take a look at this guide to see save card feature implementation in your own server.
Then you need to configure SDK to disable the built-in token storage.
Disable built-in token storage
SdkUIFlowBuilder.init(CONTEXT, CLIENT_KEY, BASE_URL, CALLBACK) // disable built in token storage .useBuiltInTokenStorage(false) .buildSDK();
CC_CONFIG.paymentType = MTCreditCardPaymentTypeTwoclick; CC_CONFIG.tokenStorageEnabled = NO;
One Click Payment
In addition to the two clicks feature, SNAP also supports one click transactions, which also capture the card's CVV. With this, customers can proceed directly to pay without inputting any information.
One click payment configuration
CreditCard creditCardOptions = new CreditCard(); // Set to true if you want to save card as one click creditCardOptions.setSaveCard(true); // Set to true to save card token as `one click` token creditCardOptions.setSecure(true);
CC_CONFIG.paymentType = MTCreditCardPaymentTypeOneclick; CC_CONFIG.saveCardEnabled = YES; //1-click need token storage disabled CC_CONFIG.tokenStorageEnabled = NO; //1-click need 3ds enabled CC_CONFIG.secure3DEnabled = YES;
To ease the card saving process, SNAP provides a card token storage feature, so merchants don't have to store and manage credit card tokens by themselves. Merchants can easily integrate the credit card token storage feature with SNAP by providing a unique user_id that is associated with the customer's account on the merchant's system, in addition to enabling the credit_card.save_card flag.
SNAP will then decide to store credit card token as one click token based on two criteria:
- Merchant has recurring MID enabled
- Initial transaction is 3DS-enabled
For one click payment, you can only use the built-in token storage as the credentials storage option.
Installment
Prerequisites
There are a few things that need to be checked before installment can be used.
- The MID for installment must be activated (please contact [email protected] to activate the MID)
- Set up the merchant server to handle the checkout request. (Please see this wiki)
Provide Installment Data
Example of installment data.
"installment": { "required": false, "terms": { "bni": [3, 6, 12], "mandiri": [3, 6, 12], "cimb": [3], "bca": [3, 6, 12], "offline": [6, 12] } }
Complete JSON request
{ "transaction_details": { "order_id": "ORDER-ID", "gross_amount": 10000 }, "credit_card": { "secure": true, "channel": "migs", "bank": "bca", "installment": { "required": false, "terms": { "bni": [ 3, 6, 12 ], "mandiri": [ 3, 6, 12 ], "cimb": [ 3 ], "bca": [ 3, 6, 12 ], "offline": [ 6, 12 ] } }, "whitelist_bins": [ "48111111", "41111111" ] }, " } } }
In order to use installment in the mobile SDK, the merchant server must intercept the mobile SDK's request and add the installment data that is sent to the Midtrans backend (Snap). On the mobile side, no changes are needed.
required -> whether the installment payment is required.
terms -> array of available installment terms per supported bank.
Optionally
whitelist_bins can be used to accept only card numbers within the specified BIN numbers. (
whitelist_bins is required for
offline installment type)
The Mobile SDK and the Midtrans backend (Snap) will check whether the card number's first n digits match one of the bins available in whitelist_bins.
This installment object and the whitelist bins need to be added to the credit_card object when the server accepts the transaction request from the mobile SDK. A full example of the installment data sent to Snap is shown in the complete JSON request above.
BNI Point
BNI Point is a special feature for BNI Bank customers. This feature allows users to redeem their points to be applied to their credit card payments. On your app side, no adjustment is needed. To enable this feature in the mobile SDK you just need to contact our support team. Please have a look at the following BNI Point flow to learn more about this feature.
Mandiri Fiestapoin
Mandiri Fiestapoin is a special feature for Mandiri Bank customers. This feature allows users to redeem their Mandiri Fiestapoin to be applied to their credit card payments. On your app side, no adjustment is needed. To enable this feature in the mobile SDK you just need to contact our support team.
Risk Based Authentication (RBA)
Set up RBA
CreditCard creditCard = new CreditCard(); // .... // make set authentication to RBA creditCard.setAuthentication(CreditCard.AUTHENTICATION_TYPE_RBA); // Set into transaction Request TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); transactionRequest.setCreditCard(creditCard);
CC_CONFIG.authenticationType = MTAuthenticationTypeRBA;
RBA is a program from VISA for merchants to be able to route each of their transactions to use 3DS MID or non-3DS based on their risk level.
Under the RBA program, VISA wishes to help merchants protect their transactions without sacrificing their conversion rate due to known issues in current 3DS (delayed/unsent OTP, connection, etc.).
Example: Use 3D secure if customer is using US card and did more than 3 transactions in the last 1 hour. Use non 3D Secure if customer is using Indonesia card and amount is less than 100,000.
This feature can be activated by contacting our support team. Then you need to configure the SDK to use this feature.
Pre Authorization
Set up authorize transaction
CreditCard creditCard = new CreditCard(); creditCard.setType(CardTokenRequest.TYPE_AUTHORIZE); // Set into transaction Request TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); transactionRequest.setCreditCard(creditCard); // Set into SDK instance MidtransSDK.getInstance().setTransactionRequest(transactionRequest);
CC_CONFIG.preauthEnabled = YES;
Pre Authorization is a feature to set the credit card transaction type to authorize. If the transaction type is authorize, then the merchant needs to capture the payment in MAP. To use this feature you need to add a setting to the credit card options in the transaction request.
Multi Currency
Multi currency is a feature that allows display formatting for multiple currencies.
Currently the supported currencies are IDR (Indonesian Rupiah) and SGD (Singapore Dollar).
You can set currency like this:
CONFIG.currency = MidtransCurrencyIDR; CONFIG.currency = MidtransCurrencySGD;
// by default SDK is using IDR TransactionRequest transactionRequest = new TransactionRequest(ORDER_ID, TOTAL_AMOUNT); // Define particular currency TransactionRequest transactionRequest = new TransactionRequest(ORDER_ID, TOTAL_AMOUNT, Currency.IDR);
Direct payment screen
Start Direct Payment Screen
startPaymentUiFlow(CONTEXT, PAYMENT_METHOD) // or run the SDK by using snap token startPaymentUiFlow(CONTEXT, PAYMENT_METHOD, SNAP_TOKEN)
// example : credit card payment MidtransSDK.getInstance().startPaymentUiFlow(MainActivity.this, PaymentMethod.CREDIT_CARD);
// example : credit card payment MidtransUIPaymentViewController *paymentVC = [[MidtransUIPaymentViewController alloc] initWithToken:token andPaymentFeature:MidtransPaymentFeatureCreditCard];
Other PAYMENT_METHOD Possible Value
//bank transfer PaymentMethod.BANK_TRANSFER //bank transfer BCA PaymentMethod.BANK_TRANSFER_BCA //bank transfer Mandiri PaymentMethod.BANK_TRANSFER_MANDIRI //bank transfer Permata PaymentMethod.BANK_TRANSFER_PERMATA //bank transfer BNI PaymentMethod.BANK_TRANSFER_BNI //bank transfer other PaymentMethod.BANK_TRANSFER_OTHER //GO-PAY PaymentMethod.GO_PAY //BCA KlikPay PaymentMethod.BCA_KLIKPAY //KlikBCA PaymentMethod.KLIKBCA //Mandiri Clickpay PaymentMethod.MANDIRI_CLICKPAY //Mandiri e-cash / LINE Pay PaymentMethod.MANDIRI_ECASH //e-Pay BRI PaymentMethod.EPAY_BRI //CIMB Clicks PaymentMethod.CIMB_CLICKS //Indomaret PaymentMethod.INDOMARET //Danamon online PaymentMethod.DANAMON_ONLINE //Akulaku PaymentMethod.AKULAKU
typedef NS_ENUM(NSInteger, MidtransPaymentFeature) { MidtransPaymentFeatureCreditCard, MidtransPaymentFeatureBankTransfer,///va MidtransPaymentFeatureBankTransferBCAVA, MidtransPaymentFeatureBankTransferMandiriVA, MidtransPaymentFeatureBankTransferBNIVA, MidtransPaymentFeatureBankTransferPermataVA, MidtransPaymentFeatureBankTransferOtherVA, MidtransPaymentFeatureKlikBCA, MidtransPaymentFeatureIndomaret, MidtransPaymentFeatureCIMBClicks, MidtransPaymentFeatureCStore, midtranspaymentfeatureBCAKlikPay, MidtransPaymentFeatureMandiriEcash, MidtransPaymentFeatureEchannel, MidtransPaymentFeaturePermataVA, MidtransPaymentFeatureBRIEpay, MidtransPaymentFeatureTelkomselEcash, MidtransPyamentFeatureDanamonOnline, MidtransPaymentFeatureIndosatDompetku, MidtransPaymentFeatureXLTunai, MidtransPaymentFeatureMandiriClickPay, MidtransPaymentFeatureKiosON, MidtransPaymentFeatureGCI, MidtransPaymentFeatureGOPAY, MidtransPaymentCreditCardForm };
Users can go directly to a specific payment screen and skip the default payment method selection screen.
Note:
- Please make sure the payment method is activated via Setting -> Snap Preferences in MAP.
- For other payment methods you just need to change the parameter of payment method.
- For GO-PAY integration in iOS please see section
GO-PAY CONFIGURATION IOS
// ----------------------- // GO-PAY CONFIGURATION IOS // ----------------------- <key>LSApplicationQueriesSchemes</key> <array> <string>gojek</string> </array>
We provide an API method to make direct payments for all payment methods on the Android and iOS SDKs.
GO-PAY Callback Deeplink
Using callback deeplink
// Setup your Transaction Request here then set your GO-PAY deeplink. // Assume you already make TransactionRequest object. transactionRequest.setGopay(new Gopay("demo://midtrans"));
// You need to create your custom scheme URL in your xcode project. In this example the scheme URL is "demo.midtrans" // and then before you start transaction, set callbackSchemeURL with your scheme URL. CONFIG.callbackSchemeURL = @"demo.midtrans://";
Without callback deeplink
// No special setting is needed for this implementation. // To check the transaction status, you can call the getTransactionStatus method. String snapToken = getMidtransSDK().readAuthenticationToken(); MidtransSDK.getInstance().getTransactionStatus(snapToken, new GetTransactionStatusCallback() { @Override public void onSuccess(TransactionStatusResponse response) { // do action for response } @Override public void onFailure(TransactionStatusResponse response, String reason) { // do nothing } @Override public void onError(Throwable error) { // do action if error } });
// iOS without callback deeplink implementation [[MidtransMerchantClient shared] performCheckStatusTransactionWcompletion:^(MidtransTransactionResult * _Nullable result, NSError * _Nullable error) { if (!error) { if (result.statusCode == 200) { //handle success } } else { //handle error } }];
If you use the GO-PAY payment method, the transaction result can be received by the host app in two ways: the first is with a callback deeplink (to the host app) and the second is without a callback deeplink. If you want to use a callback deeplink to get the result from the GO-JEK app, please set your deeplink on the TransactionRequest object that you previously created; please refer to this link for the deeplink implementation. If your app does not use a deeplink, you can skip this step and check your transaction manually.
For example, if you set your deeplink like this demo://midtrans, then the GO-JEK app will return a callback like this for success demo://midtrans?order_id=xxxx&result=success and this for failure demo://midtrans?order_id=xxxx&result=failure.
Note:
- Please make sure the payment method is activated via Setting -> Snap Preferences in MAP.
- Using a deeplink only works with the latest GO-JEK application.
- For more on GO-PAY integration please refer to this link for Android.
Card Registration
Card Register Implementation
MidtransSDK.getInstance().UiCardRegistration(CONTEXT, new CardRegistrationCallback() { @Override public void onSuccess(CardRegistrationResponse response) { String savedTokenId = response.getSavedTokenId(); String maskedCard = response.getMaskedCard(); } @Override public void onFailure(CardRegistrationResponse response, String reason) { // Handle failure here } @Override public void onError(Throwable error) { // Handle error here } });
NSString *clientkey = @"your client key"; NSString *merchantServer = @"your merchant server url"; [[MidtransNetworkLogger shared] startLogging]; [CONFIG setClientKey:clientkey environment:MidtransServerEnvironmentSandbox merchantServerURL:merchantServer]; NSArray *data = [self.expiryDate.text componentsSeparatedByString:@"/"]; NSString *expMonth = [data[0] stringByReplacingOccurrencesOfString:@" " withString:@""]; NSString *expYear = [NSString stringWithFormat:@"%ld",[data[1] integerValue]+2000]; MidtransCreditCard *creditCard = [[MidtransCreditCard alloc] initWithNumber: [self.cardNumberTextFIeld.text stringByReplacingOccurrencesOfString:@" " withString:@""] expiryMonth:expMonth expiryYear:expYear cvv:self.cvv.text]; [[MidtransClient shared] registerCreditCard:creditCard completion:^(MidtransMaskedCreditCard * _Nullable maskedCreditCard, NSError * _Nullable error) { if (!error) { [self saveCreditCardObject:maskedCreditCard]; } else { } }];
Card Registration is a feature that allows customers to save a credit card token without doing a transaction first. This feature is provided for merchants that will make API-to-API payments. To implement it, you just need to follow the sample code.
Notes:
- Card expiry consists of expiryMonth and expiryYear. Both are String and should follow this format: 2 digits for expiryMonth (for example: "02"), and 4 digits for expiryYear (for example: "2020").
- CVV is a String too, with 3 or 4 digits depending on the card. Mastercard, Visa, and JCB usually have a 3-digit CVV, while American Express has 4.
- Number of digits for expiryMonth: it must be 2 digits. (If a merchant tries to input 1 digit for this parameter, an error occurs with no clue in the code about what went wrong.)
- Number of digits for expiryYear: it must be 4 digits.
- Number of digits for cvv: it must be 3 digits.
Custom Expiry
There is a feature in the mobile SDK to enable a custom transaction lifetime. To use this feature you need to add an ExpiryModel object into the TransactionRequest.
Custom Expiry Configuration
// set expiry time ExpiryModel expiryModel = new ExpiryModel(); // set start time long timeInMili = System.currentTimeMillis(); // format the time DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z"); df.setTimeZone(TimeZone.getTimeZone("Asia/Jakarta")); // format the time as a string String nowAsISO = df.format(new Date(timeInMili)); // set the formatted time to expiry model expiryModel.setStartTime(nowAsISO); expiryModel.setDuration(1); // set time unit expiryModel.setUnit(ExpiryModel.UNIT_MINUTE); //set expiry time object to transaction request transactionRequest.setExpiry(expiryModel);
MidtransTransactionExpire *expireTime = [[MidtransTransactionExpire alloc] initWithExpireTime:nil expireDuration:1 withUnitTime:MindtransTimeUnitTypeHour]; // and then [[MidtransMerchantClient shared] requestTransactionTokenWithTransactionDetails:trx itemDetails:@[itm] customerDetails:cst customField:arrayOfCustomField binFilter:binFilter blacklistBinFilter:blacklistBin transactionExpireTime:expireTime completion:^(MidtransTransactionTokenResponse * _Nullable token, NSError * _Nullable error)
Custom Field
Set up custom field
TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); transactionRequest.setCustomField1(CUSTOM_FIELD_1); transactionRequest.setCustomField2(CUSTOM_FIELD_2); transactionRequest.setCustomField3(CUSTOM_FIELD_3); MidtransSDK.getInstance().setTransactionRequest(transactionRequest);
NSMutableArray *arrayOfCustomField = [NSMutableArray new]; [arrayOfCustomField addObject:@{MIDTRANS_CUSTOMFIELD_1:@"custom134"}]; [arrayOfCustomField addObject:@{MIDTRANS_CUSTOMFIELD_2:@"custom3332"}]; [arrayOfCustomField addObject:@{MIDTRANS_CUSTOMFIELD_3:@"custom3333"}]; // and then [[MidtransMerchantClient shared] requestTransactionTokenWithTransactionDetails:self.transactionDetails itemDetails:self.itemDetails customerDetails:self.customerDetails customField:arrayOfCustomField transactionExpireTime:nil completion:^(MidtransTransactionTokenResponse * _Nullable token, NSError * _Nullable error) { ... }];
These 3 fields will be carried with the payment, so they will be available in MAP and in the HTTP notification sent to the merchant.
Custom VA Number
Set up Custom VA Number
// .... // Custom VA Number Permata Bank TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); // custom va number for Permata Bank Transfer must be 10 digits String PERMATA_VA_NUMBER = "1234512345"; // Set into transaction Request transactionRequest.setPermataVa(new BankTransferRequestModel(PERMATA_VA_NUMBER));
// .... // Custom VA Number BCA Bank TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); // custom va number String BCA_VA_NUMBER = "1234512345"; // Set into transaction Request transactionRequest.setBcaVa(new BankTransferRequestModel(BCA_VA_NUMBER)); // ...
// .... // Custom VA Number BNI Bank TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); // custom va number String BNI_VA_NUMBER = "1234512345"; // Set into transaction Request transactionRequest.setBniVa(new BankTransferRequestModel(BNI_VA_NUMBER)); // ...
CONFIG.customBCAVANumber = @"your va string"; CONFIG.customPermataVANumber = @"your va string";
The Mobile SDK provides a feature that allows a customized VA Number for the following payment methods:
- Permata VA (Bank Transfer)
- BCA VA (Bank Transfer)
- BNI VA (Bank Transfer)
Note:
- The custom VA number must consist of numbers.
- The custom VA Number for Permata Bank Transfer must be 10 digits.
Other Bank ATM / VA Switcher
For bank transfers other than BCA, Mandiri, BNI, or Permata, Midtrans previously utilized Permata as the VA processor. Recently, Midtrans added support for BNI VA, giving the merchant the flexibility to choose which one will be used for other bank transfers. The idea is that when one of the processors (either BNI or Permata) is down, the merchant can switch to the other processor, preventing loss of sales. In order to use this functionality, the merchant should enable both BNI VA and Permata VA. The switch itself is located in Snap Preferences in MAP (Merchant Administrator Portal). In the merchant app, no adjustment is needed. The changes will be valid for the next checkout.
Sub Company Code
Set up sub company code for BCA VA
// .... // Transaction request TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); // sub company code must be exactly 5 digits of number String SUB_COMPANY_CODE_BCA = "12321"; // Create BcaBankTransferRequestModel object and then set the SUB_COMPANY_CODE_BCA BcaBankTransferRequestModel bcaRequestModel = new BcaBankTransferRequestModel(); bcaRequestModel.setSubCompanyCode(SUB_COMPANY_CODE_BCA); // Set into transaction Request transactionRequest.setBcaVa(bcaRequestModel);
// sub company code must be exactly 5 digits of number ex:55555 CONFIG.customBCASubcompanyCode = @“55555”;
This feature allows you to pass a sub company code in VA payments. The sub company code must be exactly 5 digits. This feature is only available for the BCA VA payment method.
Custom Recipient
Set up Custom Recipient for Permata
// .... // Custom Recipient Name for Permata Bank TransactionRequest transactionRequest = MidtransSDK.getInstance().getTransactionRequest(); // custom recipient for Permata Bank Transfer must be 20 character at most and in uppercase String PERMATA_RECIPIENT = "SUDARSONO"; // Create request model (you can optionally insert custom VA as method argument here) PermataBankTransferRequestModel permataRequest = new PermataBankTransferRequestModel(); permataRequest.setRecipientName(PERMATA_RECIPIENT); // Set into transaction Request transactionRequest.setPermataVa(permataRequest);
The Mobile SDK provides a feature that allows the merchant to customize the recipient name for Permata VA (Bank Transfer). The recipient name will be displayed on the ATM screen.
Note:
- The custom recipient name must consist of alphanumeric characters and spaces. No symbols allowed.
- The custom recipient name for Permata Bank Transfer must be 20 characters at most, and in uppercase.
Custom Themes
Set up Custom Fonts & Theme Color
/** * Android custom font */ MidtransSDK midtransSDK = MidtransSDK.getInstance(); midtransSDK.setDefaultText("open_sans_regular.ttf"); midtransSDK.setSemiBoldText("open_sans_semibold.ttf"); midtransSDK.setBoldText("open_sans_bold.ttf");
/** * Android custom theme color */ // Create Custom Color Theme String examplePrimary = "#ffffff"; String examplePrimaryDark = "#ffffff"; String exampleSecondary = "#ffffff"; CustomColorTheme colorTheme = new CustomColorTheme(examplePrimary, examplePrimaryDark, exampleSecondary);
// Set Theme color into SDK Builder SdkUIFlowBuilder.init(this, BuildConfig.CLIENT_KEY, BuildConfig.BASE_URL, this) .setColorTheme(colorTheme) .build();
// or // Set Theme color into Initialized SDK MidtransSDK midtransSDK = MidtransSDK.getInstance(); midtransSDK.setColorTheme(colorTheme)
// Set Fonts & Theme color into Initialized SDK MidtransUIFontSource fontSource = [[MidtransUIFontSource alloc] initWithFontNameBold:font_name fontNameRegular:font_name fontNameLight:font_name]; [MidtransUIThemeManager applyCustomThemeColor:themeColor themeFont:fontSource];
The Mobile SDK provides Fonts and Theme color customization.
- Custom Fonts: to use your own fonts in the SDK you have to add the font files to your project.
- Custom Theme Color: by default, the SDK will use the color settings provided in Snap Preferences in MAP. If you want to use your own custom theme color you need to define it in the SDK.
Android Custom Themes
Android Custom Fonts
To use a custom font in the Android SDK, you need to put your font files in the assets directory of your project and then just follow the sample code.
Notes:
- (Android) open_sans_regular.ttf, open_sans_semibold.ttf, open_sans_bold.ttf are the paths of the custom fonts in the assets directory.
Android Custom Theme color
To set custom theme in this SDK you just need to provide 3 colors:
- Primary: For top panels showing amount
- Primary Dark: For bordered button, link button
- Secondary: For text field.
iOS Custom Themes
We’ve created
MidtransUIThemeManager to configure the theme color and font of the Midtrans payment UI.
The MidtransUIThemeManager class needs a UIColor object, so you need to convert your HEX or RGB values to UIColor; here is a nice tool to generate UIColor code.
Note: If you didn’t configure this
MidtransUIThemeManager, your theme color will follow SNAP color configuration on MAP -> SNAP settings preference.
Uikit Custom Settings
We provide settings to make the UI in our SDK more customizable.
Skip Payment Status
Skip Payment Status
// Init custom settings UIKitCustomSetting uiKitCustomSetting = new UIKitCustomSetting(); uiKitCustomSetting.setShowPaymentStatus(false); // hide sdk payment status MidtransSDK.getInstance().setUIKitCustomSetting(uiKitCustomSetting);
UICONFIG.hideStatusPage = YES;
You can skip the payment status page provided by the Midtrans SDK if you want to show your own status page.
Set Default save card options to true
Set default save card options to true
// Init custom settings UIKitCustomSetting uiKitCustomSetting = new UIKitCustomSetting(); uiKitCustomSetting.setSaveCardChecked(true); MidtransSDK.getInstance().setUIKitCustomSetting(uiKitCustomSetting);
CC_CONFIG.setDefaultCreditSaveCardEnabled = YES;
On the credit card payment page, there is a checkbox to save the card, and it is not checked by default. You can make this checkbox checked by default by using this setting.
Skip customer detail
Skip customer detail
UIKitCustomSetting uiKitCustomSetting = MidtransSDK.getInstance().getUIKitCustomSetting(); uiKitCustomSetting.setSkipCustomerDetailsPages(true); MidtransSDK.getInstance().setUIKitCustomSetting(uiKitCustomSetting);
On the first SDK usage, users are required to fill in customer details to make a payment. This setting is only for the Android SDK because the iOS SDK does not have built-in customer details pages. You can skip this screen if you want by using UIKitCustomSetting.
Add customer contact in credit card form
Add customer contact in credit card form
UIKitCustomSetting uiKitCustomSetting = MidtransSDK.getInstance().getUIKitCustomSetting(); uiKitCustomSetting.setShowEmailInCcForm(true); MidtransSDK.getInstance().setUIKitCustomSetting(uiKitCustomSetting);
If you want to show additional fields for customer contact (phone number and / or email), you can use this option. SDK will automatically display additional fields. Please note that both fields are optional.
To use this feature, please make sure you have installed our latest SDK.
Enable Auto Read OTP (Android Only)
Enable Auto Read SMS OTP
// Init custom settings UIKitCustomSetting uiKitCustomSetting = new UIKitCustomSetting(); // enable auto read SMS uiKitCustomSetting.setEnableAutoReadSms(true); // set custom setting to SDK MidtransSDK.getInstance().setUIKitCustomSetting(uiKitCustomSetting);
There is an option to enable auto read OTP for credit card payments. You can enable this option by using the following settings.
Bin Promo (Bin Filter)
Whitelist bin object will be like this:
"whitelist_bins": [ "mandiri", "41111111" ]
Blacklist bin object will be like this:
"blacklist_bins": [ "mandiri", "41111111" ]
So complete bin promo request will be like this:
{ "transaction_details": { "order_id": "ORDER-ID", "gross_amount": 10000 }, "credit_card": { "secure": true, "whitelist_bins": [ "bni", "459920" ], "blacklist_bins": [ "bri", "410505" ] }, " } } }
Credit card bins can be filtered by using whitelist_bins or blacklist_bins, or both. To use these features, the merchant server must intercept the mobile SDK's request and add the list of bins into the request.
Note:
- This object needs to be added into the credit_card object.
- If set of whitelisted bins intersects with set of blacklisted bins, then:
- Everything in whitelisted bins that is not mentioned in blacklisted bins will be accepted.
- Everything else will be denied (see the sketch below).
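To make the intersection rule above concrete, here is a small illustrative sketch (not part of the SDK; the class and method names are our own) of the accept/deny decision for numeric bin prefixes. Bank-name entries such as "mandiri" are resolved to BIN ranges by the backend and are out of scope for this sketch.
Illustrative bin filtering (sketch)
import java.util.List;

public class BinFilter {
    // Accept a card number if its prefix matches a whitelisted bin that is not also blacklisted; deny everything else.
    public static boolean isAccepted(String cardNumber, List<String> whitelistBins, List<String> blacklistBins) {
        for (String bin : whitelistBins) {
            if (cardNumber.startsWith(bin) && !blacklistBins.contains(bin)) {
                return true;
            }
        }
        return false;
    }
}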
Testing Credentials
Here is a list of dummy transaction credentials that can be used for transactions in the Sandbox Environment.
Credit Card
General Testing Card Number
Normal Transaction
3D Secure Transaction
Bank-Specific Testing Card
Accepted 3D Secure Card
Accepted Normal Card
Denied Card
Expiry Date and CVV
Bank Transfer
Direct Debit
e-Wallet
Convenience Store
Status Codes
Goal: Understand all status codes used by API. For more inquiries, please contact us at [email protected] or visit our support web page.
Status codes used by the Midtrans API are categorized into 2xx, 3xx, 4xx and 5xx.
Code 2xx
Code 3xx
Code 4xx
Code 5xx
Going Live With Mobile SDK
Android
To use the production version in Android, please use the production version of the library and also
production client key which you can get from MAP.
Latest Corekit Production version
implementation 'com.midtrans:corekit:1.21.2'
Latest Uikit Production version
implementation 'com.midtrans:uikit:1.21.2'
iOS
To use the production version in iOS, please use the production environment when initializing SDK and also
production client key which you can get from MAP.
Production setup in Objective-C
[CONFIG setClientKey:@"merchant clientkey" environment:MidtransServerEnvironmentProduction merchantServerURL:@"merchant server production URL"];
Production setup in Swift
MidtransConfig.shared().setClientKey("merchant clientkey", environment: .production, merchantServerURL: "merchant server URL")
Merchant Server
To use the production version of Snap, please use the production endpoint of Snap and also
production server key which you can get from MAP.
Production Endpoint:
Frequently Asked Questions
What is a merchant server?
The Midtrans Mobile SDK requires merchants to have a server-side implementation to store the SERVER_KEY and make charge requests. As an implementation reference, please take a look at this wiki.
What is MERCHANT_BASE_URL?
It is the URL of your merchant server (backend).
Why did I get Mixpanel token error?
This error will not happen if you use the build that we have provided on bintray. If for some reason you don’t want to use the build that we have provided, you need to add your own mixpanel token by signing up to mixpanel first.
Why did I get this error “Access denied due to unauthorized transaction, please check client or server key”?
Please check whether you have used the correct client key and server key. The keys should also match the environment such that sandbox keys are used for sandbox environment, and production keys for production. If the keys are already correct but you still get the same error, make sure you have put the correct merchant server URL (MERCHANT_BASE_URL).
Why did I get response message “Token not found” when trying to do checkout?
Please make sure you have installed the correct SDK and supplied the corresponding credentials. In the sandbox environment, please make sure that you use sandbox keys (client key and server key) and the sandbox version of the SDK (the name of the SDK uses the postfix "-SANDBOX", e.g. com.midtrans:uikit:1.16.0-SANDBOX).
Why did I get a blank screen when trying to do checkout?
First, please make sure you’ve already used our latest version of SDK (at least v1.15.3). If you still face this problem, please either do one of these two solutions : (1) supply user detail to SDK, or (2) use UIKitCustomSetting. Please refer to this guide on how to do one of them.
I got an error when trying to do checkout.
First, please make sure you’ve already used our latest version of SDK. Then, please examine the error log captured in Logcat or other error reporting.
If the error looks like this :
Then rest easy, as this is a known problem. Our mobile SDK uses Glide v3.7 as its image loader library, so if your app uses Glide too, make sure you use the same version (or at least v3.x) to avoid this error. We are still working on making sure that in the future no such version clash exists.
Modify base theme to avoid error
<!-- Base application theme. --> <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar"> <!-- Customize your theme here. --> <item name="colorPrimary">@color/colorPrimary</item> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> <item name="windowActionBar">false</item> <item name="windowNoTitle">true</item> </style>
If the error looks like this :
You should have no worries. Since our mobile SDK uses a Toolbar as the ActionBar, you need to disable the ActionBar in your app and use a Toolbar instead. This requires you to modify your base theme.
For more information regarding this particular error, please refer to this Stack Overflow post.
I would like to test bank point. Is there any testing credential that I can use?
Sure. If you want to test bank point in sandbox environment, you can use this card number.
- BNI Point : 4105 0586 8948 1467
- Mandiri Fiestapoin : 4617 0069 5974 6656
The expiry and CVV are the same as for any other testing card. Please refer to this section to see the other testing credentials.
Why is the Gojek App can’t be opened from my app when using iOS SDK?
There is a case that Gojek app is not detected, while using our iOS SDK.
You will need to add LSApplicationQueriesSchemes key to your app’s Info.plist
Add LSApplicationQueriesSchemes key with this value:
<key>LSApplicationQueriesSchemes</key> <array> <string>gojek</string> </array>
How to get pdf_url?
pdf_url is a response property that provides the URL to download the payment instructions for a VA payment. Here's how to get pdf_url:
String pdfUrl = transactionResponse.getPdfUrl();
Handle Async Payment
There are some async payment methods supported by Midtrans:
- Internet Banking and Direct Debit
- KlikBCA
- BCA KlikPay
- Epay BRI
- CIMB Clicks
- LINE Pay e-cash / mandiri e-cash
- Bank Transfer (Virtual Account)
- BCA VA
- Mandiri Bill
- Permata VA
- Convenience Store
- Indomaret
- Installment Method
- Akulaku
If you’re using payment methods from above list, you will not get the latest transaction status after doing the charge using the SDK.
SDK only returns the
pending status when doing the charge to Midtrans Payment API.
Midtrans will update the transaction status after they receive notification from bank when the transaction is really completed.
There are two ways to know the latest transaction status:
- HTTP Notification sent from Midtrans backend
- Get Latest Transaction Status using Midtrans API
Handling HTTP Notification
In order to increase the security aspect, there are several ways to ensure that the notification received is from Midtrans.
Handle HTTP Notification (Using Veritrans PHP)
<?php require_once('Veritrans.php'); Veritrans_Config::$isProduction = false; Veritrans_Config::$serverKey = '<your serverkey>'; $notif = new Veritrans_Notification(); $transaction = $notif->transaction_status; $type = $notif->payment_type; $order_id = $notif->order_id; $fraud = $notif->fraud_status; if ($transaction == 'capture') { // For credit card transaction, we need to check whether transaction is challenge by FDS or not if ($type == 'credit_card'){ if($fraud == 'challenge'){ // TODO set payment status in merchant's database to 'Challenge by FDS' // TODO merchant should decide whether this transaction is authorized or not in MAP echo "Transaction order_id: " . $order_id ." is challenged by FDS"; } else { // TODO set payment status in merchant's database to 'Success' echo "Transaction order_id: " . $order_id ." successfully captured using " . $type; } } } else if ($transaction == 'settlement'){ // TODO set payment status in merchant's database to 'Settlement' echo "Transaction order_id: " . $order_id ." successfully transfered using " . $type; } else if($transaction == 'pending'){ // TODO set payment status in merchant's database to 'Pending' echo "Waiting customer to finish transaction order_id: " . $order_id . " using " . $type; } else if ($transaction == 'deny') { // TODO set payment status in merchant's database to 'Denied' echo "Payment using " . $type . " for transaction order_id: " . $order_id . " is denied."; } else if ($transaction == 'expire') { // TODO set payment status in merchant's database to 'expire' echo "Payment using " . $type . " for transaction order_id: " . $order_id . " is expired."; } else if ($transaction == 'cancel') { // TODO set payment status in merchant's database to 'Denied' echo "Payment using " . $type . " for transaction order_id: " . $order_id . " is canceled."; } ?>
Challenge Response
An additional mechanism we provide to verify the content and the origin of a notification is to challenge it: call the Get Status API for the same order and compare the result. The response is the same as the notification payload.
Signature Key
We added a signature key field to our notifications. The purpose of this signature key is to validate whether the notification originated from Midtrans. If the notification is not genuine, merchants can disregard it. The signature key logic and sample code to generate the signature key are shown below.
Signature Key Logic
SHA512(order_id + status_code + gross_amount + serverkey)
Sample code to generate the signature key
<?php
$orderId = "1111";
$statusCode = "200";
$grossAmount = "100000.00";
$serverKey = "askvnoibnosifnboseofinbofinfgbiufglnbfg";
$input = $orderId . $statusCode . $grossAmount . $serverKey;
$signature = openssl_digest($input, 'sha512');
echo "INPUT: " , $input . "<br/>";
echo "SIGNATURE: " , $signature;
?>
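For comparison, here is a minimal, hedged sketch of verifying the signature on the merchant side in Python. The payload field names (order_id, status_code, gross_amount, signature_key) follow the notification fields described above; confirm them against the actual notification body you receive.

    import hashlib

    def is_genuine(notification: dict, server_key: str) -> bool:
        # Rebuild SHA512(order_id + status_code + gross_amount + serverkey)
        raw = (
            notification["order_id"]
            + notification["status_code"]
            + notification["gross_amount"]
            + server_key
        )
        expected = hashlib.sha512(raw.encode("utf-8")).hexdigest()
        return expected == notification.get("signature_key")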
Best Practice to Handle notification
- Always use an HTTPS endpoint. It is secure, and MITM attacks are prevented because we validate that the certificate matches the host. Do not use self-signed certificates.
- Always implement the notification handler in an idempotent way. In extremely rare cases we may send the notification for the same transaction event more than once, and duplicates should not cause double entries on the merchant end. A simple way of achieving this is to use order_id as the key for tracking entries.
- Always check the signature hash of the notification. This confirms that the notification was actually sent by Midtrans, because the hash incorporates the shared secret (server key); nobody else can build this signature hash.
- Always check all the following three fields to confirm successful transactions
- status code: Should be 200 for successful transactions
- fraud status: ACCEPT
- transaction status : settlement/capture
- We strive to send the notification immediately after the transaction has occurred, but in extremely rare cases, it may be delayed because of transaction spikes. If you have not received a notification, please use the Status API to check the current status of the transaction.
- It is safe to call Status API to get the latest status of the transaction/order on each notification.
- We set the HTTP timeout to 30 seconds. Please strive to keep the response time of the HTTP notifications under 5 seconds.
- In extremely rare cases we may send HTTP notifications out of order, i.e. a settlement status notification arriving before the pending status notification. It is important that such stale notifications are ignored. Here's the state transition diagram that you could use. But again, use our /status API to confirm the actual status.
- We send the notification body as JSON, please parse the JSON with a JSON parser. Always expect new fields will be added to the notification body, so parse it in a non strict format, so if the parser sees new fields, it should not throw exception. It should gracefully ignore the new fields. This allows us to extend our notification system for newer use cases without breaking old clients.
- Always use the right HTTP Status code for responding to the notification, we handle retry for error cases differently based on the status code
- for 2xx: No retries, it is considered success
- for 500: We will retry once at a 1-minute interval
- for 503: Retry 4 times
- for 400/404: Retry 2 times at a 1-minute interval
- for all other failures: Retry 5 times at 1 minute interval
- Redirection
- for 307/308: The request will be repeated against the new URL using the POST method and the same notification body. A maximum of 5 redirections is allowed.
- for 301/302/303: The job will be marked as failed without further retries, and the merchant will be notified via email. We suggest either using 307/308 or updating the notification endpoint settings in the merchant portal.
The following are the standard types of notifications. Note different types of notifications can be added in addition to the ones below. Also new fields may be added to the existing notification, please confirm with the latest documentation for the exact fields.
Get Transaction Status
Please refer to this docs. | http://mobile-docs.midtrans.com/ | 2018-11-13T03:17:09 | CC-MAIN-2018-47 | 1542039741192.34 | [array(['https://trello-attachments.s3.amazonaws.com/58dcb542d7cd40dd4f1108bf/1004x578/ba7f15ccae1b2c0444d6478630dac5ff/mobile-sdk-flow.png',
'Transaction Flow Figure Transaction Flow Figure'], dtype=object)
array(['https://snap-docs.midtrans.com/images/signaturekey1.png',
'Challenge Response'], dtype=object) ] | mobile-docs.midtrans.com |
DebugBreak
Sets a program breakpoint while debugging.
void DebugBreak();
Return Value
No return value.
Note
Execution of an MQL4 program is interrupted only if a program is started in a debugging mode. The function can be used for viewing values of variables and/or for further step-by-step execution. | https://docs.mql4.com/common/debugbreak | 2018-11-13T03:39:43 | CC-MAIN-2018-47 | 1542039741192.34 | [] | docs.mql4.com |
When you click Get Started on the Analytics Welcome page or navigate to Settings > Data Sources, Citrix Analytics automatically discovers all Endpoint Management data sources associated with your Citrix Cloud account.
A site card for Endpoint Management. | https://docs.citrix.com/en-us/citrix-analytics/getting-started/endpoint-management-data-source.html | 2018-11-13T03:51:48 | CC-MAIN-2018-47 | 1542039741192.34 | [array(['/en-us/citrix-analytics/media/endpoint-enable-analytics.png',
'Endpoint data source'], dtype=object) ] | docs.citrix.com |
Command capstone-disassemble
If you have installed the capstone library and its Python bindings, you can use it to disassemble any memory in your debugging session. This plugin was created to offer an alternative to GDB's disassemble function, which sometimes gets things mixed up.
You can use its alias cs-disassemble, or just cs, with the location to disassemble at. If not specified, it will use $pc.
gef➤ cs main
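For context, this is roughly what the underlying capstone Python bindings do; a minimal, hedged sketch (the byte string and load address below are made up for illustration):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    code = b"\x55\x48\x89\xe5\xc3"      # hypothetical bytes: push rbp; mov rbp, rsp; ret
    md = Cs(CS_ARCH_X86, CS_MODE_64)    # x86-64 disassembler

    for insn in md.disasm(code, 0x400000):  # 0x400000 is an arbitrary base address
        print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))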
| https://gef.readthedocs.io/en/master/commands/capstone-disassemble/ | 2018-11-13T03:28:23 | CC-MAIN-2018-47 | 1542039741192.34 | [array(['https://i.imgur.com/wypt7Fo.png', 'cs-disassemble'], dtype=object)] | gef.readthedocs.io |
Common Config Options¶
All Options¶
To get the latest up-to-date list of all available options in Cinch, consult the defaults file for each Ansible role in the code base, at cinch/roles/<role_name>/defaults/main.yml. Every variable is documented there, along with its default value.
Jenkins Plugins¶
Cinch configures what has been deemed and tested as a reasonable baseline set of Jenkins plugins. Typically it will not be necessary to alter or remove elements from this list. The current list can be found in the file cinch/files/jenkins-plugin-lists/default.txt. Opening this file will give a list of plugins, one per line. A specific version of a plugin can be specified by a line that reads “myplugin==1.2.3” and will install specifically version 1.2.3 of that plugin.
If the set of default plugins is not acceptable to a user, they can override the list by defining the variable jenkins_plugins in their host or group vars for a Cinch run to include the items they want. This variable is an array of strings, each string being the equivalent of one line from the default.txt file.
If a user only wants to add some plugins that are not present in the default set, without completely overriding it, this can be accomplished by adding entries to jenkins_extra_plugins in the same format as entries in the jenkins_plugins variable. This allows the user to install more plugins than the default without needing to worry about falling out of sync with the default set of plugins.
autopilot.matchers.Eventually
self.assertThat(window.maximized, Eventually(Equals(True))):
self.assertThat(window.height, Eventually(GreaterThan(200)))
Callable Objects:
self.assertThat( autopilot.platform.model, Eventually(Equals("Galaxy Nexus")))
In this example we’re using the autopilot.platform.model function as a callable. In this form, Eventually matches against the return value of the callable.
This can also be used to use a regular python property inside an Eventually matcher:
self.assertThat(lambda: self.mouse.x, Eventually(LessThan(10))):
self.assertThat(foo.bar, Eventually(Equals(123), timeout=30))
Warning
The Eventually matcher does not work with any other matcher that expects a callable argument (such as testtools’ ‘Raises’ matcher) | https://docs.ubuntu.com/phone/en/apps/api-autopilot-current/autopilot.matchers.Eventually.html | 2018-11-13T02:48:06 | CC-MAIN-2018-47 | 1542039741192.34 | [] | docs.ubuntu.com |
Sketch
Everything about the system in Sketch is designed so our designers spend less time needlessly creating and re-inventing our styles (which, in theory, can be updated via Craft Library).
Patterns are trickier to manage with symbols, but we should strive to be as consistent as possible when designing with our established patterns, featuring our component symbols and foundational text styles and colors.
The system text styles in Sketch are defined in the 'rivendell-text-styles.json' file in the Rivendell Design System folder on Dropbox.
- You should now have all the Rivendell system text styles! | https://rivendell-docs.netlify.com/sketch/ | 2018-11-13T03:06:23 | CC-MAIN-2018-47 | 1542039741192.34 | [array(['system-colors.png', 'System colors in Sketch'], dtype=object)
array(['system-text-styles.png', 'System text styles in Sketch'],
dtype=object)
array(['system-control-symbols.png', 'Control symbols in Sketch'],
dtype=object)
array(['../responsive/template-weworkcom.png',
'Responsive Sketch template'], dtype=object)] | rivendell-docs.netlify.com |
tile corner trim subway inside tub shower with contemporary candles bathroom traditional and angle bathtub schluter.
'tile corner trim subway inside tub shower with contemporary candles bathroom traditional and angle bathtub schluter tile corner trim subway inside tub shower with contemporary candles bathroom traditional and angle bathtub schluter'],
dtype=object) ] | top-docs.co |
Configuring Databases¶
NodeBB has a Database Abstraction Layer (DBAL) that allows one to write drivers for their database of choice. Currently we have the following options:
- Redis (default, see installation guides)
- Mongo
Note
If you would like to write your own database driver for NodeBB, please visit our community forum and we can point you in the right direction. | https://docs.archive.nodebb.org/en/latest/configuring/databases.html | 2018-11-13T03:45:19 | CC-MAIN-2018-47 | 1542039741192.34 | [] | docs.archive.nodebb.org |
CREATE TABLE
Define a new table.
Define a new table.
Synopsis¶
CREATE TABLE keyspace_name.table_name ( column_definition, column_definition, ...) WITH property AND property ...
column_definition is:
column_name cql_type | column_name cql_type PRIMARY KEY | PRIMARY KEY ( partition_key ) | column_name collection_type
cql_type is a type, other than a collection or a counter type. CQL data types lists the types. Exceptions: ADD supports a collection type and also, if the table is a counter, a counter type.
partition_key is:
column_name | ( column_name1 , column_name2, column_name3 ... ) | ((column_name1*, column_name2*), column3*, column4* . . . )
column_name1 is the partition key.
column_name2, column_name3 ... are clustering columns.
column_name1*, column_name2* are partitioning keys.
column_name3*, column_name4* ... are clustering columns.
collection_type is:
LIST <cql_type> | SET <cql_type> | MAP <cql_type, cql_type>
property is a one of the CQL table property, enclosed in single quotation marks in the case of strings, or one of these directives:
- COMPACT STORAGE
- CLUSTERING ORDER followed by the clustering order specification.
Synopsis legend¶
Description¶
CREATE TABLE creates a new table under the current keyspace. You can also use the alias CREATE COLUMNFAMILY. Valid table names are strings of alphanumeric characters and underscores, which begin with a letter. If you add the keyspace name followed by a period to the name of the table, Cassandra creates the table in the specified keyspace, but does not change the current keyspace; otherwise, if you do not use a keyspace name, Cassandra creates the table within the current keyspace.
Defining a primary key column¶
The only schema information that must be defined for a table is the primary key and its associated data type. Unlike earlier versions, CQL 3 does not require a column in the table that is not part of the primary key. A primary key can have any number (1 or more) of component columns.
If the primary key consists of only one column, you can use the keywords, PRIMARY KEY, after the column definition:
CREATE TABLE users (
  user_name varchar PRIMARY KEY,
  password varchar,
  gender varchar,
  session_token varchar,
  state varchar,
  birth_year bigint
);
Alternatively, you can declare the primary key consisting of only one column in the same way as you declare a compound primary key. Do not use a counter column for a key.
Using a compound primary key¶
A compound primary key consists of more than one column. Cassandra treats the first column declared in a definition as the partition key. To create a compound primary key, use the keywords, PRIMARY KEY, followed by the comma-separated list of column names enclosed in parentheses.
CREATE TABLE emp (
  empID int,
  deptID int,
  first_name varchar,
  last_name varchar,
  PRIMARY KEY (empID, deptID)
);
Using a composite partition key¶
A composite partition key is a partition key consisting of multiple columns. You use an extra set of parentheses to enclose columns that make up the composite partition key. The columns within the primary key definition but outside the nested parentheses are clustering columns. These columns form logical sets inside a partition to facilitate retrieval.
CREATE TABLE Cats (
  block_id uuid,
  breed text,
  color text,
  short_hair boolean,
  PRIMARY KEY ((block_id, breed), color, short_hair)
);
For example, the composite partition key consists of block_id and breed. The clustering columns, color and short_hair, determine the clustering order of the data. Generally, Cassandra will store columns having the same block_id but a different breed on different nodes, and columns having the same block_id and breed on the same node.
Defining a column¶
You assign columns a type during table creation. Column types, other than collection-type columns, are specified as a parenthesized, comma-separated list of column name and type pairs.
This example shows how to create a table that includes collection-type columns: map, set, and list.
CREATE TABLE users (
  userid text PRIMARY KEY,
  first_name text,
  last_name text,
  emails set<text>,
  top_scores list<int>,
  todo map<timestamp, text>
);
Setting a table property¶
Using the optional WITH clause and keyword arguments, you can configure caching, compaction, and a number of other operations that Cassandra performs on new table. You can use the WITH clause to specify the properties of tables listed in CQL table properties. Enclose a string property in single quotation marks. For example:
CREATE TABLE MonkeyTypes (
  block_id uuid,
  species text,
  alias text,
  population varint,
  PRIMARY KEY (block_id)
) WITH comment='Important biological records'
  AND read_repair_chance = 1.0;

CREATE TABLE DogTypes (
  block_id uuid,
  species text,
  alias text,
  population varint,
  PRIMARY KEY (block_id)
) WITH compression = { 'sstable_compression' : 'DeflateCompressor', 'chunk_length_kb' : 64 }
  AND compaction = { 'class' : 'SizeTieredCompactionStrategy', 'min_threshold' : 6 };
You can specify using compact storage or clustering order using the WITH clause.¶
Using compact storage¶
The compact storage directive is used for backward compatibility of CQL 2 applications and data in the legacy (Thrift) storage engine format. To take advantage of CQL 3 capabilities, do not use this directive in new applications. When you create a table using compound primary keys, for every piece of data stored, the column name needs to be stored along with it. Instead of each non-primary key column being stored such that each column corresponds to one column on disk, an entire row is stored in a single column on disk, hence the name compact storage.
CREATE TABLE sblocks (
  block_id uuid,
  subblock_id uuid,
  data blob,
  PRIMARY KEY (block_id, subblock_id)
) WITH COMPACT STORAGE;
Using the compact storage directive prevents you from defining more than one column that is not part of a compound primary key. A compact table using a primary key that is not compound can have multiple columns that are not part of the primary key.A compact table that uses a compound primary key must define at least one clustering column. Columns cannot be added nor removed after creation of a compact table. Unless you specify WITH COMPACT STORAGE, CQL creates a table with non-compact storage.
Using clustering order¶
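The body of this section did not survive extraction. As a hedged illustration of the directive it refers to, a table can declare the on-disk sort order of a clustering column with the CLUSTERING ORDER BY option (table and column names below are hypothetical):

CREATE TABLE timeseries (
  event_type text,
  insertion_time timestamp,
  event blob,
  PRIMARY KEY (event_type, insertion_time)
) WITH CLUSTERING ORDER BY (insertion_time DESC);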
- The search box background and text colors.
- Modify footer content. Add or remove parts. Move footer outside of the main area (adjust the background so it appears separate from the main body).
- Here's one I haven't figured out yet: How to move or modify the Copyright statement that appear just below the page content. It appears to be part of the page content but really isn't. I'd prefer to move this into the footer section, or at a minimum have a horizontal rule just above it. Any ideas???
- How to create a new style attribute and make it available within the administrator's edit content page. This would be used so one section of text in the middle of a paragraph could be easily highlighted without just picking a different font/bold/italic/underline/color, and could also include a different size, something that is not available within the editor menus. (Note: I tried utilising the various heading controls (h3, h4, h5) for this, except they each forced the associated text to a new line. Maybe an inline parameter will resolve this. Still need to give this a try.)
Joomla! Platform Portal
The Joomla! Platform provides a toolbench for PHP developers to build web and command line applications.
You can download the platform from GitHub.
Key Links
Platform Basics
Building a Simple Platform Application
Documentation is in three main places, api.joomla.org, the docs folder at github and this wiki. All of these are works in progress and your contributions to improving them are welcome. | https://docs.joomla.org/index.php?title=Portal:Platform&diff=61582&oldid=61581 | 2015-06-30T06:02:03 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
There is an issue with Ranger not being able to map a user in the group 'Hdp_admins' to a policy that allows/denies access to the group 'Hdp_admins'. The issue lies in the capital letters that may appear in an AD group name definition.
Most HDP components get the group information for a user via the SSSD daemon. When asked for the groups the user ‘d.threpe’ belongs to we get:
[centos@rjk-hdp25-m-01 ~]$ groups d.threpe d.threpe : domain_users hdp_admins hadoop
So ‘hdp_admins’ all in lower case. Ranger does not treat this as the same value as ‘Hdp_admins’ which came via the group sync and was applied to some policies.
There is no way to make the group sync write or retrieve the group names all in lower case since there is no AD attribute that rewrites it in lowercase.
Fortunately, this issue can be worked around (until it gets solved). The solution is to define a local group in Ranger as a shadow group of a real group from AD, but all in lower case:
If we now create policies and use that lower case ‘shadow’ group literal the result is that policies are correctly mapped to the AD groups again:
*The ‘Hdp_admins’ entry does not have to be there, it is shown for clarification only. ‘hdp_admins’ is necessary to make it work. | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/ad_integration_issue_ranger_group_mapping.html | 2018-01-16T15:33:24 | CC-MAIN-2018-05 | 1516084886437.0 | [array(['figures/7/figures/ad_integration_25.png', None], dtype=object)
array(['figures/7/figures/ad_integration_26.png', None], dtype=object)] | docs.hortonworks.com |
Create a Full Database Backup (SQL Server)
For SQL Server 2014, go to Create a Full Database Backup (SQL Server).
This topic describes how to create a full database backup in SQL Server 2017 using SQL Server Management Studio, Transact-SQL, or PowerShell.
For information on SQL Server backup to the Azure Blob storage service, see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service and SQL Server Backup to URL.
Before you begin
Limitations and Restrictions
The BACKUP statement is not allowed in an explicit or implicit transaction.
Backups created by more recent version of SQL Server cannot be restored in earlier versions of SQL Server.
For an overview of, and a deeper dive into, backup concepts and tasks, see Backup Overview (SQL Server) before proceeding.
Recommendations
As a database increases in size full database backups take more time to complete, and require more storage space. For a large database, consider supplementing a full database backup with a series of differential database backups. For more information, see SQL Server Backup to URL.
Estimate the size of a full database backup by using the sp_spaceused system stored procedure.
By default, every successful backup operation adds an entry in the SQL Server error log and in the system event log. If you back up frequently, these success messages will accumulate quickly, resulting in huge error logs! This can make finding other messages difficult. In such cases you can suppress these backup log entries by using trace flag 3226 if none of your scripts depend on those entries. For more information, see Trace Flags (Transact-SQL).
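For example, the following enables trace flag 3226 globally so that successful-backup messages are suppressed. This is only a sketch; evaluate the trade-off for your environment before enabling it.

-- Suppress successful-backup entries in the error log for the whole instance
DBCC TRACEON (3226, -1);

-- Verify which trace flags are currently active
DBCC TRACESTATUS (-1);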
Security.
When you specify a back up task by using SQL Server Management Studio, you can generate the corresponding Transact-SQL BACKUP script by clicking the Script button and selecting a script destination.
Back up a database
After connecting to the appropriate instance of the Microsoft SQL Server Database Engine, in Object Explorer, click the server name to expand the server tree.
Expand Databases, and either select a user database or expand System Databases and select a system database.
Right-click the database, point to Tasks, and then click Back Up. The Back Up Database dialog box appears.
General Page
In the Database drop-down list, verify the database name. Optionally, you can select a different database from the list.
The Recovery model text box is for reference only. You can perform a database backup for any recovery model (FULL, BULK_LOGGED, or SIMPLE).
In the Backup type drop-down list, select Full.
Note that after creating a full database backup, you can create a differential database backup; for more information, see Create a Differential Database Backup (SQL Server).
Optionally, you can select the Copy-only backup checkbox to create a copy-only backup. A copy-only backup is a SQL Server backup that is independent of the sequence of conventional SQL Server backups. For more information, see Copy-Only Backups (SQL Server). A copy-only backup is not available for the Differential backup type.
For Backup component, select the Database radio button.
In the Destination section, use the Back up to drop-down list to select the backup destination. Click Add to add additional backup objects and/or destinations.
To remove a backup destination, select it and click Remove. To view the contents of an existing backup destination, select it and click Contents.
Media Options Page
To view or select the media options, click Media Options in the Select a page pane.
Select an Overwrite Media option, by clicking one of the following:
Important
The Overwrite media option is disabled if you selected URL as the backup destination in the General page. For more information, see Back Up Database (Media Options Page)
Back up to the existing media set
Important
If you plan to use encryption, do not select this option. If you select this option, the encryption options in the Backup Options page will be disabled. Encryption is not supported when appending to the existing backup set.
For this option, click either Append to the existing backup set or Overwrite all existing backup sets. For more information, see Back Up Database (Media Options Page).
In the Reliability section, optionally check:
Verify backup when finished.
Perform checksum before writing to media. For information on checksums, see Possible Media Errors During Backup and Restore (SQL Server).
Continue on error.
The Transaction log section is inactive unless you are backing up a transaction log (as specified in the Backup type section of the General page).
In the Tape drive section, the Unload the tape after backup option is active if you are backing up to a tape drive (as specified in the Destination section of the General page). Clicking this option activates the Rewind the tape before unloading option.
Backup Options Page
To view or select the backup options, click Backup Options in the Select a page pane.
In the Name text box either accept the default backup set name, or enter a different name for the backup set.
In the Description text box, you can optionally enter a description of the backup set.
In the Compression section, use the Set backup compression drop-down list to select the desired compression level..
For more information on backup compression settings, see View or Configure the backup compression default Server Configuration Option
In the Encryption section, use the Encrypt backup checkbox to decide whether to use encryption for the backup. Use the Algorithm drop-down list to select an encryption algorithm. Use the Certificate or Asymmetric key drop-down list, to select an existing Certificate or Asymmetric key. Encryption is supported in SQL Server 2014 or later. For more details on the Encryption options, see Back Up Database (Backup Options Page).
You can use the Maintenance Plan Wizard to create database backups.
Examples
A. Full back up to disk to default location
In this example the
Sales database will be backed up to disk at the default backup location. A back up of
Sales has never been taken.
In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
Expand Databases, right-click
Sales, point to Tasks, and then click Back Up....
Click OK.
B. Full back up to disk to non-default location
In this example the
Sales database will be backed up to disk at
E:\MSSQL\BAK. Previous back ups of
Sales have been taken.
In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
Expand Databases, right-click
Sales, point to Tasks, and then click Back Up....
On the General page in the Destination section select Disk from the Back up to: drop-down list.
Click Remove until all existing backup files have been removed.
Click Add and the Select Backup Destination dialog box will open.
Enter
E:\MSSQL\BAK\Sales_20160801.bakin the file name text box.
Click OK.
Click OK.
C. Create an encrypted backup
In this example the
Sales database will be backed up with encryption to the default backup location. A database master key has already been created. A certificate has already been created called
MyCertificate. A T-SQL example of creating a database master key and certificate can be seen at Create an Encrypted Backup.
In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
Expand Databases, right-click
Sales, point to Tasks, and then click Back Up....
On the Media Options page in the Overwrite media section select Back up to a new media set, and erase all existing backup sets.
On the Backup Options page in the Encryption section select the Encrypt backup check box.
From the Algorithm drop-down list select AES 256.
From the Certificate or Asymmetric key drop-down list select
MyCertificate.
Click OK.
D. Back up to the Azure Blob storage service
Common Steps
The three examples below perform a full database backup of
Sales to the Microsoft Azure Blob storage service. The storage Account name is
mystorageaccount. The container is called
myfirstcontainer. For brevity, the first four steps are listed here once and all examples will start on Step 5.
In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
Expand Databases, right-click
Sales, point to Tasks, and then click Back Up....
On the General page in the Destination section select URL from the Back up to: drop-down list.
Click Add and the Select Backup Destination dialog box will open.
D1. Striped Backup to URL and a SQL Server credential already exists
A stored access policy has been created with read, write, and list rights. A SQL Server credential was created using a Shared Access Signature that is associated with the stored access policy.
Select the Azure storage container: text box
In the Backup File: text box enter
Sales_stripe1of2_20160601.bak.
Click OK.
Repeat Steps 4 and 5.
In the Backup File: text box enter
Sales_stripe2of2_20160601.bak.
Click OK.
Click OK.
D2. A shared access signature exists and a SQL Server Credential does not exist
Enter the Azure storage container: text box
Enter the shared access signature in the Shared Access Policy: text box.
Click OK.
Click OK.
D3. A shared access signature does not exist
Click the New container button and the Connect to a Microsoft Subscription dialog box will open.
Complete the Connect to a Microsoft Subscription dialog box and then click OK to return the Select Backup Destination dialog box. See See Connect to a Microsoft Azure Subscription for additional information.
Click OK at the Select Backup Destination dialog box.
Click OK.
Using Transact-SQL:
BACKUP DATABASE database_name
  TO backup_device [ ,...n ]
  [ WITH with_options [ ,...o ] ];
Important
Use extreme caution when you are using the FORMAT clause of the BACKUP statement because this destroys any backups that were previously stored on the backup media.
Examples (Transact-SQL)
A. Back up to a disk device
The following example backs up the complete AdventureWorks2012 database to disk, by using
FORMAT to create a new media set.
USE AdventureWorks2012; GO BACKUP DATABASE AdventureWorks2012 TO DISK = 'Z:\SQLServerBackups\AdventureWorks2012.Bak' WITH FORMAT, MEDIANAME = 'Z_SQLServerBackups', NAME = 'Full Backup of AdventureWorks2012'; GO
B. Back up to a tape device
The following example backs up the complete AdventureWorks2012 database to tape, appending the backup to the previous backups.
USE AdventureWorks2012; GO BACKUP DATABASE AdventureWorks2012 TO TAPE = '\\.\Tape0' WITH NOINIT, NAME = 'Full Backup of AdventureWorks2012'; GO
C. Back
Using PowerShell
Use the Backup-SqlDatabase cmdlet. To explicitly indicate that this is a full database backup, specify the -BackupAction parameter with its default value, Database. This parameter is optional for full database backups.
Examples
A. Full local backup
The following example creates a full database backup of the
MyDB database to the default backup location of the server instance
Computer\Instance. Optionally, this example specifies -BackupAction Database.
Backup-SqlDatabase -ServerInstance Computer\Instance -Database MyDB -BackupAction Database
B. Full backup to Microsoft Azure
The following example creates a full backup of the database Sales on the MyServer instance to the Microsoft Azure Blob storage service. A stored access policy has been created with read, write, and list rights, and a SQL Server credential was created using a Shared Access Signature that is associated with the stored access policy. The PowerShell command uses the BackupFile parameter to specify the location (URL) and the backup file name.
import-module sqlps; $container = ''; $FileName = 'Sales.bak'; $database = 'Sales'; $BackupFile = $container + '/' + $FileName ; Backup-SqlDatabase -ServerInstance "MyServer" –Database $database -BackupFile $BackupFile;
To set up and use the SQL Server PowerShell provider
Related Tasks
Back Up a Database (SQL Server)
Create a Differential Database Backup (SQL Server)
Restore a Database Backup Using SSMS
Restore a Database Backup Under the Simple Recovery Model (Transact-SQL)
Restore a Database to the Point of Failure Under the Full Recovery Model (Transact-SQL)
Restore a Database to a New Location (SQL Server)
Use the Maintenance Plan Wizard
See also
Troubleshooting SQL Server backup and restore operations
Backup Overview (SQL Server)
Transaction Log Backups (SQL Server)
Media Sets, Media Families, and Backup Sets (SQL Server)
sp_addumpdevice (Transact-SQL)
BACKUP (Transact-SQL)
Back Up Database (General Page)
Back Up Database (Backup Options Page)
Differential Backups (SQL Server)
Full Database Backups (SQL Server) | https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server | 2018-01-16T16:07:52 | CC-MAIN-2018-05 | 1516084886437.0 | [array(['../../includes/media/yes.png', 'yes'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)] | docs.microsoft.com |
To review learner answers to the problems in your course, you can review the answer submitted by a selected learner for a specific problem or download a course-wide report of any learner answers to a specific problem. You can also download an answer distribution report for course problems.
Learner answer distribution data, including both charts and reports, is also available from edX Insights. For more information, see Using edX Insights.
You can review a single learner’s complete submission history for a specific problem, or the answers submitted by all learners for that problem. For either a single learner or all learners, you can review the exact response submitted, the number of attempts made, and the date and time of the submission.
Before you can check the answer or answers submitted by a learner, you need the learner’s username. For more information about how to obtain usernames, see Download or View Learner Data.
To review a response submitted by a learner, follow these steps.
Information about the response or responses provided by the learner displays. For more information, see Interpret a Learner’s Submission History.
To close the Submission History Viewer, click on the browser page outside of the viewer.
The Submission History Viewer shows every timestamped database record of the interactions between a learner and a problem, which can include processes completed both in the browser and on the server. These records appear with the most recent interaction at the top of the Submission History Viewer, followed by each previous interaction.
This topic provides an example submission history for a CAPA problem with guidelines that can help you interpret a submission history. The number and complexity of the records that appear in this report vary based on the type of problem and the settings and features defined.
Record 1: Problem Viewed (Server)
The first interaction, shown at the bottom of the Submission History, records when the server delivered the problem component to the browser for the learner to view.
#1: 2015-09-04 08:34:53+00:00 (America/New_York time) Score: None / None { "input_state": { "e58b639b86db44ca89652b30ea566830_2_1": {} }, "seed": 1 }
Record 2: Problem Checked (Browser)
The next interaction shown as you scroll up from the bottom records when the
learner selected Submit in the browser to submit an answer. Note that this
record does not contain the actual answer submitted. The answer choice is
indicated using a choice identifier:
choice_1 in this example.
Note
The numbering of choice identifiers starts at
choice_0, so that
choice_0 represents your first answer choice,
choice_1 represents
your second answer choice, and so on.
#2: 2015-09-04 08:35:03+00:00 (America/New_York time) Score: 0.0 / 1.0 { "input_state": { "e58b639b86db44ca89652b30ea566830_2_1": {} }, "seed": 1, "student_answers": { "e58b639b86db44ca89652b30ea566830_2_1": "choice_1" }
Record 3: Problem Checked (Server)
The next interaction records the results of the server processing that occurred
after the learner submitted the answer. This record includes
student_answers with the submitted answer value, along with
attempts,
correctness, and other values.
#3: 2015-09-03 18:15:10+00:00 (America/New_York time) Score: 0.0 / 1.0 { "attempts": 1, "correct_map": { "e58b639b86db44ca89652b30ea566830_2_1": { "answervariable": null, "correctness": "incorrect", "hint": "", "hintmode": null, "msg": "", "npoints": null, "queuestate": null } }, "done": true, "input_state": { "e58b639b86db44ca89652b30ea566830_2_1": {} }, "last_submission_time": "2015-09-03T18:15:10Z", "seed": 1, "student_answers": { "e58b639b86db44ca89652b30ea566830_2_1": "choice_1" } }
Record 4: Problem Retried (Browser)
When a problem gives learners multiple attempts at the correct answer, and the learner tries again, an additional record is added when a learner selects Submit again. The server has not yet processed the new submission, so the data in the record is almost identical to the data in record 3.
Record 5: Problem Retried (Server)
The most recent interaction in this example records the results after the
learner attempts the problem again and submits a different answer. Note the
differences between values in this record and in record 3, including the
reported
Score and the values for
student_answers,
attempts, and
correctness.
#5: 2015-09-03 18:15:17+00:00 (America/New_York time) Score: 1.0 / 1.0 { "attempts": 2, "correct_map": { "e58b639b86db44ca89652b30ea566830_2_1": { "answervariable": null, "correctness": "correct", "hint": "", "hintmode": null, "msg": "", "npoints": null, "queuestate": null } }, "done": true, "input_state": { "e58b639b86db44ca89652b30ea566830_2_1": {} }, "last_submission_time": "2015-09-03T18:15:17Z", "seed": 1, "student_answers": { "e58b639b86db44ca89652b30ea566830_2_1": "choice_2" } }
Before you can download a report of all learner answers for a problem, you need the unique identifier of the problem that you want to investigate.
To download the Student State report, which is a report of the answers submitted for a problem by every learner, follow these steps.
{course_id}_student_state_from_{problem_location}_{date}.csvfile.
The Student State report contains a row for each learner who has viewed a problem or submitted an answer for the problem, identified by username. The State column reports the results of the server processing for each learner’s most recently submitted answer.
When you open the report, the value in the State column appears on a single line. This value is a record in JSON format. An example record for a text input CAPA problem follows.
{"}}
You can use a JSON “pretty print” tool or script to make the value in the State column more readable, as in the following example.
{ " } }
When you add line breaks and spacing to the value in the State column for this CAPA problem, it becomes possible to recognize its similarity to the server problem check records in the Submission History. For more information, see Interpret a Learner’s Submission History.
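If you prefer to do this in bulk, a small script can pretty-print the State column for every row of the downloaded report. This is a hedged sketch: the report file name is hypothetical and the exact column headers ("State", "username") are assumptions based on the description above.

    import csv
    import json

    with open("student_state_report.csv", newline="") as f:   # hypothetical file name
        for row in csv.DictReader(f):
            try:
                state = json.loads(row["State"])
            except (KeyError, ValueError):
                continue  # skip rows without a parseable State value
            print(row.get("username", "<unknown>"))
            print(json.dumps(state, indent=2, sort_keys=True))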
A State value that appears as follows indicates a learner who has viewed a CAPA problem, but not yet submitted an answer.
{"seed": 1, "input_state": {"e58b639b86db44ca89652b30ea566830_2_1": {}}}
For open response assessment problems, the State value appears as follows for learners who have submitted an answer.
{"submission_uuid": "c359b484-5644-11e5-a166-0a4a2062d211", "no_peers": false}
For open response assessment problems,
"no_peers": false indicates that the
learner has completed at least one peer assessment, while
"no_peers": true
indicates that no peer assessments have been submitted.
For certain types of problems in your course, you can download a .csv file with data about the distribution of learner answers. Student answer distribution data is included in the file for problems of these types.
- Checkboxes (<choiceresponse>)
- Dropdown (<optionresponse>)
- Multiple choice (<multiplechoiceresponse>)
- Numerical input (<numericalresponse>)
- Text input (<stringresponse>)
- Math expression input (<formularesponse>)
The file includes a row for each problem-answer combination selected by your learners. For example, for a problem that has a total of five possible answers the file includes up to five rows, one for each answer selected by at least one learner. For problems with Randomization enabled in Studio (sometimes called rerandomization), there is one row for each problem-variant-answer combination selected by your learners. For more information, see Defining Settings for Problem Components.
Note
Certain types of problems can be set up to award partial credit. When a learner receives either the full or a partial score for a problem, this report includes that answer as correct.
A link to the most recently updated version of the .csv file is available on the Instructor Dashboard.
To download the most recent file of learner answer data, follow these steps.
Click the file name to download the {course_id}_answer_distribution.csv file. The report covers problems that at least one learner has answered since early March 2014; for those problems, it only includes activity that occurred after October 2013.
Why don’t I see an AnswerValue for some of my problems?
For checkboxes and multiple choice problems, the answer choices actually selected by a learner after early March 2014 display as described in the previous answer. Answer choices selected by at least one learner question text that you identified for the problem with the accessible label formatting. If you did not identify question text for the problem, you will not see a question. For more information about how to set up accessible labels for problems, see The Simple Editor.
Also, for problems that use the Randomization setting in Studio, if a particular answer has not been selected since early March 2014, the Question is blank for that answer.
Reviewing learner responses to assignments can help you evaluate the structure and completeness of your course content and can make learners' common misconceptions easier to identify.
In this example, the Student Answer Distribution report is open in Microsoft Excel. To create a chart that shows how many of your learners chose various answers to a multiple choice question, you move the AnswerValue and Count columns next to each other. After you click and drag to select the report cells that contain the data you want to chart, you select the Charts toolbar and then select mistakes. While most learners in this example selected the correct answer, the number of incorrect answer(s) can guide future changes to the course content. | http://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/open-release-ginkgo.master/student_progress/course_answers.html | 2018-01-16T15:23:18 | CC-MAIN-2018-05 | 1516084886437.0 | [] | edx.readthedocs.io |
JFolder::delete
From Joomla! Documentation
Description
Delete a folder.
public static function delete ($path)
- Returns boolean True on success.
- Defined on line 281 of libraries/joomla/filesystem/folder.php
- Since
- Referenced by
- JInstaller::abort
- JCacheStorageCachelite::clean
- JInstallerHelper::cleanupInstall
- JInstaller::parseFiles
- JInstaller::removeFiles
- JInstallerTemplate::uninstall
- JInstallerPlugin::uninstall
- JInstallerLibrary::uninstall
- JInstallerComponent::uninstall
- JInstallerModule::uninstall
- JInstallerLanguage::uninstall
- JInstallerFile::uninstall
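A minimal usage sketch (the path below is hypothetical, and on Joomla 1.7-era code the filesystem package may need to be imported first):

jimport('joomla.filesystem.folder');

$tmpDir = JPATH_SITE . '/tmp/old_extension_files';  // hypothetical folder
if (JFolder::exists($tmpDir) && JFolder::delete($tmpDir))
{
    echo 'Folder removed';
}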
See also
JFolder::delete source code on BitBucket
Class JFolder
Subpackage Filesystem
- Other versions of JFolder::delete
JFormField::setup
From Joomla! Documentation
GSOC 2013 Project Ideas
From Joomla! Documentation
Contents
- 1 Welcome!
- 2 Ideas
- 2.1 Joomla CMS
- 2.1.1 Project: Build New Media Manager
- Brief Explanation: The current media manager is outdated and limited. Build a new media manager. The manager should also include ACL permissions, programmatic implementations and configurations, and be easily reusable in other Joomla extensions.
- Expected Results: A new Media Manager component to improve the usability, functionality, and reusability of the extension.
- Knowledge Prerequisite: Joomla Platform, PHP, MySQL, Javascript
- Expected Results: Hathor is beautiful.
- Knowledge Prerequisite: CSS
Declarative Configuration using ZCML¶
The imperative mode of configuration detailed elsewhere in the documentation has a declarative counterpart; a complete listing of the available declarations can be found in ZCML Directives. This chapter provides an overview of how you might get started with ZCML and highlights some common tasks performed when you use ZCML.
ZCML Configuration¶
A Pyramid application can be configured “declaratively”, if so desired. Declarative configuration relies on declarations made external to the code in a configuration file format named ZCML (Zope Configuration Markup Language), an XML dialect.
A Pyramid application loads these declarations by calling the pyramid_zcml.load_zcml() method on its configurator, so that the startup code reads as shown in the Hello World walkthrough below. Everything else is much the same. The config.include('pyramid_zcml') line makes the load_zcml method available on the configurator, and each declaration in the loaded ZCML file then invokes a configurator method, such as pyramid.config.Configurator.add_view(), on your behalf.
The <view> tag is an example of a Pyramid declaration tag. Other such tags include <route> and <scan>. Each of these tags is effectively a "macro" which calls methods of a pyramid.config.Configurator object.
ZCML Conflict Detection¶
Hello World, Goodbye World (Declarative)¶
Another almost entirely equivalent mode of application configuration exists named declarative configuration. Pyramid can be configured for the same “hello world” application “declaratively”, if so desired.
To do so, first, create a file named helloworld.py:
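The listing itself did not survive extraction; the sketch below reconstructs it from the descriptions in this chapter (two view callables plus a __main__ block that loads configure.zcml), so treat the exact details as an approximation rather than the canonical example.

    from paste.httpserver import serve
    from pyramid.config import Configurator
    from pyramid.response import Response


    def hello_world(request):
        return Response('Hello world!')


    def goodbye_world(request):
        return Response('Goodbye world!')


    if __name__ == '__main__':
        config = Configurator()
        config.include('pyramid_zcml')
        config.load_zcml('configure.zcml')
        app = config.make_wsgi_app()
        serve(app, host='0.0.0.0')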
Then create a file named configure.zcml in the same directory as the previously created helloworld.py:
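Again the original listing is missing; based on the tag-by-tag walkthrough that follows, it would look roughly like this:

    <configure xmlns="http://pylonshq.com/pyramid">

      <include package="pyramid_zcml" />

      <view view="helloworld.hello_world" />

      <view name="goodbye" view="helloworld.goodbye_world" />

    </configure>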
This pair of files forms an application functionally equivalent to the application we created earlier in Hello World. We can run it the same way.
$ python helloworld.py
serving on 0.0.0.0:8080 view at http://0.0.0.0:8080
Let's examine the differences between the code in that section and the code above. In Application Configuration, we had the following lines within the if __name__ == '__main__' section of helloworld.py:
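That imperative block (reconstructed here as a sketch, since the original listing was lost) looked roughly like:

    if __name__ == '__main__':
        config = Configurator()
        config.add_view(hello_world)
        config.add_view(goodbye_world, name='goodbye')
        app = config.make_wsgi_app()
        serve(app, host='0.0.0.0')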
In our "declarative" code, we've added a call to the pyramid_zcml.load_zcml() method with the value configure.zcml, and we've removed the lines which read config.add_view(hello_world) and config.add_view(goodbye_world, name='goodbye'), so that it now reads as:
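Roughly (again a reconstruction, not the original listing):

    if __name__ == '__main__':
        config = Configurator()
        config.include('pyramid_zcml')
        config.load_zcml('configure.zcml')
        app = config.make_wsgi_app()
        serve(app, host='0.0.0.0')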
Everything else is much the same.
The config.load_zcml('configure.zcml') line tells the configurator to load configuration declarations from the configure.zcml file which sits next to helloworld.py. Let's take a look at the configure.zcml file now:
We already understand what the view code does, because the application is functionally equivalent to the application described in Hello World, but use of ZCML is new. Let’s break that down tag-by-tag.
The <configure> Tag¶
The configure.zcml ZCML file contains this bit of XML:
Because ZCML is XML, and because XML requires a single root tag for each document, every ZCML file used by Pyramid must contain a configure container directive, which acts as the root XML tag. It is a "container" directive because its only job is to contain other directives.
See also configure and A Word On XML Namespaces.
The <include> Tag¶
The configure.zcml ZCML file contains this bit of XML within the <configure> root tag:
This self-closing tag instructs Pyramid to load a ZCML file from the Python package with the dotted Python name pyramid_zcml, as specified by its package attribute.
This particular <include> declaration is required because it actually allows subsequent declaration tags (such as <view>, which we'll see shortly) to be recognized. The <include> tag effectively just includes another ZCML file, causing its declarations to be executed. In this case, we want to load the declarations from the file named configure.zcml within the pyramid_zcml Python package. We know we want to load the configure.zcml from this package because configure.zcml is the default value for another attribute of the <include> tag named file. We could have spelled the include tag more verbosely, but equivalently as:
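That is, the short form shown earlier is equivalent to spelling out the default file attribute explicitly:

    <include package="pyramid_zcml" file="configure.zcml" />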
The <include> tag that includes the ZCML statements implied by the configure.zcml file from the Python package named pyramid_zcml is basically required to come before any other named declaration in an application's configure.zcml. If it is not included, subsequent declaration tags will fail to be recognized, and the configuration system will generate an error at startup. However, the <include package="pyramid_zcml"/> tag needs to exist only in a "top-level" ZCML file; it needn't also exist in ZCML files included by a top-level ZCML file.
The <view> Tag¶
The configure.zcml ZCML file contains these bits of XML after the <include> tag, but within the <configure> root tag:
These <view> declaration tags direct Pyramid to create two view configuration registrations. The first <view> tag has an attribute (the attribute is also named view) which points at a dotted Python name, referencing the hello_world function defined within the helloworld package. The second <view> tag has a view attribute which points at a dotted Python name, referencing the goodbye_world function defined within the helloworld package. The second <view> tag also has an attribute called name with a value of goodbye.
The effect of the <view> tag declarations we've put into our configure.zcml is functionally equivalent to the effect of lines we've already seen in an imperatively-configured application. We're just spelling things differently, using XML instead of Python.
In our previously defined application, in which we added view configurations imperatively, we saw this code:
Each <view> declaration tag encountered in a ZCML file effectively invokes the pyramid.config.Configurator.add_view() method on the behalf of the developer. Various attributes can be specified on the <view> tag which influence the view configuration it creates.
Since the relative ordering of calls to pyramid.config.Configurator.add_view() doesn't matter (see the sidebar entitled View Dispatch and Ordering within Adding Configuration), the relative order of <view> tags in ZCML doesn't matter either. The following ZCML orderings are completely equivalent:
Hello Before Goodbye
Goodbye Before Hello
We’ve now configured a Pyramid helloworld application declaratively. More information about this mode of configuration is available in ZCML Configuration.
ZCML Granularity¶
Scanning via ZCML¶
ZCML can invoke a scan via its <scan> directive. If a ZCML file is processed that contains a scan directive, the package the ZCML file points to is scanned.
Which Mode Should I Use?¶
A combination of imperative configuration, declarative configuration via ZCML and scanning can be used to configure any application. They are not mutually exclusive.
Declarative configuration was the more traditional form of configuration used in Pyramid applications; the first releases of Pyramid and all releases of Pyramid's predecessor named repoze.bfg included ZCML in the core. However, by virtue of this package, it has been externalized from the Pyramid core because it has proven that imperative mode configuration can be simpler to understand and document.
However, you can choose to use imperative configuration, or declarative configuration via ZCML. Use the mode that best fits your brain as necessary.
View Configuration Via ZCML¶
You may associate a view with a URL by adding view declarations via ZCML in a configure.zcml file. An example of a view declaration in ZCML is as follows:
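The example listing is missing from this copy; based on the explanation that follows, it would look roughly like this:

    <view
       context=".resources.Hello"
       view=".views.hello_world"
       name="hello.html"
     />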
The above maps the .views.hello_world view callable function to the following set of resource location results:
- A context object which is an instance (or subclass) of the Python class represented by .resources.Hello
- A view name equalling hello.html.
Note
Values prefixed with a period (.) for the context and view attributes of a view declaration (such as those above) mean "relative to the Python package directory in which this ZCML file is stored". So if the above view declaration was made inside a configure.zcml file that lived in the hello package, you could replace the relative .resources.Hello with the absolute hello.resources.Hello; likewise you could replace the relative .views.hello_world with the absolute hello.views.hello_world. Either the relative or absolute form is functionally equivalent. It's often useful to use the relative form, in case your package's name changes. It's also shorter to type.
You can also declare a default view callable for a resource type:
A default view callable simply has no name attribute. For the above registration, when a context is found that is of the type .resources.Hello and there is no view name associated with the result of resource location, the default view callable will be used. In this case, it's the view at .views.hello_world.
A default view callable can alternately be defined by using the empty string as its name attribute:
You may also declare that a view callable is good for any context type by using the special * character as the value of the context attribute:
This indicates that when Pyramid identifies that the view name is hello.html and the context is of any type, the .views.hello_world view callable will be invoked.
A ZCML view declaration's view attribute can also name a class. In this case, the rules described in Defining a View Callable as a Class apply for the class which is named.
See view for complete ZCML directive documentation.
Configuring a Route via ZCML¶
Instead of using the imperative pyramid.config.Configurator.add_route() method to add a new route, you can alternately use ZCML route statements in a ZCML file. For example, the following ZCML declaration causes a route to be added to the application.
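The declaration itself is missing from this copy; a hedged reconstruction, using a made-up route name and pattern, would look something like:

    <route
       name="myroute"
       pattern="/prefix/{one}/{two}"
       view=".views.myview"
     />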
Note
Values prefixed with a period (.) within the values of ZCML attributes such as the view attribute of a route mean "relative to the Python package directory in which this ZCML file is stored". So if the above route declaration was made inside a configure.zcml file that lived in the hello package, you could replace the relative .views.myview with the absolute hello.views.myview. Either the relative or absolute form is functionally equivalent. It's often useful to use the relative form, in case your package's name changes. It's also shorter to type.
The order that routes are evaluated when declarative configuration is used is the order that they appear relative to each other in the ZCML file.
See route for full route ZCML directive documentation.
Serving Static Assets Using ZCML¶
Use of the
static ZCML directive makes static assets available at a name
relative to the application root URL, e.g.
/static.
Note that the
path provided to the
static ZCML directive may be a
fully qualified asset specification, a package-relative path, or
an absolute path. The
path with the value
a/b/c/static of a
static directive in a ZCML file that resides in the “mypackage” package
will resolve to a package-qualified asset such as
some_package:a/b/c/static.
Here’s an example of a
static ZCML directive that will serve files
up under the
/static URL from the
/var/www/static directory of
the computer which runs the Pyramid application using an
absolute path.
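A sketch using that name and absolute path:

<static
   name="static"
   path="/var/www/static"
   />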
Here’s an example of a
static directive that will serve files up
under the
/static URL from the
a/b/c/static directory of the
Python package named
some_package using a fully qualified
asset specification.
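A sketch using a fully qualified asset specification for the path:

<static
   name="static"
   path="some_package:a/b/c/static"
   />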
Here’s an example of a
static directive that will serve files up
under the
/static URL from the
static directory of the Python
package in which the
configure.zcml file lives using a
package-relative path.
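A sketch using a package-relative path:

<static
   name="static"
   path="static"
   />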
Whether you use a fully qualified asset specification, an absolute path, or a package-relative path as the
path, when you place your static files on the filesystem in the directory represented as the
path of the directive, you will then be able to view the static
files in this directory via a browser at URLs prefixed with the directive's
name. For instance, if the static directive's
name is
static and its path is
/path/to/static, a request for /static/dir/foo.js will
return the file
/path/to/static/dir/foo.js. The static directory
may contain subdirectories recursively, and any subdirectories may
hold files; these will be resolved by the static view as you would
expect.
While the
path argument can be a number of different things, the
name argument of the
static ZCML directive can also be one of
a number of things: a view name or a URL. The above examples have
shown usage of the
name argument as a view name. When
name is
a URL (or any string with a slash (
/) in it), static assets
can be served from an external webserver. In this mode, the
name
is used as the URL prefix when generating a URL using
pyramid.url.static_url().
For example, the
static ZCML directive may be fed a
name
argument which is:
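For instance, a name that is a URL on the external host mentioned below (the path value here is a placeholder):

<static
   name="http://example.com/images"
   path="mypackage:images"
   />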
Because the
static ZCML directive is provided with a
name argument
that is the URL prefix, subsequent calls to
pyramid.url.static_url() with paths that start with the
path
argument passed to
pyramid.url.static_url() will generate a URL
that begins with that external URL prefix. The external webserver
listening on
example.com must be itself configured to respond properly to
such a request. The
pyramid.url.static_url() API is discussed in more
detail later in this chapter.
The
pyramid.config.Configurator.add_static_view() method offers
an imperative equivalent to the
static ZCML directive. Use of the
add_static_view imperative configuration method is completely equivalent
to using ZCML for the same purpose. See Serving Static Assets for
more information.
The
asset ZCML Directive¶
Instead of using
pyramid.config.Configurator.override_asset() during
imperative configuration, an equivalent ZCML directive can be used.
The ZCML
asset tag is a frontend to using
pyramid.config.Configurator.override_asset().
An individual Pyramid
asset ZCML statement can override a
single asset. For example:
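A sketch of a single-asset override; the package and template names are placeholders:

<asset
   to_override="some.package:templates/mytemplate.pt"
   override_with="another.package:othertemplates/anothertemplate.pt"
   />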
The string value passed to both
to_override and
override_with
attached to an
asset directive is called an asset specification. When you override an asset directory with another directory, you must
make sure to attach a slash to the end of both the
to_override
specification and the
override_with specification. If you fail to attach
a slash to the end of an asset specification that points to a directory, you
will get unexpected results.
The package name in an asset specification may start with a dot, meaning that the package is relative to the package in which the ZCML file resides. For example:
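A sketch with a dot-prefixed, relative package in the override_with specification (names again placeholders):

<asset
   to_override="some.package:templates/mytemplate.pt"
   override_with=".subpackage:templates/anothertemplate.pt"
   />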
Built-In Authentication Policy ZCML Directives¶
Instead of configuring an authentication policy and authorization policy imperatively, Pyramid ships with a few “pre-chewed” authentication policy ZCML directives that you can make use of within your application.
authtktauthenticationpolicy¶
When this directive is used, authentication information is obtained from an “auth ticket” cookie value, assumed to be set by a custom login form.
An example of its usage, with all attributes fully expanded:
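A sketch of the directive; the particular attribute set and values shown are assumptions, with only the secret being required in practice:

<authtktauthenticationpolicy
   secret="sosecret"
   callback=".security.groupfinder"
   cookie_name="auth_tkt"
   secure="false"
   include_ip="false"
   timeout="86400"
   reissue_time="600"
   max_age="86400"
   path="/"
   http_only="false"
   />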
See authtktauthenticationpolicy for details about this directive.
remoteuserauthenticationpolicy¶
When this directive is used, authentication information is obtained
from a
REMOTE_USER key in the WSGI environment, assumed to
be set by a WSGI server or an upstream middleware component.
An example of its usage, with all attributes fully expanded:
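A sketch; the callback dotted name is a placeholder, and environ_key matches the WSGI key mentioned above:

<remoteuserauthenticationpolicy
   environ_key="REMOTE_USER"
   callback=".security.groupfinder"
   />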
See remoteuserauthenticationpolicy for detailed information.
repozewho1authenticationpolicy¶
When this directive is used, authentication information is obtained
from a
repoze.who.identity key in the WSGI environment, assumed to
be set by repoze.who middleware.
An example of its usage, with all attributes fully expanded:
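A sketch; both attribute values are placeholders:

<repozewho1authenticationpolicy
   identifier_name="auth_tkt"
   callback=".security.groupfinder"
   />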
See repozewho1authenticationpolicy for detailed information.
Adding and Changing Renderers via ZCML¶
New templating systems and serializers can be associated with Pyramid renderer names. To this end, configuration declarations can be made which change an existing renderer factory and which add a new renderer factory.
Adding or changing an existing renderer via ZCML is accomplished via the renderer ZCML directive.
For example, to add a renderer which renders views which have a
renderer attribute that is a path that ends in
.jinja2:
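A sketch of such a registration; the factory dotted name is a placeholder:

<renderer
   name=".jinja2"
   factory="my.package.MyJinja2Renderer"
   />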
The
factory attribute is a dotted Python name that must
point to an implementation of a renderer factory.
The
name attribute is the renderer name.
Registering a Renderer Factory¶
See Adding a New Renderer for more information about the definition of a renderer factory. Here's an example of the registration of a simple renderer factory via ZCML:
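Using the factory dotted name and renderer name referenced just below:

<renderer
   name="amf"
   factory="my.package.MyAMFRenderer"
   />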
Adding the above ZCML to your application will allow you to use the
my.package.MyAMFRenderer renderer factory implementation in view
configurations by subsequently referring to it as
amf in the
renderer
attribute of a view configuration:
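A sketch of a view configuration that refers to it, assuming decorator-based view configuration:

from pyramid.view import view_config

@view_config(renderer='amf')
def myview(request):
    # The renderer receives this dictionary and serializes it.
    return {'Hello': 'world'}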
Here’s an example of the registration of a more complicated renderer factory, which expects to be passed a filesystem path:
Adding the above ZCML to your application will allow you to use the
my.package.MyJinja2Renderer renderer factory implementation in
view configurations by referring to any
renderer which ends in
.jinja in the
renderer attribute of a view
configuration:
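For instance, again assuming decorator-based view configuration:

from pyramid.view import view_config

@view_config(renderer='templates/mytemplate.jinja2')
def myview(request):
    # Passed to the template as its rendering context.
    return {'Hello': 'world'}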
When a view configuration which has a renderer attribute that contains a dot, such as
templates/mytemplate.jinja2 above, is encountered at startup time, a separate instance of the
Jinja2Renderer is created for each view configuration which includes anything ending
with
.jinja2 as its
renderer value. The
name passed to the
Jinja2Renderer constructor will be whatever the user passed as
renderer= to the view configuration.
See also renderer and
pyramid.config.Configurator.add_renderer(). To change the default mapping in which files with a .pt extension are rendered via a Chameleon ZPT renderer, use a variation on the following in your application's ZCML:
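A sketch of such an override, using the factory named below:

<renderer
   name=".pt"
   factory="my.package.pt_renderer"
   />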
After you do this, the renderer factory in
my.package.pt_renderer will be used to render templates which end
in
.pt, replacing the default Chameleon ZPT renderer.
To change the default mapping in which files with a
.txt
extension are rendered via a Chameleon text template renderer, use a
variation on the following in your application’s ZCML:
After you do this, the renderer factory in
my.package.text_renderer will be used to render templates which
end in
.txt, replacing the default Chameleon text renderer.
To associate a default renderer with all view configurations (even
ones which do not possess a
renderer attribute), use a variation
on the following (ie. omit the
name attribute to the renderer
tag):
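A sketch with the name attribute omitted; the factory dotted name is a placeholder:

<renderer
   factory="my.package.default_renderer_factory"
   />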
See also renderer and
pyramid.config.Configurator.add_renderer().
Adding a Translation Directory via ZCML¶
You can add a translation directory via ZCML by using the translationdir ZCML directive:
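A sketch; the directory asset specification is a placeholder:

<translationdir dir="my.application:locale/"/>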
A message catalog in a translation directory added via translationdir will be merged into translations from a message catalog added earlier if both translation directories contain translations for the same locale and translation domain.
See also translationdir and Adding a Translation Directory.
Adding a Custom Locale Negotiator via ZCML¶
You can add a custom locale negotiator via ZCML by using the localenegotiator ZCML directive:
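A sketch; the negotiator dotted name is a placeholder:

<localenegotiator
   negotiator="my_application.my_module.my_locale_negotiator"
   />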
See also Using a Custom Locale Negotiator and localenegotiator.
Configuring an Event Listener via ZCML¶
You can configure a subscriber by modifying your application's
configure.zcml. Here’s an example of a bit of XML you can add to the
configure.zcml file which registers the above
mysubscriber function,
which we assume lives in a
subscribers.py module within your application:
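A sketch that wires that function to an event; the choice of the NewRequest event here is an assumption:

<subscriber
   for="pyramid.events.NewRequest"
   handler=".subscribers.mysubscriber"
   />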
See also subscriber and Using Events.
Configuring a Not Found View via ZCML¶
If your application uses ZCML, you can replace the Not Found view by
placing something like the following ZCML in your
configure.zcml file.
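A sketch; the context value names the not-found exception class and should be checked against the Pyramid version in use:

<view
   view="helloworld.views.notfound_view"
   context="pyramid.exceptions.NotFound"
   />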
Replace
helloworld.views.notfound_view with the Python dotted name to the
notfound view you want to use.
See Changing the Not Found View for more information.
Configuring a Forbidden View via ZCML¶
If your application uses ZCML, you can replace the Forbidden view by
placing something like the following ZCML in your
configure.zcml file.
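A sketch, analogous to the Not Found registration:

<view
   view="helloworld.views.forbidden_view"
   context="pyramid.exceptions.Forbidden"
   />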
Replace
helloworld.views.forbidden_view with the Python dotted name to
the forbidden view you want to use.
See Changing the Forbidden View for more information.
Configuring an Alternate Traverser via ZCML¶
Use an
adapter stanza in your application’s
configure.zcml to
change the default traverser:
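A sketch; the traverser factory dotted name is a placeholder:

<adapter
   factory="myapp.traversal.Traverser"
   provides="pyramid.interfaces.ITraverser"
   for="*"
   />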
Or to register a traverser for a specific resource type:
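The same sketch restricted to a particular resource class (also a placeholder):

<adapter
   factory="myapp.traversal.Traverser"
   provides="pyramid.interfaces.ITraverser"
   for="myapp.resources.MyRoot"
   />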
See Changing the Traverser for more information.
Using features to make ZCML configurable¶
Using features you can make ZCML somewhat configurable. That is, you
can exclude or include parts of a ZCML configuration using the
features argument to
pyramid_zcml.load_zcml(). For example:
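A sketch of a feature-conditional configuration; the feature name alternate and the zcml:condition spelling are assumptions:

<configure xmlns="http://pylonshq.com/pyramid"
           xmlns:

   <!-- Assumed to be loaded with: load_zcml('configure.zcml', features=['alternate']) -->

   <view name="always_configured" view=".views.always_configured"/>

   <view name="hello_world" view=".views.hello_world"
         zcml:

   <view name="alternate_hello_world" view=".views.alternate_hello_world"
         zcml:

</configure>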
Will configure the views
always_configured and
alternate_hello_world
but NOT
hello_world.
Changing
resource_url URL Generation via ZCML¶
You can change how
pyramid.url.resource_url() generates a URL for a
specific type of resource by adding an adapter statement to your
configure.zcml.
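A sketch; the factory dotted name is a placeholder and the provided interface is an assumption tied to the Pyramid version this document targets:

<adapter
   factory="my.package.MyContextURL"
   provides="pyramid.interfaces.IContextURL"
   for="* *"
   />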
See Changing How pyramid.request.Request.resource_url() Generates a URL for more information.
Changing the Request Factory via ZCML¶
A
MyRequest class can be registered via ZCML as a request factory through
the use of the ZCML
utility directive. In the below, we assume it lives
in a package named
mypackage.mymodule.
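A sketch of that utility registration; the provided interface name is an assumption:

<utility
   component="mypackage.mymodule.MyRequest"
   provides="pyramid.interfaces.IRequestFactory"
   />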
See Changing the Request Factory for more information.
Changing the Renderer Globals Factory via ZCML¶
A renderer globals factory can be registered via ZCML through the use of
the ZCML
utility directive. In the below, we assume a
renderers_globals_factory function lives in a package named
mypackage.mymodule.
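A sketch, analogous to the request factory registration above; the provided interface name is an assumption:

<utility
   component="mypackage.mymodule.renderers_globals_factory"
   provides="pyramid.interfaces.IRendererGlobalsFactory"
   />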
See adding_renderer_globals for more information.
Using Broken ZCML Directives¶
Some Zope and third-party ZCML directives use the
zope.component.getGlobalSiteManager API to get “the registry” when
they should actually be calling
zope.component.getSiteManager.
zope.component.getSiteManager can be overridden by Pyramid via
pyramid.config.Configurator.hook_zca(), while
zope.component.getGlobalSiteManager cannot. Directives that use
zope.component.getGlobalSiteManager are effectively broken; no ZCML
directive should be using this function to find a registry to populate.
You cannot use ZCML directives which use
zope.component.getGlobalSiteManager within a Pyramid application without
passing the ZCA global registry to the Configurator constructor at
application startup, as per Enabling the ZCA Global API by Using The ZCA Global Registry.
One alternative exists: fix the ZCML directive to use
getSiteManager rather than
getGlobalSiteManager. If a
directive disuses
getGlobalSiteManager, the
hook_zca method of
using a component registry as documented in Enabling the ZCA Global API by Using hook_zca will begin
to work, allowing you to make use of the ZCML directive without
also using the ZCA global registry. | http://docs.pylonsproject.org/projects/pyramid-zcml/en/latest/narr.html | 2015-11-25T08:11:41 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.pylonsproject.org |
Projects
From Joomla! Documentation
- Joomla! Help Screens Project
- This team is currently compiling the help screens for Joomla 2.5. These are being served from this wiki and you can see a summary page showing the current status of each page here. For further information see the call for help announcement.
JDatabaseMySQL::test
From Joomla! Documentation
Description
Test to see if the MySQL connector is available.
public static function test ()
- Returns bool True on success, false otherwise.
- Defined on line 129 of libraries/joomla/database/database/mysql.php
- Since
See also
JDatabaseMySQL::test source code on BitBucket
Class JDatabaseMySQL
Subpackage Database
- Other versions of JDatabaseMySQL::test
Graduate Catalog
Graduate School of Education and Counseling
General Information
About the Graduate School | Admissions | Degrees Offered | Tuition and Fees | Policies and Procedures | Student Resources | Faculty and Staff
Counseling Psychology
Department Home
+ Courses
|
Marriage, Couple and Family Therapy | Professional Mental Health Counseling | Professional Mental Health Counseling—Addictions | Psychological and Cultural Studies | School Psychology | Ecopsychology in Counseling Certificate |
Educational Leadership
Department Home
+ Courses
|
Doctor of Education in Leadership | Educational Administration | School Counseling
Teacher Education
Department Home
+ Courses
|
Early Childhood/Elementary | Middle-Level/High School | Educational Studies |
Curriculum and Instruction | ESOL/Bilingual | Language and Literacy | Special Education
Special Programs and Continuing Education
Center for Community Engagement
|
Core Program
|
Certificate in Documentary Studies | Certificate in the Teaching of Writing | matriculated in Lewis & Clark College at the time. The contents of this catalog are based on information available to the administration at the time of publication. | http://docs.lclark.edu/catalog/archive/2011-2012/graduate/ | 2015-11-25T08:09:41 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.lclark.edu |
Difference between revisions of "JCli::getInstance"::getInstance
Description
Returns a reference to the global object, only creating it if it doesn't already exist.
public & function getInstance ()
- Returns
- Defined on line 86 of libraries/joomla/application/cli.php
See also
JCli::getInstance source code on BitBucket
Class JCli
Subpackage Application
- Other versions of JCli::getInstance
Information for "Plugin Development" Basic information Display titlePortal:Plugin Development Default sort keyPlugin Development Page length (in bytes)1,249 Page ID10760 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsDisallowed Number of redirects to this page1 Number of subpages of this page34 (0 redirects; 34 non-redirects) Page protection EditAllow all users MoveAllow all users Edit history Page creatorBatch1211 (Talk | contribs) Date of page creation08:16, 17 September 2010 Latest editorMATsxm (Talk | contribs) Date of latest edit08:00, 3 June 2015 Total number of edits39 Total number of distinct authors7 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Magic words (2)__NOINDEX____NOTOC__ Transcluded templates (14)Templates used on this page: Template:- (view source) Template:AmboxNew (view source) (protected)Template:Edit (view source) Template:Icon (view source) Template:JVer (view source) (semi-protected)Template:Section portal heading (view source) Template:Tip (view source) Template:Top portal heading (view source) Portal:Plugin Development/Intro/en (view source) Portal:Plugin Development/Projects/en (view source) Portal:Plugin Development/Reading list/en (view source) Portal:Plugin Development/Tutorials/en (view source) Portal:Plugin Development/Using Plugins/en (view source) Chunk:Plugin/en (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Portal:Plugin_Development&action=info | 2015-11-25T09:16:18 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Difference between revisions of "JBuffer/stream read"
From Joomla! Documentation
< API15:JBuffer
Latest revision as of 07:46,
stream /> | https://docs.joomla.org/index.php?title=API15:JBuffer/stream_read&diff=97335&oldid=24001 | 2015-11-25T08:45:40 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Information for "Access Control List Tutorial" Basic information Display titleJ3.x:Access Control List Tutorial Default sort keyAccess Control List Tutorial Page length (in bytes)57,646 Page ID26564 creatorTom Hutchison (Talk | contribs) Date of page creation09:05, 28 November 2012 Latest editorTopazgb (Talk | contribs) Date of latest edit14:32, 2 September 2015 Total number of edits44 Total number of distinct authors5 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (9)Templates used on this page: Template:- (view source) Template:CurrentSTSVer (view source) (protected)Template:JVer (view source) (semi-protected)Template:Joomla version (view source) Template:Joomla version/layout (view source) Template:Ns (view source) Template:Translation language (view source) Template:Version-msg-latest-tooltip/en (view source) J3.x:Access Control List/en (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=J3.3:Access_Control_List_Tutorial&action=info | 2015-11-25T09:12:57 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Revision history of "JCategoryNode::getSibling/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 14:19, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JCategoryNode::getSibling/11.1 to API17:JCategoryNode::getSibling without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JCategoryNode::getSibling/11.1&action=history | 2015-11-25T08:59:40 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Information for "Subpackage Html" Basic information Display titleAPI17:Subpackage Html Default sort keySubpackage Html Page length (in bytes)2,664 Page ID21109:35, 20 April 2011 Latest editorJoomlaWikiBot (Talk | contribs) Date of latest edit16:33, 11 May 2013 Total number of edits4 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=API17:Subpackage_Html&action=info | 2015-11-25T08:18:18 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Information for "Components Banners Categories" Basic information Display titleHelp16:Components Banners Categories Default sort keyComponents Banners Categories Page length (in bytes)7,395 Page ID10015:05, 27 April 2010 Latest editorJoomlaWikiBot (Talk | contribs) Date of latest edit17:37, 28 April 2013 Total number of edits26 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Help16:Components_Banners_Categories&action=info | 2015-11-25T10:04:15 | CC-MAIN-2015-48 | 1448398445033.85 | [] | docs.joomla.org |
Frame(XafApplication, TemplateContext, Controller[]) Constructor
Namespace: DevExpress.ExpressApp
Assembly: DevExpress.ExpressApp.v22.1.dll
Declaration
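A sketch of the signature implied by the member name above; the parameter names are assumptions:

public Frame(XafApplication application, TemplateContext context, params Controller[] controllers)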
Parameters
Remarks
The constructor is used to create and initialize a new Frame object. It sets the Frame.Application and Frame.Context properties and populates the Frame.Controllers collection. Controllers in the ViewController class’ inheritance are activated.
Basically, you do not need to create Frames. They are created automatically. You may need to do it manually if you only develop a specific Property Editor or List Editor.
It is recommended that you create a new Frame via the XafApplication.CreateFrame method, rather than using a constructor. This is the common approach for all XAF elements. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Frame.-ctor(DevExpress.ExpressApp.XafApplication-DevExpress.ExpressApp.TemplateContext-DevExpress.ExpressApp.Controller--) | 2022-08-08T08:12:53 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.devexpress.com |
<process>
Syntax
<process request="MyApp.Request" response="MyApp.Response"> <context> ... </context> <sequence> ... </sequence> </process>
Details
Description
The <process> element is the outermost element for a BPL document. All the other BPL elements are contained within a <process> element.
A business process consists of an execution context (defined by the <context> element) and a sequence of activities (defined by the <sequence> element).
The request attribute defines the type (class name) for the business process’s initial request. The response attribute defines the type (class name) for the eventual response from the business process. The request attribute is required, but the response attribute is optional, since the business process might not return a response.
Execution Context
The life cycle of a business process requires it to have certain state information saved to disk and restored from disk, whenever the business process suspends or resumes execution. A BPL business process supports the business process life cycle with a group of variables known as the execution context.
The execution context variables include the objects called context, request, response, callrequest, callresponse and process; the integer value synctimedout; the collection syncresponses; and the %StatusOpens in a new tab value status. Each variable has a specific purpose, as described in documentation for the <assign>, <call>, <code>, and <sync> elements.
Example
The following sample business process provides a <sync> element to synchronize several <call> elements. Further activities within the <process> element are replaced by ellipses (...) near the end of the example:
<process request="Demo.Loan.Msg.Application"> <context> <property name="BankName" type="%String"/> <property name="IsApproved" type="%Boolean"/> <property name="InterestRate" type="%Numeric"/> <property name="TheResults".TheResults".TheResults".TheResults" value="callresponse" action="append"/> </response> </call> <sync name='Wait for Banks' calls="BankUS,BankSoprano,BankManana" type="all" timeout="5"> <annotation> <![CDATA[Wait for responses. Wait up to 5 seconds.]]> </annotation> </sync> <trace value='"sync complete"'/> ... </sequence> </process>
Replies
The primary response from a business process is the response it returns to the request that originally invoked the specific business process instance. Normally, the business process returns this primary response when it completes execution.
Language
The <process> element defines the scripting language used by a business process by providing a value for the language attribute: The value should be "objectscript". Any expressions found in the business process, as well as lines of code within <code> elements, must use the specified language.
Versioning
Developers can update the version number for a BPL business process to indicate that its new functionality is incompatible with previous versions. A higher number indicates later versions. There is no automatic versioning of BPL business processes. A developer manually updates the value of the version attribute within the BPL <process> element to highlight the fact that the new code contains changes that are incompatible with previous versions of the same business process. Examples include adding or deleting properties within the business process <context>, or changing the flow of activities within the business process <sequence>.
Prior versions of the same BPL business process that have instances already executing continue to execute their original activities, with their original context. New versions use their own context and their own activities. InterSystems IRIS achieves this by generating new context and thread classes for each version. The version appears as a subpackage in the generated class hierarchy. For example, if you have a class MyBPL, version 3 generates MyBPL.V3.Context and MyBPL.V3.Thread1.
Layout
By default, when a user opens a BPL diagram in the Business Process Designer, the tool displays the diagram using automatic layout arrangements. These automatic choices may or may not be appropriate for a particular drawing. If you suspect that this may be an issue for your diagram, you can disable automatic layout to ensure that your diagram always displays with exactly the layout you want.
The most direct way to control the layout of your diagram is to clear the Auto arrange check box on the Preferences tab.
You can also click the General tab and choose either Automatic or Manual for the Layout. The “manual” selection preserves the exact position of each element each time you save the diagram, so that when the diagram is displayed in the Business Process Designer, it does not take on any layout characteristics except any that you specify.
Problems in scrolling through a business process diagram in the Business Process Designer can be fixed by adjusting the height or width attributes of the <process> element. You can do this using the General tab as for the layout attribute. | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EBPLR_process | 2022-08-08T08:14:34 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.intersystems.com |
Create an installation source that includes software updates server farm deployments, all your Web servers must have the same software update version applied. This means that, before you add a new Web server to an existing server farm, this new Web server must have the same software updates as the rest of the Web servers in your server farm. To accomplish this, we recommend that you follow the procedures in this topic to create an installation source that contains a copy of the released version of the software,.
Note
In this article, we use the term software update as a general term for all update types, including any service pack, update, update rollup, feature pack, critical update, security update, or hotfix used to improve or fix this software product.
Use the updates folder
To create an installation source, you must add software updates to the updates folder of the released version of the software.
To use the updates folder
Copy the files from the released version source media for the product to a folder that you can use as an installation point for the servers in your server farm.
Download the appropriate software update package.
Extract the software update files, by using this command:
<package> /extract: <path>
The /extract switch prompts you to provide a folder name for the files, for example, for x86 systems:
wssv3sp2-kb953338-x86-fullfile-en-us.exe /extract: <C:\WSS> \Updates
<C:\WSS> is the location to which you copied the files that you extracted from the Windows SharePoint Services 3.0 released version.
Note
You must use the default location for the updates folder. If you use the SupdateLocation="path-list" property to specify a different location, Setup stops responding.
Copy the files that you extracted from the Windows SharePoint Services 3.0 software update package to the updates folder you created in the previous step.
Extract the Microsoft Office SharePoint Server 2007 software update files, by using this command:
officeserver2007sp2-kb953334-x86-fullfile-en-us.exe /extract: <C:\rtm_product_path> \Updates
<C:\rtm_product_path> is the location to which you copied the files that you extracted from the Office SharePoint Server 2007 released version.
Copy the files that you extracted from the Office SharePoint Server 2007 software update package to the updates folder containing the source for the released version. You must verify that the Svrsetup.dll file has been copied from the Office SharePoint Server 2007 software update package and you should delete the Wsssetup.dll file.
Important
Delete Wsssetup.dll because it may conflict with Svrsetup.dll. Having both Wsssetup.dll and Svrsetup.dll in the updates folder for a slipstreamed installation source is not supported.
You can now use this location as an installation point, or you can create an image of this source that you can burn to a CD-ROM.
Note
If you extracted the software update files to a location to which you had previously copied the source for a released version, the source is updated and is ready to use.
For more information about using enterprise deployment tools to deploy updates, see the article Distribute product updates for the 2007 Office system.
You can download the 32-bit or 64-bit edition of Service Pack 2 (SP2) for Office SharePoint Server 2007 at the following location:
- The 2007 Microsoft Office Servers Service Pack 2 (SP2) ().
Language template packs
Use the following procedure to create an installation location that you can use to install the language template packs with software updates already applied.
To use the updates folder with language template packs
Download the language template pack package for the released product.
Extract the files from the language template pack package.
Copy the extracted files to a folder that you can use as an installation point for the servers in your server farm.
Download the updated language template pack package for the released product.
Extract the files from the updated language template pack package.
Copy these extracted files to the updates folder, in the subfolder in which you stored the files for the released product in step 3.
You can now use this location as an installation point, or you can create an image of this source that you can burn to a CD-ROM.
To install the language template pack with the software update already applied, run Setup from this location, and then run the SharePoint Products and Technologies Configuration Wizard to complete the configuration. | https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-2007-products-and-technologies/cc261890(v=office.12)?redirectedfrom=MSDN | 2022-08-08T08:15:47 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.microsoft.com |
SD-WAN Configuration Elements
SD-WAN configuration elements interrelate to allow the firewall to select the best path to an SD-WAN.
The elements of an SD-WAN configuration work together, allowing you to:
- Group physical Ethernet interfaces that share a common destination into a logical SD-WAN interface.
- Specify link speeds.
- Specify the thresholds at which a deteriorating path (or brownout or blackout) to an SD-WAN warrants selecting a new best path.
- Specify the method of selecting that new best path.
This view indicates the relationships between elements at a glance.
The goal of an SD-WAN configuration is to control which links your traffic takes by specifying the VPN tunnels or direct internet access (DIA) that certain applications or services take from a branch to a hub or from a branch to the internet. You group paths so that if one path deteriorates, the firewall selects a new best path.
- A Tag name of your choice identifies a link; you apply the Tag to the link (interface) by applying an Interface Profile to the interface, as the red arrow indicates. A link can have only one Tag. The two yellow arrows indicate that a Tag is referenced in the Interface Profile and the Traffic Distribution profile. Tags allow you to control the order that interfaces are used for traffic distribution. Tags allow Panorama to systematically configure many firewall interfaces with SD-WAN functionality.
- An SD-WAN Interface Profile specifies the Tag that you apply to the physical interface, and also specifies the type of Link that interface is (ADSL/DSL, cable modem, Ethernet, fiber, LTE/3G/4G/5G, MPLS, microwave/radio, satellite, WiFi, or other). The Interface Profile is also where you specify the maximum upload and download speeds (in Mbps) of the ISP's connection. You can also change whether the firewall monitors the path frequently or not; the firewall monitors link types appropriately by default.
- A Layer 3 Ethernet Interface with an IPv4 address can support SD-WAN functionalities. You apply an SD-WAN Interface Profile to this interface (red arrow) to indicate the characteristics of the interface. The blue arrow indicates that physical Interfaces are referenced and grouped in a virtual SD-WAN Interface.
- A virtual SD-WAN Interface is a VPN tunnel or DIA group of one or more interfaces that constitute a numbered, virtual SD-WAN Interface to which you can route traffic. The paths belonging to an SD-WAN Interface all go to the same destination WAN and are all the same type (either DIA or VPN tunnel). (Tag A and Tag B indicate that physical interfaces for the virtual interface can have different tags.)
- A Path Quality Profile specifies maximum latency, jitter, and packet loss thresholds. Exceeding a threshold indicates that the path has deteriorated and the firewall needs to select a new path to the target. A sensitivity setting of high, medium, or low lets you indicate to the firewall which path monitoring parameter is more important for the applications to which the profile applies. The green arrow indicates that you reference a Path Quality Profile in one or more SD-WAN Policy Rules; thus, you can specify different thresholds for rules applied to packets having different applications, services, sources, destinations, zones, and users.
- A Traffic Distribution Profile specifies how the firewall determines a new best path if the current preferred path exceeds a path quality threshold. You specify which Tags the distribution method uses to narrow its selection of a new path; hence, the yellow arrow points from Tags to the Traffic Distribution profile. A Traffic Distribution profile specifies the distribution method for the rule.
- The preceding elements come together in SD-WAN Policy Rules. The purple arrow indicates that you reference a Path Quality Profile and a Traffic Distribution profile in a rule, along with packet applications/services, sources, destinations, and users to specifically indicate when and how the firewall performs application-based SD-WAN path selection for a packet not belonging to a session. (You can also reference a SaaS Quality Profile and an Error Correction Profile in an SD-WAN policy rule.)
Now that you understand the relationship between the elements, review the traffic distribution methods and then Plan Your SD-WAN Configuration.
What's the difference between managed and unmanaged hosting?
Multiplay offers managed hosting, unmanaged hosting, and hybrid hosting (which is a custom combination of the two offerings). With managed hosting, Multiplay manages your game, monitors your processes, troubleshoots machine issues, and facilitates your game updates. Conversely, with unmanaged hosting, Multiplay supplies you with the machines only. Multiplay cannot access the machines for troubleshooting or otherwise, but will monitor network performance and outages for you. | https://docs.unity.com/multiplay/overview-mp-topics/what-the-difference-between-managed-and-unmanaged-hosting.html | 2022-08-08T06:36:55 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.unity.com |
Table of Contents
Product Index
Enjoy this beautiful Secret Grove.
My Secret Grove includes a scene with ruins and another with grouped props forming a new beautiful setting.
The wonderful forest of My Secret Grove is ideal for your characters in scenes of fantasy, adventure, ancient civilization, outdoor areas or nature scene. | http://docs.daz3d.com/doku.php/public/read_me/index/72729/start | 2022-08-08T07:21:31 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.daz3d.com |
Gokit3 is the third generation of the Gokit product line, which features the MCU and SoC mode switch. Currently the modules that support the SoC mode include ESP8266 and Hi3518E etc. The modules that support the MCU mode include ESP8266, High-Flying, Mico and Yutone World etc.
The module interface of the function expansion board of Gokit3 adopts the dual-row socket strips design. The single row pin header plug of the module chooses to use either of the two modes of MCU (MCU mode interface) and SoC (SoC mode interface) on the function expansion board of Gokit3 as needed. The function expansion board interface diagram is illustrated in the following figure.
Description:
Gokit3(S) is one of the two variants of Gokit3.
It adopts the separate type design. The Wi-Fi module is only responsible for receiving and transmitting data, which communicates with the MCU via serial port, etc. It needs to perform protocol and peripheral-related development on the MCU.
Summary: The advantage of this scheme is that it is not subject to the limited Wi-Fi module on-chip resources, and has high scalability for applications; the disadvantage is that it needs to adapt to the communication protocol and has high cost.
It adopts the integral type design. It directly connects the peripheral driver modules to the Wi-Fi module and supports programming on the Wi-Fi SoC, eliminating the need for the internal communication.
Summary: The advantage of this scheme is that it can reduce the application development difficulty and the production cost; the disadvantage is that it is subject to the limited Wi-Fi SoC on-chip resources and thus only a few applications can run on it.
In order to empower developers to develop more types of products and applications based on Gokit, our higher performance SoC, BLE and other modules are currently under development. Please stay tuned on the Gizwits website. | http://docs.gizwits.com/en-us/DeviceDev/GoKit/Gokit3Intro.html | 2022-08-08T06:43:36 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.gizwits.com |
PermissionPolicyRole.Users Property
A list of users associated with the current role.
Namespace: DevExpress.Persistent.BaseImpl.PermissionPolicy
Assembly: DevExpress.Persistent.BaseImpl.Xpo.v22.1.dll
Declaration
Property Value
A list of PermissionPolicyUser objects associated with the current role.
Remarks
In eXpressApp Framework applications, permissions are not assigned to a user directly. Users have roles, which in turn are characterized by a permission set. So, each user has one or more roles that determine what actions can be performed.
See Also | https://docs.devexpress.com/eXpressAppFramework/DevExpress.Persistent.BaseImpl.PermissionPolicy.PermissionPolicyRole.Users | 2022-08-08T06:52:58 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.devexpress.com |
Configure a Web Server to Serve Content (IIS 7)
Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Vista
In IIS 7, you can create sites, applications, and virtual directories to share information with users over the Internet, an intranet, or an extranet. Sites, applications, and virtual directories work together in a hierarchical relationship as the basic building blocks for hosting online content.
Briefly, a site contains one or more applications, an application contains one or more virtual directories, and a virtual directory maps to a physical directory on a Web server. Each of these three concepts is discussed in additional detail in the following sections.
This section includes:
Sites
Applications
Virtual Directories
Sites
A site is a container for Web applications, and you can access it through one or more unique bindings. A Web site binding is the combination of an IP address, a port, and the optional host headers on which HTTP.sys listens for requests made to that Web site. For more information about sites, see Managing Sites in IIS 7.
Applications
An application is a software program that runs in an application pool and that delivers Web content, usually in HTML, to users over the HTTP protocol. When you create an application, the application's name becomes part of the site's URL that users can request from a Web browser.
In IIS 7, each site must have an application called the root application, or default application. However, a site can have more than one application. For example, you might have an online commerce site that has several applications, such as a shopping cart application that lets users gather items during shopping and a logon application that lets users to recall saved payment information when they make a purchase.
For more information about applications, see Managing Applications in IIS 7.
Virtual Directories
A virtual directory is a directory name that you specify in IIS and map to a physical directory on a local or remote server. The directory name then becomes part of the application's URL, and users can request the URL from a Web browser to access content in the mapped physical directory. In IIS 7, each application must have a root virtual directory, which maps the application to the physical directory that contains the application's content; an application can also contain additional virtual directories below that root.
For more information about virtual directories, see Managing Virtual Directories in IIS 7. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754437(v=ws.10)?redirectedfrom=MSDN | 2022-08-08T06:46:35 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.microsoft.com |
High-Level Interrupts¶
The Xtensa architecture has support for 32 interrupts, divided over 8 levels, plus an assortment of exceptions. On the ESP32, the interrupt mux allows most interrupt sources to be routed to these interrupts using the interrupt allocator. Normally, interrupts will be written in C, but ESP-IDF allows high-level interrupts to be written in assembly as well, allowing for very low interrupt latencies.
Interrupt Levels¶
Each high-priority interrupt level has an associated handler symbol of the form xt_highintN (for example, xt_highint5 for level 5). Using these symbols is done by creating an assembly file (suffix .S) and defining the named symbols, like this:
    .section .iram1,"ax"
    .global     xt_highint5
    .type       xt_highint5,@function
    .align      4
xt_highint5:
    ... your code here
    rsr     a0, EXCSAVE_5
    rfi     5
For a real-life example, see the :component_file:`esp32/dport_panic_highint_hdl.S` file; the panic handler interrupt is implemented there.
Notes¶
-
Do not call C code from a high-level interrupt; because these interrupts still run in critical sections, this can cause crashes. (The panic handler interrupt does call normal C code, but this is OK because there is no intention of returning to the normal code flow afterwards.)
-
Make sure your assembly code gets linked in. If the interrupt handler symbol is the only symbol the rest of the code uses from this file, the linker will take the default ISR instead and not link the assembly file into the final project. To get around this, in the assembly file, define a symbol, like this:

.global ld_include_my_isr_file
ld_include_my_isr_file:
(The symbol is called
ld_include_my_isr_file here but can have any arbitrary name not defined anywhere else.)
Then, in the component.mk, add this file as an unresolved symbol to the ld command line arguments:
COMPONENT_ADD_LDFLAGS := -u ld_include_my_isr_file
This should cause the linker to always include a file defining
ld_include_my_isr_file, causing the ISR to always be linked in.
-
High-level interrupts can be routed and handled using esp_intr_alloc and associated functions. The handler and handler arguments to esp_intr_alloc must be NULL, however.
-
In theory, medium priority interrupts could also be handled in this way. For now, ESP-IDF does not support this. | https://docs.espressif.com/projects/esp-idf/en/release-v4.2/esp32/api-guides/hlinterrupts.html | 2022-08-08T07:37:49 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.espressif.com |
Parametrized Simulation using CFG files
The Model CFG (
.cfg) files allow you to pass command line options to Feel++ applications. In particular, they allow you to:
setup the output directory
setup the mesh
setup the time stepping
define the solution strategy and configure the linear/non-linear algebraic solvers.
other options specific to the toolbox used
directory=toolboxes/fluid/TurekHron/cfd3/P2P1G1 (1)
case.dimension=2 (2)
case.discretization=P2P1G1 (3)
[fluid] (4)
filename=$cfgdir/cfd3.json (5)
mesh.filename=$cfgdir/cfd.geo (6)
gmsh.hsize=0.03 (7)
solver=Newton (8)
pc-type=lu (9)
bdf.order=2 (10)
[ts]
time-step=0.01 (11)
time-final=10 (12)
If the mesh file is stored on a remote storage as Girder, the
mesh.filename option in the previous listing can be replaced by
mesh.filename=girder:{file:5af862d6b0e9574027047fc8}
where
5af862d6b0e9574027047fc8 is the id of the mesh file in the Girder platform. All options for Girder access are listed here : | https://docs.feelpp.org/toolboxes/0.107/parametrized-simulation-using-cfg-files.html | 2022-08-08T08:21:38 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.feelpp.org |
Extensions Template Manager Templates
From Joomla! Documentation
Description
Provides an overview of the Templates available on a Joomla site. The screen is used to preview and edit templates.
How to Access
- Click the Templates button in the Control Panel
- Select Templates in the left sidebar.
- Select Extensions → Templates → Templates from the dropdown menu of the Administrator Panel.
Screenshot
Column Headers
In the table containing templates these are the different columns as shown below. Click on the column heading to sort the list by that column's value.
- Image. Thumbnail of the template.
- Template. The name of the template.
- Version. The version number of the template.
- Date. The date the template was created by the developer.
- Author. The developer of the template.
Column Filters
- Sort ordering. Shows ordering of the selected column, ascending (default) or descending.
- Number of templates to display. Shows the number of templates to display on one page, default is 20 templates. If there are more templates than this number, you can use the page navigation buttons to navigate between pages.
List Filters
Site and Administrator filter
At the top you will see the following filter:
- Site: Filters on Site Templates. This is the default selection and allows you to manage the Templates for the Frontend.
- Administrator: Filters on Administrator Templates. This allows you to manage the Templates for the Backend.
Automatic Pagination
Page Controls. When the number of templates is greater than the number to display on one page, you will see page controls below the list to help you navigate between pages.
Toolbar
At the top right you will see the toolbar.
The functions are:
- Help. Opens this help screen.
- Options. Opens the Options window where settings such as default parameters can be edited.
Related Information
To edit template styles, see Extensions Template Manager Styles.
Build a React app to add users to a Face service
This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or personalization kiosk, based on their face data.
When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
Prerequisites
- An Azure subscription – Create one for free.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
- You'll need the key and endpoint from the resource you created to connect your application to Face API.
- For local development and testing only, the API key and endpoint are environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables.
Important Security Considerations
- For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
- Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
- As a best practice, consider having separate API keys for development and production.
Set up the development environment
- Clone the git repository for the sample app.
- To set up your development environment, follow the React Native documentation . Select React Native CLI Quickstart. Select your development OS and Android as the target OS. Complete the sections Installing dependencies and Android development environment.
- Download your preferred text editor such as Visual Studio Code.
- Retrieve your FaceAPI endpoint and key in the Azure portal under the Overview tab of your resource. Don't check in your Face API key to your remote repository.
- Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant React Native documentation .
Create a user add experience
Now that you have set up the sample app, you can tailor it to your own needs.
For example, you may want to add situation-specific information on your consent page:
Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are:
- Face size (faces that are distant from the camera)
- Face orientation (faces turned or tilted away from camera)
- Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
- Occlusion (partially hidden or obstructed faces) including accessories like hats or thick-rimmed glasses)
- Blur (such as by rapid face movement when the photograph was taken).
The service provides image quality checks to help you make the choice of whether the image is of sufficient quality based on the above factors to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality and show user interface messages to the user to help them capture a higher quality image, select the highest-quality frames, and add the detected face into the Face API service.
Notice the app also offers functionality for deleting the user's information and the option to re-add.
To extend the app's functionality to cover the full experience, read the overview for additional features to implement and best practices.
Deploy the app
First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the security best practices.
When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
Follow the Prepare for release documentation to learn how to generate a private key, sign your application, and generate a release APK.
Once you've created a signed APK, see the Publish your app documentation to learn more about how to release your app.
Next steps
In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their getting started docs to learn more background information. It also may be helpful to familiarize yourself with Face API. Read the other sections on adding users before you begin development.
You're reading the documentation for a version of ROS 2 that has reached its EOL (end-of-life), and is no longer officially supported. If you want up-to-date information, please have a look at Humble.
Building ROS 2 on Fedora Linux
How to setup the development environment?
First install a bunch of dependencies:
$ sudo dnf install cppcheck cmake libXaw-devel opencv-devel poco-devel poco-foundation python3-empy python3-devel python3-nose python3-pip python3-pyparsing python3-pytest python3-pytest-cov python3-pytest-runner python3-setuptools python3-yaml tinyxml-devel eigen3-devel python3-pydocstyle python3-pyflakes python3-coverage python3-mock python3-pep8 uncrustify python3-argcomplete python3-flake8 python3-flake8-import-order asio-devel tinyxml2-devel libyaml-devel python3-lxml
Then install vcstool from pip:
$ pip3 install vcstool
With this done, you can follow the rest of the instructions to fetch and build ROS 2. | https://docs.ros.org/en/crystal/Installation/Fedora-Development-Setup.html | 2022-08-08T06:34:55 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.ros.org |
The widget requires Webix Pro edition and can be purchased as part of any license pack.
Check Diagram What's New list to keep up with the latest updates.
Webix Diagram is a powerful tool for developing robust charts based on hierarchical and arbitrary data structures. Coming as a successor of Organogram, the newcomer is extended with rich capabilities.
Users can also perform all the CRUD, filtering and sorting operations with item blocks. | https://docs.webix.com/desktop__diagram.html | 2022-08-08T06:30:30 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.webix.com |
Gunicorn
Gunicorn is a Python WSGI HTTP Server for UNIX. The Gunicorn server is light on server resources, and fairly speedy.
If you find Apache’s
mod_wsgi to be a headache or want to use NGINX (or some other webserver), then Gunicorn could be for you. There are a number of other WSGI server options out there and this documentation should be enough for you to piece together how to get them working with your environment.
Note
The page contains additional steps on how to setup and configure Gunicorn that are not required for users who decide to stick with the default Gunicorn configuration as described in the main installation guide for AA.
Setting up Gunicorn
Note
- If you’re using a virtual environment, activate it now::
sudo su allianceserver source /home/allianceserver/venv/auth/bin/activate
Install Gunicorn using pip
pip install gunicorn
In your
myauth base directory, try running
gunicorn --bind 0.0.0.0:8000 myauth.wsgi. You should be able to browse to your server's address on port 8000 and see your Alliance Auth installation running. Images and styling will be missing, but don't worry, your web server will provide them.
Once you validate it's running, you can kill the process with Ctrl+C and continue.
Running Gunicorn with Supervisor
If you are following this guide, we already use Supervisor to keep all of Alliance Auth components running. You don’t have to but we will be using it to start and run Gunicorn for consistency.
Sample Supervisor config
You’ll want to edit
/etc/supervisor/conf.d/myauth.conf (or whatever you want to call the config file)
[program:gunicorn]
user = allianceserver
directory=/home/allianceserver/myauth/
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=3 --timeout 120
stdout_logfile=/home/allianceserver/myauth/log/gunicorn.log
stderr_logfile=/home/allianceserver/myauth/log/gunicorn.log
autostart=true
autorestart=true
stopsignal=INT
[program:gunicorn] - Change gunicorn to whatever you wish to call your process in Supervisor.
user = allianceserver - Change to whatever user you wish Gunicorn to run as. You could even set this as allianceserver if you wished. I'll leave the question of the security of that up to you.
directory=/home/allianceserver/myauth/ - Needs to be the path to your Alliance Auth project.
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=3 --timeout 120 - Running Gunicorn and the options to launch with. This is where you have some decisions to make; we'll continue below.
Gunicorn Arguments
See the Commonly Used Arguments or Full list of settings for more information.
Where to bind Gunicorn to
What address are you going to use to reference it? By default, without a bind parameter, Gunicorn will bind to
127.0.0.1:8000. This might be fine for your application. If it clashes with another application running on that port you will need to change it. I would suggest using UNIX sockets too, if you can.
For UNIX sockets add
--bind=unix:/run/allianceauth.sock (or to a path you wish to use). Remember that your web server will need to be able to access this socket file.
For a TCP address add
--bind=127.0.0.1:8001 (or to the address/port you wish to use, but I would strongly advise against binding it to an external address).
Whatever you decide to use, remember it because we’ll need it when configuring your webserver.
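For example, if you go with the UNIX socket shown above, the command= line from the sample Supervisor config would become something like this (the paths are simply the ones used earlier in this guide; adjust them to your own install):

command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=3 --timeout 120 --bind=unix:/run/allianceauth.sock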
Number of workers
By default Gunicorn will spawn only one worker. The number you set this to will depend on your own server environment, how many visitors you have etc. Gunicorn suggests
(2 x $num_cores) + 1 for the number of workers. So for example if you have 2 cores you want 2 x 2 + 1 = 5 workers. See here for the official discussion on this topic.
Change it by adding
--workers=5 to the command.
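If you are unsure how many cores your server has, you can evaluate the formula directly from the shell (this is just arithmetic, not an official Gunicorn tool):

$ python3 -c "import multiprocessing; print(multiprocessing.cpu_count() * 2 + 1)"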
Running with a virtual environment
Following this guide, you are running with a virtual environment. Therefore you’ll need to add the path to the
command= config line.
e.g.
command=/path/to/venv/bin/gunicorn myauth.wsgi
The example config is using the myauth venv from the main installation guide:
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi
Starting via Supervisor
Once you have your configuration all sorted, you will need to reload your supervisor config
service supervisor reload and then you can start the Gunicorn server via
supervisorctl start myauth:gunicorn (or whatever you renamed it to). You should see something like the following
myauth-gunicorn: started. If you get some other message, you’ll need to consult the Supervisor log files, usually found in
/var/log/supervisor/.
Configuring your webserver
Any web server capable of proxy passing should be able to sit in front of Gunicorn. Consult their documentation armed with your
--bind= address and you should be able to find out how to do it relatively easily.
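As a purely illustrative sketch, if you use NGINX together with the UNIX socket bind from the example above, a minimal location block might look like the following. The socket path is the example one used earlier; a real deployment will likely also need static file handling and any extra headers your setup requires:

location / {
    proxy_pass http://unix:/run/allianceauth.sock;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}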
ShipmentType Table (490)
Shipment type list table. Classification of a mailing, allowing recipients to subscribe to lists.
Fields
Indexes
Relationships
Replication Flags
- Replicate changes DOWN from central to satellites and travellers.
- Replicate changes UP from satellites and travellers back to central.
- Copy to satellite and travel prototypes.
Security Flags
- No access control via user's Role.
The native protocol layer encodes protocol messages into binary, before they are sent over the network.
This part of the code lives in its own project: native-protocol. We extracted it to make it reusable (Simulacron also uses it).
The protocol specifications are available in native-protocol/src/main/resources. These files originally come from Cassandra; we copy them over for easy access. Note that, if the latest version is a beta (this is the case for v5 at the time of writing – September 2019), the specification might not be up to date. Always compare with the latest revision in cassandra/doc.
For a broad overview of how protocol types are used in the driver, let’s step through an example:
the user calls
session.execute() with a
SimpleStatement. The protocol message for a
non-prepared request is
QUERY;
CqlRequestHandler uses
Conversions.toMessage to convert the statement into a
c.d.o.protocol.internal.request.Query;
InflightHandler.write assigns a stream id to that message, and wraps it into a
c.d.o.protocol.internal.Frame;
FrameEncoder uses
c.d.o.protocol.internal.FrameCodec to convert the frame to binary.
(All types prefixed with
c.d.o.protocol.internal belong to the native-protocol project.)
A similar process happens on the response path: decode the incoming binary payload into a protocol
message, then convert the message into higher-level driver objects:
ResultSet,
ExecutionInfo,
etc.
Every protocol message is identified by an opcode, and has a corresponding
Message subclass.
A
Frame wraps a message to add metadata, such as the protocol version and stream id.
+-------+  contains  +------------+
| Frame +----------->+  Message   |
+-------+            +------------+
                     | int opcode |
                     +--+---------+
                        |
                        |  +---------+
                        +--+  Query  |
                        |  +---------+
                        |
                        |  +---------+
                        +--+ Execute |
                        |  +---------+
                        |
                        |  +---------+
                        +--+  Rows   |
                           +---------+
                             etc.
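To make the shape of these types concrete, here is a deliberately simplified, self-contained sketch. These are not the real classes — the actual ones live in c.d.o.protocol.internal and carry more fields — but the 0x07 value is the QUERY opcode from the protocol spec:

// Toy model of the structure in the diagram above, for illustration only.
abstract class Message {
  final int opcode;
  protected Message(int opcode) { this.opcode = opcode; }
}

class Query extends Message {
  final String cql;
  Query(String cql) {
    super(0x07); // QUERY opcode in the native protocol spec
    this.cql = cql;
  }
}

class Frame {
  final int protocolVersion;
  final int streamId;
  final Message message;
  Frame(int protocolVersion, int streamId, Message message) {
    this.protocolVersion = protocolVersion;
    this.streamId = streamId;
    this.message = message;
  }
}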
All value classes use integer constants to represent
protocol codes (enums wouldn’t work at that level, because we need to add new codes in the DSE
driver); the driver generally rewraps them in more type-safe structures before exposing them to
higher-level layers.
For every message, there is a corresponding
Message.Codec for encoding and decoding. A
FrameCodec relies on a set of message codecs, for one or more protocol versions. Given an incoming
frame, it looks up the right message codec to use, based on the protocol version and opcode.
Optionally, it compresses frame bodies with a
Compressor.
+-----------------+                +-------------------+
|  FrameCodec[B]  +----------------+ PrimitiveCodec[B] |
+-----------------+                +-------------------+
| B encode(Frame) |
| Frame decode(B) +-------+        +---------------+
+------+----------+       +--------+ Compressor[B] |
       |                           +---------------+
       |
+------+------------+
|   Message.Codec   |   1 codec per opcode
+-------------------+   and protocol version
| B encode(Message) |
| Message decode(B) |
+-------------------+
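The lookup step itself is conceptually simple. The following toy class is not the real implementation, just a self-contained illustration of dispatching on the (protocol version, opcode) pair described above:

import java.util.HashMap;
import java.util.Map;

// Toy illustration only: one codec per (protocol version, opcode) pair,
// failing fast when a frame arrives with an unknown opcode.
class ToyCodecLookup<C> {
  private final Map<Integer, C> codecs = new HashMap<>();

  private static int key(int protocolVersion, int opcode) {
    return (protocolVersion << 8) | opcode;
  }

  void register(int protocolVersion, int opcode, C codec) {
    codecs.put(key(protocolVersion, opcode), codec);
  }

  C lookup(int protocolVersion, int opcode) {
    C codec = codecs.get(key(protocolVersion, opcode));
    if (codec == null) {
      throw new IllegalArgumentException(
          "No codec registered for opcode " + opcode + " in protocol v" + protocolVersion);
    }
    return codec;
  }
}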
Most of the time, you’ll want to use the full set of message codecs for a given protocol version.
CodecGroup provides a convenient way to register multiple codecs at once. The project provides
default implementations for all supported protocol versions, both for clients like the driver (e.g.
encode
QUERY, decode
RESULT) and for servers like Simulacron (decode
QUERY, encode
RESULT).
+-------------+
| CodecGroup  |
+------+------+
       |
       |    +------------------------+
       +----+ ProtocolV3ClientCodecs |
       |    +------------------------+
       |
       |    +------------------------+
       +----+ ProtocolV3ServerCodecs |
       |    +------------------------+
       |
       |    +------------------------+
       +----+ ProtocolV4ClientCodecs |
       |    +------------------------+
       |
       |    +------------------------+
       +----+ ProtocolV4ServerCodecs |
       |    +------------------------+
       |
       |    +------------------------+
       +----+ ProtocolV5ClientCodecs |
       |    +------------------------+
       |
       |    +------------------------+
       +----+ ProtocolV5ServerCodecs |
            +------------------------+
The native protocol layer is agnostic to the actual binary representation. In the driver, this
happens to be a Netty
ByteBuf, but the encoding logic doesn’t need to be aware of that. This is
expressed by the type parameter
B in
FrameCodec<B>.
PrimitiveCodec<B> abstracts the basic
primitives to work with a
B: how to create an instance, read and write data to it, etc.
public interface PrimitiveCodec<B> {
  B allocate(int size);
  int readInt(B source);
  void writeInt(int i, B dest);
  ...
}
Everything else builds upon those primitives. By just switching the
PrimitiveCodec implementation,
the whole protocol layer could be reused with a different type, such as
byte[].
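As an illustration, a stripped-down codec over java.nio.ByteBuffer only needs to provide those primitives. This sketch implements just the three methods quoted above; it is not the driver's real ByteBuf-based implementation, and the full interface has many more methods (longs, strings, byte arrays, collections, ...):

import java.nio.ByteBuffer;

// Minimal sketch of the three primitives shown above, over java.nio.ByteBuffer.
class ToyByteBufferPrimitives {
  ByteBuffer allocate(int size) {
    return ByteBuffer.allocate(size);
  }

  int readInt(ByteBuffer source) {
    return source.getInt();
  }

  void writeInt(int i, ByteBuffer dest) {
    dest.putInt(i);
  }
}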
In summary, to initialize a
FrameCodec, you need:
a
PrimitiveCodec;
a
Compressor (optional);
one or more
CodecGroups.
The driver initializes its
FrameCodec in
DefaultDriverContext.buildFrameCodec().
the primitive codec is
ByteBufPrimitiveCodec, which implements the basic primitives for Netty’s
ByteBuf;
the compressor comes from
DefaultDriverContext.buildCompressor(), which determines the
implementation from the configuration;
it is built with
FrameCodec.defaultClient, which is a shortcut to use the default client groups:
ProtocolV3ClientCodecs,
ProtocolV4ClientCodecs and
ProtocolV5ClientCodecs.
The default frame codec can be replaced by extending the
context to override
buildFrameCodec. This
can be used to add or remove a protocol version, or replace a particular codec.
If protocol versions change,
ProtocolVersionRegistry will likely be affected as well.
Also, depending on the nature of the protocol changes, the driver’s request
processors might require some adjustments: either replace
them, or introduce separate ones (possibly with new
executeXxx() methods on a custom session
interface).
...
The plugin enables analysis of Erlang projects within SonarQube.
It is compatible with the Issues Report plugin to run pre-commit local analysis.
Usage
Run an Analysis with the SonarQube Runner (recommended method)
To run an analysis of your Erlang project, use the SonarQube Runner.
A sample project is available on GitHub that can be browsed or downloaded: /projects/languages/erlang.
Run an Analysis with the other Analyzers
Maven and Ant can also be used to launch analysis on Erlang projects.
...