content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Outputs Be sure the outputs you need are enabled (in the "Outputs" group) in order to generate them. Bitmap2Material can produce the following outputs from a diffuse map: - Base color - Roughness - Metallic - Diffuse - Specular - Glossiness - Normal - Height - Displacement - Bump - Ambient Occlusion - Curvature - Detail Normal - Emissive - Opacity
https://docs.substance3d.com/b2m3/outputs-67797034.html
2021-05-06T09:54:53
CC-MAIN-2021-21
1620243988753.91
[]
docs.substance3d.com
Gail Vanstone, Associate Professor, Department of Humanities (Culture & Expression Program) at York University, Toronto, Canada is the author of D is For Daring, a feminist cultural history of Studio D of the National Film Board (1974-1996) and a four-minute digital documentary Remembering Miriam (1994). Specializing in Canadian cultural production, Vanstone is currently compiling a digital archive of filmmakers, producers, technicians and other key players associated with Studio D. Session Title: A Stitch in Time: Interactivity and Ethical Activism Our proposal takes place at the nexus of feminist activism, participation and ethics. Developing a project executed in augmented reality conceived of as an art gallery installation, we reanimate the voices of second wave feminists bringing them into a creative contemporary collaborative conversation. Underlying the project is a desire to bridge the historical/ideological period known as second-wave feminism, joining it to ideas/issues concerning women today. Our method reinvigorates feminist documentary practices embedded in super-8 film and video production through interactive documentary. As a meditation on the creative tension between politics and history and an invitation to engage in epistemological self-examination, our project must address pressing questions rooted in the above-name triad of considerations. The piece itself is grounded in the act of quilting. Fracturing images from second-wave feminist documentary film, we capture women’s ideas, making them available for reflection and reconfiguration by audiences today. Employing augmented reality, we invite participants not merely to consume the documentary visually but to engage bodily in creating a new documentary fabric, reconstructing fragments from the past with ideas of their own, thus crafting an augmented reality ‘quilt’. This gesture towards the collective practices of women’s creativity and collectivity effectively transforms the role and position of the ‘onlooker’, inviting an ideological and ontological shift from object to subject. The project demands that we think through ethical implications of our invitation to engagement with (our) feminist agenda. We ask: What ethical considerations are bound up in the dynamic act of participation/interaction, taking place both within and around our i-doc project? What are the ethical considerations in inviting and using participant responses as a component of activism? How might we mobilize a strategy that avoids the pitfalls of essentialism, a critique that continues to dog second-wave feminism?
http://i-docs.org/idocs-2012/speakers-2/gail-vanstone/
2018-10-15T19:39:05
CC-MAIN-2018-43
1539583509690.35
[]
i-docs.org
Amazon EC2 Instance Store An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store as well as the number of devices available varies by instance type. The virtual devices for instance store volumes are ephemeral[0-23]. Instance types that support one instance store volume have ephemeral0. Instance types that support two instance store volumes have ephemeral0 and ephemeral1, and so on. Instance Store Lifetime You can specify instance store volumes for an instance only when you launch it. You can't detach an instance store volume from one instance and attach it to a different instance. Therefore, do not rely on instance store for valuable, long-term data. Instead, use more durable data storage, such as Amazon S3, Amazon EBS, or Amazon EFS. When you stop or terminate an instance, every block of storage in the instance store is reset. Therefore, your data cannot be accessed through the instance store of another instance. You must specify the instance store volumes that you'd like to use when you launch the instance (except for NVMe instance store volumes, which are available by default). Then format and mount the instance store volumes before using them. You can't make an instance store volume available after you launch the instance. For more information, see Add Instance Store Volumes to Your EC2 Instance. Some instance types use NVMe or SATA-based solid state drives (SSD) to deliver high random I/O performance. This is a good option when you need storage with very low latency, but you don't need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures. For more information, see SSD Instance Store Volumes. The following table provides the quantity, size, type, and performance optimizations of instance store volumes available on each supported instance type. For a complete list of instance types, including EBS-only types, see Amazon EC2 Instance Types. * Volumes attached to certain instances suffer a first-write penalty unless initialized. ** For more information, see Instance Store Volume TRIM Support.
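The launch-time mapping described above can be expressed through the EC2 API. The boto3 sketch below is illustrative only: the AMI ID, instance type, and device name are placeholders rather than values from this page, and the instance type you choose must actually offer instance store volumes.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Map the first instance store volume (ephemeral0) at launch time;
# it cannot be added after the instance is running.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m3.medium",          # placeholder instance-store-backed type
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "xvdb", "VirtualName": "ephemeral0"},
    ],
)
print(response["Instances"][0]["InstanceId"])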
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/InstanceStorage.html
2018-10-15T20:32:44
CC-MAIN-2018-43
1539583509690.35
[array(['images/instance_storage.png', 'Amazon EC2 instance storage'], dtype=object) ]
docs.aws.amazon.com
Debugging Scripts If a form script is loaded using an XHR from a web server, it is executed using eval(). To debug it, you need to use browser-specific debugger extensions. Debugging Form Scripts in Google Chrome If you are using the Google Chrome debugger, you can add the debugger; directive to the source code of the script:
<form role="form">
  <script cam-script type="text/form-script">
    debugger;
    // rest of the form script
  </script>
</form>
https://docs.camunda.org/manual/7.7/reference/embedded-forms/javascript/debugging/
2018-10-15T19:10:03
CC-MAIN-2018-43
1539583509690.35
[]
docs.camunda.org
Viewing log messages If you want to debug a wolkenkit application, you may want to have a look at the log messages of its various processes. Getting a snapshot To get the log messages of an application use the wolkenkit logs command of the CLI. This will get a snapshot of the log messages: wolkenkit logs Getting live updates If you are not only interested in a snapshot, but want to follow the log messages in real-time, you need to additionally provide the --follow flag: wolkenkit logs --follow Formatting log messages Either way, the log messages are nothing but stringified JSON objects, so they can be somewhat hard to read. To format them install flaschenpost by running the following command: npm install -g flaschenpost Then you can pipe the log messages through flaschenpost and view them as nicely formatted output: wolkenkit logs | flaschenpost-normalize | flaschenpost-uncork
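Assuming the --follow flag and the flaschenpost pipe compose as described above, following and formatting the live log stream in a single step would presumably look like:

wolkenkit logs --follow | flaschenpost-normalize | flaschenpost-uncork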
https://docs.wolkenkit.io/1.1.0/reference/debugging-an-application/viewing-log-messages/
2018-10-15T19:54:29
CC-MAIN-2018-43
1539583509690.35
[]
docs.wolkenkit.io
Thruster Commander Introduction Why do I need the Thruster Commander?. Why can’t I use the Thruster Commander with other ESCs? Most ESCs are designed for RC airplane or multi-copter use. Motors on these craft never need to be run in reverse, so most ESCs are uni-directional. However the Thruster Commander is designed to be used with bi-directional ESCs. Because of the different operating natures of uni-directional and bi-directional ESCs, the signals required to stop these types of ESCs are different, so the Thruster Commander’s safety features (which are meant to stop all attached motors) will not work with uni-directional ESCs. For this reason the Thruster Commander should never be used with uni-directional ESCs. It is possible to upload bi-directional firmware to some standard ESCs. More information can be found in the Basic ESC documentation. Important Notes Only use the Thruster Commander with reversible/bi-directional ESCs (e.g. Blue Robotics ESCs). Standard ESCs are not compatible with the Thruster Commander’s safety features. See below for more information. Neither the Thruster Commander nor its included accessories are waterproof. Make sure to mount them in locations where they will not get wet. Deadman switches are recommended on any mobile Thruster Commander application. ESCs are necessary for powering brushless DC (BLDC) motors (e.g. T100 & T200 thrusters), even to run at full speed. Most ESCs require a neutral input signal to complete their initialization. This is a safety feature! The Thruster Commander will output a neutral signal when: - SWITCH is open (i.e. not connected to ground), - input potentiometers are centered, or - not enough potentiometers are connected. Setup Quick Start - Connect the power cable to the POWER connector in the top left of the board - Attach the potentiometer(s) to the inputs (see Modes of Operation for input configurations) - Each potentiometer’s 3-pin connector can be reversed to reverse its input - Attach electronic speed controllers (ESCs) to the output pins - The black/brown wire should be connected to the GND pin - These connectors should not be reversed - Apply 7-28 VDC power to the POWER connector and ESCs - To initialize the ESCs, either: - Turn the knobs to their center position, or - Turn the external switch to the off position - That’s it! You’re ready to go. Modes of Operation Powering Thruster Commander The Thruster Commander can be powered in one of two ways: 7-28 VDC or regulated 5 VDC. 7-28 VDC power can be supplied via the POWER connector in the top left corner of the board. This can be connected directly to any 7-28 VDC power supply, including the batteries powering your motors. Make sure the ESCs and motors you are using can handle the voltage you are providing; the Thruster Commander can handle up to 28 volts, but our thrusters cannot: see our thruster documentation for more details. 5 VDC regulated power is available from some ESCs with built-in battery eliminator circuits (BECs) which can power the Thruster Commander via the PWM cable, eliminating the need for the 7-28 VDC power input. These ESCs will have a third, red wire on their PWM input cables which carries the 5 VDC supply. Note that not all ESCs with such three-wire PWM input cables have built-in BECs. If none of your ESCs have BECs, you will need to supply either 7-28 VDC power via the POWER connector or 5 VDC regulated power from an external regulator/BEC. 
Our Basic ESCs do not have built-in BECs and are not able to power the Thruster Commander; they will require the use of the 7-28 VDC power input. Connecting the Battery and ESCs A 6-pole barrier block and jumpers are provided for connecting power between the Thruster Commander, ESCs, and the provided XT90 connector/your power source of choice. The six poles of the barrier block can be connected as shown into two sets of three with the provided jumpers; one set for power (red) and one for ground (black). The spade connectors from the XT90 connector/power source can be connected beneath two of the jumpers (the spade connectors are too wide to fit on top); this helps to limit the distance the current to the ESCs must flow through the jumpers and barrier blocks. The Thruster Commander power cable and ESCs can then be connected along the other side of the barrier block. Note that the Thruster Commander power cable should be omitted if the Commander is being powered via a BEC (see above). The Thruster Commander, ESCs, and batteries do not contain reverse-polarity or short-circuit protections. Double-check your wiring before applying power for the first time. Mounting Thruster Commander Two M3x0.5 holes spaced 31.75 mm (1.25”) apart on the back of each Thruster Commander provide a solid way to mount it on your project. Alternatively the Thruster Commander can be mounted with double-sided tape. However you mount the Thruster Commander, make sure it is in a location where it will stay dry, because the Thruster Commander and its accessories are not waterproof. The potentiometers can be mounted through a 10 mm (3/8”) hole in a panel or box up to 2.5 mm (0.1”) thick. After removing the nut and washer from the potentiometer, pass the potentiometer through the mounting hole from the back of the panel or box. Replace and tighten the nut and washer, then attach the knobs to the potentiometer by tightening the set screw onto the flat section of the potentiometer shaft using the provided 1.5 mm allen wrench. Deadman Switch/Enabling Output To enable PWM outputs from Thruster Commander, the center SWITCH pin must be connected to ground, indicated on the board as “GND”. This is achieved out-of-the-box with a jumper between the center and GND pins of SWITCH. A deadman switch can be added by replacing this jumper with any normally-open (NO) momentary switch connected between the center pin and GND. External “enable” switches can also be used. Should the switch be released or become disconnected from Thruster Commander, the SWITCH-to-GND connection will be broken and Thruster Commander will stop all connected motors. If you are using the Thruster Commander on a vehicle, we highly recommend that you install a deadman switch to prevent it from running off without you. Modifying or Replacing Potentiometers The potentiometers included with the Thruster Commander are standard 10 kΩ potentiometers with center detents. These can be replaced with any potentiometers with a resistance of 10 kΩ or less. We recommend those with center detents to make it easier to set the motors to neutral. To fit the knobs, replacement potentiometers should have 6 mm diameter shafts with flats for a set screw. Should different-length cables for the potentiometers be necessary, the wires may be lengthened or shortened as desired. However they should not be run directly alongside wires carrying power for motors, as they may interfere with the potentiometers’ analog outputs (i.e. sharing a tether). 
Operation ESC Initialization Most BLDC ESCs have a built-in safety feature that requires a neutral input signal (1500 µs) before they will complete their initialization. While this reduces the chances of a thruster unexpectedly turning on and lacerating you or your friends, it necessitates an extra initialization step at runtime. The Thruster Commander outputs a neutral PWM signal to connected ESCs when: - SWITCH is open (i.e. not connected to ground), - input potentiometers are centered, or - not enough potentiometers are connected. This condition must be held until the ESC gives a high-pitched beep (the fifth beep after being powered on in the case of the Blue Robotics BasicESC). This beep signifies that the ESC has finished its initialization and is ready to accept an input. If you are using a deadman switch connected to SWITCH, simply release the button until the ESC beeps to initialize. Should you be using the included jumper, you will need to center the knobs to initialize the ESCs. Controlling Motors Each Thruster Commander contains two output channels (LEFT OUTPUT and RIGHT OUTPUT) with two sets of output pins each. This means that the Thruster Commander can control two sets of motors independently. Three modes of operation are supported: Single Input, Dual Input, and Mixed Input. The mode is selected by connecting potentiometers to specific inputs. See Modes of Operation for valid combinations of potentiometer connections. Single Input uses a single potentiometer connected to the SPEED input to drive both the RIGHT and LEFT OUTPUTs identically. Dual Input uses two potentiometer inputs, one on RIGHT IN and one on LEFT IN. The input on RIGHT IN will dictate the output on RIGHT OUTPUT and the input on LEFT IN will dictate the output on LEFT OUTPUT. Mixed Input is designed for use with crafts with left and right thrusters for differential thrust steering. In these setups, SPEED controls the base speed of the craft and STEERING controls the differential thrust. The 3-pin connectors on the potentiometers are reversible, so if turning a knob runs the motors the wrong way, the connector can be reversed to correct the problem. Note that only the potentiometer inputs are reversible; the SWITCH and OUTPUT connectors should not be reversed. Should you be using a BLDC motor such as the T100 or T200, each motor’s direction can be reversed by swapping any two of the three connections between the ESC and the motor. Potentiometers can be hot-swapped. However it is advised that this only be done while the outputs are disabled (SWITCH disconnected from GND or power to thrusters disconnected), as hot-swapping potentiometers while the outputs are enabled may cause one or more motors to suddenly start running. Troubleshooting Specifications 2D Drawings Thruster Commander Board 3D Model All 3D models are provided in zip archives containing the follow file types: - SolidWorks Part (.sldprt) - IGES (.igs) - STEP (.step) - STL (.stl)
http://docs.bluerobotics.com/commander/
2018-10-15T20:12:00
CC-MAIN-2018-43
1539583509690.35
[array(['Thruster-Commander-Banner.png', None], dtype=object) array(['CMDR-QUICK-START-DIAGRAM-R1-01.png', None], dtype=object)]
docs.bluerobotics.com
Configuring authorization When a list handles an event that results in adding a new item, this item inherits the authorization from the event. This ensures that the authorization is consistent across events and lists. Additionally, the user that caused the event becomes the owner of the list item. E.g., configure the authorization of an invoice in a way that authenticated users can receive issued events, but public users can't: const initialState = { isAuthorized: { commands: {}, events: { issued: { forAuthenticated: true, forPublic: false } } } }; Now, make the list of invoices handle this event and add a new item: const when = { 'accounting.invoice.issued' (invoices, event, mark) { invoices.add({ amount: event.data.amount }); mark.asDone(); } }; Then, the new item will be readable by authenticated users, but not by public users. Additionally, the user who caused the issued event becomes the owner of this list item. Granting access at runtime Sometimes you need to grant or revoke access to a list item at runtime. For that, use the authorize function. This function requires you to provide a where clause to select the desired items as well as the access rights that you want to change. To grant access to any authenticated user set the forAuthenticated flag to true, to grant access to anyone set the forPublic flag to true. To revoke access use false instead. Not providing a flag at all is equivalent to not changing the current configuration. E.g., to grant read access to all invoices that have an amount less than 1000 to all authenticated users, and revoke access for public users at the same time, use the following code: invoices.authorize({ where: { amount: { $lessThan: 1000 }}, forAuthenticated: true, forPublic: false }); Transferring ownership To transfer ownership of a list item, use its transferOwnership function. This function requires you to provide the id of the new owner using the to property. E.g., to transfer ownership of all invoices that have an amount less than 1000 to the user with the id 09ee43c9-5abc-4e9b-acc3-e8b75a3e4b98, use the following code: invoices.transferOwnership({ where: { amount: { $lessThan: 1000 }}, to: '09ee43c9-5abc-4e9b-acc3-e8b75a3e4b98' }); Only known users If you provide an id of a non-existent user, the ownership will be transferred anyway. You will not be able to return to the previous state.
https://docs.wolkenkit.io/1.1.0/reference/creating-the-read-model/configuring-authorization/
2018-10-15T19:17:12
CC-MAIN-2018-43
1539583509690.35
[]
docs.wolkenkit.io
Service Event Statuses¶ Similar to Test Instance Statuses, Service Events have a status associated with them. These Service Event Statuses help manage the flow of a Service Event from initiation to completion. To create Service Event Statuses go to the Admin section and click the Service Event Statuses link in the Service Log section and then click the Add Service Event Status button. Fill in the fields as follows: - Name A short descriptive name for the status - Is default Check off whether this should be considered the default Service Event Status when initiating a Service Event. - Is review required Do service events with this status require review? - RTS qa review required Service events with Return To Service (RTS) QA that has not been reviewed cannot have this status selected if set to true. For example, you may have an Approved Service Event Status that requires one or more Test Lists to be performed and approved before the Service Event can have its status set to Approved. - Description A description of this status - Color Service Event Statuses can have different colours associated with them.
http://docs.qatrackplus.com/en/latest/admin/service_log/statuses.html
2018-10-15T19:57:51
CC-MAIN-2018-43
1539583509690.35
[]
docs.qatrackplus.com
Sometimes your application needs to execute a transaction that should be retried if it fails. For example, your REST API might be handling an HTTP request in its own database transaction, and if it fails for transient reasons, you simply want to "replay" the whole request from the start, in a fresh transaction. One of those transient reasons might be a deadlock during a SERIALIZABLE or REPEATABLE READ transaction. Another reason might be that your network connection to the database fails, and perhaps you don't just want to give up when that happens. In situations like these, the right thing to do is often to restart your transaction from scratch. You won't necessarily want to execute the exact same SQL commands with the exact same data, but you'll want to re-run the same application code that produced those SQL commands. The transactor framework makes it a little easier for you to do this safely, and avoid typical pitfalls. You encapsulate the work that you want to do in a transaction into something that you pass to perform. Transactors come in two flavours. The old, pre-C++11 way is to derive a class from the transactor template, and pass an instance of it to your connection's connection_base::perform member function. That function will create a transaction object and pass it to your transactor, handle any exceptions, commit or abort, and repeat as appropriate. The new, simpler C++11-based way is to write your transaction code as a lambda (or other callable), which creates its own transaction object, does its work, and commits at the end. You pass that callback to pqxx::perform. If any given attempt fails, its transaction object goes out of scope and gets destroyed, so that it aborts implicitly. Your callback can return its results to the calling code. The transactor's name. Optional overridable function to be called if transaction is aborted. This need not imply complete failure; the transactor will automatically retry the operation a number of times before giving up. on_abort() will be called for each of the failed attempts. One parameter is passed in by the framework: an error string describing why the transaction failed. This will also be logged to the connection's notice processor. Optional overridable function to be called after successful commit. If your on_commit() throws an exception, the actual back-end transaction will remain committed, so any changes in the database remain regardless of how this function terminates. Overridable function to be called when "in doubt" about outcome. This may happen if the connection to the backend is lost while attempting to commit. In that case, the backend may have committed the transaction but is unable to confirm this to the frontend; or the transaction may have failed, causing it to be rolled back, but again without acknowledgement to the client program. The best way to deal with this situation is typically to wave red flags in the user's face and ask him to investigate. The robusttransaction class is intended to reduce the chances of this error occurring, at a certain cost in performance. Overridable transaction definition; insert your database code here. The operation will be retried if the connection to the backend is lost or the operation fails, but not if the connection is broken in such a way as to leave the library in doubt as to whether the operation succeeded. In that case, an in_doubt_error will be thrown. 
Recommended practice is to allow this operator to modify only the transactor itself, and the dedicated transaction object it is passed as an argument. This is what makes side effects, retrying etc. controllable in the transactor framework. Referenced by pqxx::transactor< TRANSACTION >::transactor(). Simple way to execute a transaction with automatic retry. Executes your transaction code as a callback. Repeats it until it completes normally, or it throws an error other than the few libpqxx-generated exceptions that the framework understands, or after a given number of failed attempts, or if the transaction ends in an "in-doubt" state. (An in-doubt state is one where libpqxx cannot determine whether the server finally committed a transaction or not. This can happen if the network connection to the server is lost just while we're waiting for its reply to a "commit" statement. The server may have completed the commit, or not, but it can't tell you because there's no longer a connection. Using this still takes a bit of care. If your callback makes use of data from the database, you'll probably have to query that data within your callback. If the attempt to perform your callback fails, and the framework tries again, you'll be in a new transaction and the data in the database may have changed under your feet. Also be careful about changing variables or data structures from within your callback. The run may still fail, and perhaps get run again. The ideal way to do it (in most cases) is to return your result from your callback, and change your program's data after perform completes successfully. This function replaces an older, more complicated transactor framework. The new function is a simpler, more lambda-friendly way of doing the same thing. Referenced by pqxx::connection_base::set_client_encoding(). This has been superseded by the new transactor framework and pqxx::perform. Invokes the given transactor, making at most Attempts attempts to perform the encapsulated code. If the code throws any exception other than broken_connection, it will be aborted right away. References pqxx::transactor< TRANSACTION >::operator()().
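To make the lambda-based flavour concrete, here is a minimal sketch built around pqxx::perform; the connection string, table name, and column are placeholders rather than anything defined on this page.

#include <iostream>
#include <pqxx/pqxx>

int main()
{
  // Placeholder connection string; adjust for your environment.
  pqxx::connection conn{"dbname=example"};

  // pqxx::perform retries the callback on the transient failures it
  // understands, and gives up on an in-doubt state or a foreign exception.
  int count = pqxx::perform([&conn]
  {
    pqxx::work tx{conn};                                   // fresh transaction per attempt
    pqxx::row row = tx.exec1("SELECT count(*) FROM mytable");
    tx.commit();
    return row[0].as<int>();                               // return results out of the callback
  });

  std::cout << "rows: " << count << '\n';
}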
https://libpqxx.readthedocs.io/en/latest/a00204.html
2018-10-15T19:27:55
CC-MAIN-2018-43
1539583509690.35
[]
libpqxx.readthedocs.io
Here, the matter is straightforward. If the pixel value is greater than a threshold value, it is assigned one value (may be white), else it is assigned another value (may be black). The function used is cv.threshold(). For Otsu's binarization the same cv.threshold() function is used, but you pass an extra flag, cv.THRESH_OTSU. If you are not interested, you can skip this. Since we are working with bimodal images, Otsu's algorithm tries to find a threshold value (t) which minimizes the weighted within-class variance given by the relation : \[\sigma_w^2(t) = q_1(t)\sigma_1^2(t)+q_2(t)\sigma_2^2(t)\] where \[q_1(t) = \sum_{i=1}^{t} P(i) \quad \& \quad q_2(t) = \sum_{i=t+1}^{I} P(i)\]
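A minimal sketch of the two calls discussed above; the input filename is a placeholder and any 8-bit grayscale image will do.

import cv2 as cv

# Load a grayscale image (placeholder filename).
img = cv.imread('noisy.png', cv.IMREAD_GRAYSCALE)

# Simple global thresholding with a fixed threshold of 127.
ret1, th1 = cv.threshold(img, 127, 255, cv.THRESH_BINARY)

# Otsu's binarization: the threshold argument is ignored and the optimal
# value is computed from the histogram; it is returned in ret2.
ret2, th2 = cv.threshold(img, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
print('Otsu threshold:', ret2)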
https://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html
2018-10-15T19:40:42
CC-MAIN-2018-43
1539583509690.35
[]
docs.opencv.org
slaves and builders as child services. The botmaster acts as the parent service for a buildbot.process.botmaster.BuildRequestDistributor instance (at master.botmaster.brd) as well as all active slaves (buildbot.buildslave.AbstractBuildSlave instances), debug client and manhole. master.status - A buildbot.status.master.Status instance that provides access to all status data. This instance is also the service parent for all status listeners.
http://docs.buildbot.net/current/developer/master-overview.html
2016-08-29T17:59:12
CC-MAIN-2016-36
1471982290497.47
[]
docs.buildbot.net
Considering checksum type when planning array LUN size and number When planning the number and size of array LUNs that you need for ONTAP, you must consider the impact of the checksum type on the amount of usable space in the array LUN. A checksum type must be specified for each array LUN assigned to an ONTAP system. When an array LUN on the storage array is mapped to be used by an ONTAP system, ONTAP treats the array LUN as a raw, unformatted disk. When you assign an array LUN to an ONTAP system, you specify the checksum type, which tells ONTAP how to format the raw array LUN. The impact of the checksum type on usable space depends on the checksum type you specify for the LUN.
https://docs.netapp.com/us-en/ontap-flexarray/install/concept_considering_checksum_type_when_planning_array_lun_size_and_number.html
2022-01-17T02:19:44
CC-MAIN-2022-05
1642320300253.51
[]
docs.netapp.com
- CONSTRAINT - One or more security constraint values (levels and categories) are being changed for the session. - row_level_security_constraint_name - Name of an existing constraint. - The specified constraint_name must be currently assigned to the user. - You can specify a maximum of 6 hierarchical constraints and 2 non-hierarchical constraints per SET SESSION CONSTRAINT statement. - level_name - Name of a hierarchical level, valid for the constraint_name, that is to replace the default level. - The specified level_name must be currently assigned to the user. Otherwise, Vantage returns an error to the requestor. - category_name - Name of a non-hierarchical category, valid for the constraint_name. - NULL - If you specify NULL and then update a table, only users with OVERRIDE privileges can subsequently access the affected table rows.
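The statement these parameters belong to is SET SESSION CONSTRAINT. The sketch below is only an assumed shape inferred from the parameter descriptions above; the constraint, level, and category names are hypothetical, and the authoritative syntax diagram should be consulted for the exact form.

SET SESSION CONSTRAINT = classification_level (top_secret),
                         classification_category (finance, hr);

-- Resetting a constraint:
SET SESSION CONSTRAINT = classification_category (NULL);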
https://docs.teradata.com/r/UG7kfQnbU2ZiX41~Mu75kQ/5EggOyjoY9Owc6BztfUUMQ
2022-01-17T00:58:52
CC-MAIN-2022-05
1642320300253.51
[]
docs.teradata.com
Treasure Data allows you to import data directly from your organization’s Zendesk account. For sample workflows on importing data from Zendesk, view Treasure Boxes. Basic knowledge of Treasure Data, a Zendesk account, and a Zendesk Zopim account to retrieve Chat data are required. Open the TD Console. Navigate to Integrations Hub > Catalog. Search and select Zendesk. Click Create. You are creating an authenticated connection. The following dialog opens. There are three options for Auth method: basic, token, oauth. To import chat data, enter the chat login URL in the Login Url field for Zendesk. There is no support for token authentication with Chat. Fill in all the required fields, then click Continue. Name your new Zendesk connection. Click Done. After creating the authenticated connection, you are automatically taken to the Authentications tab. Edit the appropriate fields. Preview your data. To make changes, click Advanced Settings. Select Next. After selecting Advanced Settings, the following dialog opens. Edit the parameters. Select Save and Next. Select a database and a table where you want to transfer your data. Specify the schedule of the data transfer using the following dialog and click Next. Name your source and click Done. Install the newest TD Toolbelt. Prepare seed.yml as shown in the following example, with your login_url, username (email), token, and target. In this example, you use “append” mode. A token can be created by going to Admin Home –> CHANNELS –> API –> "add new token" ( https://<YOUR_DOMAIN_NAME>.zendesk.com/agent/admin/api). target specifies the type of object that you want to dump from Zendesk. tickets, ticket_events, ticket_forms, ticket_fields, users, organizations, scores, recipients, object_records, relationship_records and user_events are supported. For more details on available out modes, see Appendix. Use connector:guess. This command automatically reads the target data, and intelligently guesses the data format. If you open the load.yml file, you see guessed file format definitions including, in some cases, file formats, encodings, column names, and types. Then you can preview how the system parses the file by using the preview command. If the system guesses your column name or type incorrectly, modify the load.yml directly and preview again. Submit the load job. It may take a couple of hours depending on the data size. Users need to specify the database and table where their data is stored. The preceding command assumes that you have already created the database (td_sample_db) and table (td_sample_table). If the database or the table do not exist in TD, this command will not succeed, so create the database and table manually or use the --auto-create-table option with the td connector:issue command to automatically create the database and table. You can load records incrementally from Zendesk by using the incremental flag. If False, the start_time and end_time in next.yml are not updated. The connector will always fetch all the data from Zendesk with static conditions. If True, the start_time and end_time are updated in next.yml. The default is True. You can schedule periodic data connector execution for periodic Zendesk import. We carefully configure our scheduler to ensure high availability. By using this feature, you no longer need a cron daemon on your local data center. A new schedule can be created by using the td connector:create command. The name of the schedule, cron-style schedule, the database and table where their data is stored, and the data connector configuration file are required.
You can see the list of scheduled entries with td connector:list. td connector:show shows the execution setting of a schedule entry. td connector:history shows the execution history of a schedule entry. To investigate the results of each individual execution, use td job <jobid>. td connector:delete removes the schedule. You can specify the file import mode in the out section of seed.yml. In append mode, which is the default, records are appended to the target table. In replace mode, the data in the target table is replaced; any manual schema changes made to the target table remain intact with this mode. You can specify the includes option to get related objects. For example, if you want to get tickets data with comments, use a configuration as follows. List of Options for Zendesk Data Connector
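As referenced above, seed.yml bundles the connection settings consumed by td connector:guess. The layout below is an assumed sketch only: the keys mirror the options described on this page, while the domain, e-mail address, and token are placeholders.

in:
  type: zendesk
  login_url: https://your-domain.zendesk.com   # placeholder domain
  auth_method: token                           # basic, token, or oauth
  username: admin@example.com                  # placeholder e-mail
  token: YOUR_API_TOKEN                        # placeholder token
  target: tickets                              # tickets, users, organizations, ...
out:
  mode: append                                 # append (default) or replace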
https://docs.treasuredata.com/plugins/viewsource/viewpagesrc.action?pageId=17404340
2022-01-17T02:00:32
CC-MAIN-2022-05
1642320300253.51
[]
docs.treasuredata.com
Guake User Documentation¶ Welcome to the official Guake User Documentation. Guake is a dropdown terminal made for the GNOME desktop environment. Guake’s style of window is based on a famous FPS game, and one of its goals is to be easy to reach and developer friendly. Main Features¶ - Lightweight - Simple, Easy and Elegant - Smooth integration of terminal into GUI - Appears when you call and disappears once you are done by pressing a predefined hotkey (F12 by default) - Compiz transparency support - Multi tab - Plenty of color palettes - Quick Open in your favorite text editor with a click on a file name (with line number support) - Customizable hotkeys for tab access, reorganization, background transparency, font size,… - Extremely configurable - Configure Guake startup by running a bash script when Guake starts - Multi-monitor support (open on a specified monitor, open on mouse monitor) - Save terminal content to file - Open URL to your browser Guake Documentation - User Manual - Project information - Contributing to Guake Useful links¶
https://guake.readthedocs.io/en/stable/index.html
2022-01-17T01:01:13
CC-MAIN-2022-05
1642320300253.51
[array(['_images/intro-small.jpg', '_images/intro-small.jpg'], dtype=object)]
guake.readthedocs.io
DeleteAnomalyDetector Deletes a detector. Deleting an anomaly detector will delete all of its corresponding resources including any configured datasets and alerts. Request Syntax POST /DeleteAnomalyDetector HTTP/1.1 Content-type: application/json { "AnomalyDetectorArn": "string" } URI Request Parameters The request does not use any URI parameters. Request Body The request accepts the following data in JSON format. - AnomalyDetectorArn The ARN of the detector to delete. Type: String Length Constraints: Maximum length of 256. Pattern: arn:([a-z\d-]+):.*:.*:.*:.+ Errors - ConflictException - There was a conflict processing the request. Try your request again. HTTP Status Code: 409
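The same request issued through boto3 might look like the sketch below; the client name assumes the standard lookoutmetrics service identifier and the ARN is a placeholder.

import boto3

client = boto3.client("lookoutmetrics", region_name="us-east-1")

# Deleting the detector also removes its configured datasets and alerts.
client.delete_anomaly_detector(
    AnomalyDetectorArn="arn:aws:lookoutmetrics:us-east-1:111122223333:AnomalyDetector:example-detector"
)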
https://docs.aws.amazon.com/lookoutmetrics/latest/api/API_DeleteAnomalyDetector.html
2022-01-17T02:20:44
CC-MAIN-2022-05
1642320300253.51
[]
docs.aws.amazon.com
GroupDocs.Comparison for .NET 21.7 Release Notes This page contains release notes for GroupDocs.Comparison for .NET 21.7 Major Features Below is the list of most notable changes in release of GroupDocs.Comparison for .NET 21.7: - Improved processing of paragraphs with style changes in PDF format - Fixed display of paragraphs with a table of contents in PDF format - Fixed an issue when comparing some annotations in PDF format - Fixed display of ComponentType property in the list of changes from the GetChanges() method - Improved image generation in PreviewOptions for Words documents Full List of Issues Covering all Changes in this Release Public API and Backward Incompatible Changes none
https://docs.groupdocs.com/comparison/net/groupdocs-comparison-for-net-21-7-release-notes/
2022-01-17T00:44:44
CC-MAIN-2022-05
1642320300253.51
[]
docs.groupdocs.com
Bootloader¶ Depthai bootloader is a small program which aids in booting and updating bootloader or depthai application packages. To be able to run hostless, the Depthai bootloader must be first flashed to the devices flash. This step is required only once. Plug USB to the board Flash bootloader using DeviceBootloader::flashBootloader (Check Example at the bottom) Disconnect the board and switch the boot mode GPIO to the following settings: BOOT[4:0] : 01000 (see attached images for reference) Reassemble the board Once the device has the bootloader flashed, it will perform the same as before. Running pipelines with a host connected doesn’t require any changes. Suggested workflow is to perform as much of development as possible with the host connected as the iteration cycle is greatly improved. Once desired pipeline is created, use the following function to flash: DeviceBootloader::flash API¶ DeviceBootloader is a class to communicate with the bootloader. It is used to flash created Pipeline, depthai application package or update the bootloader itself. progressCb parameter takes a callback function, which will be called each time an progress update occurs (rate limited to 1 second). This is mainly used to inform the user of the current flashing progress. You can also check the version of the current bootloader by using the Bootloader Version example. DepthAI Application Package (.dap)¶ Depthai application package is a binary file format which stores sections of data. The purpose of this format is to be able to extract individual sections and do OTA updates without requiring to update all data. Example: Between update 1 and 2 of users application, Depthai firmware, Asset storage (50MiB neural network) and asset structure remained the same, but some additional processing nodes were added to the pipeline. Instead of transferring the whole package only Pipeline description can be sent and updated. Depthai application package (.dap) consists of: SBR (512B header which describes sections of data) Depthai device firmware (section “__firmware”) Pipeline description (section “pipeline”) Assets structure (section “assets”) Asset storage (section “asset_storage”) Example¶ Following section will show an example of: Flashing bootloader (needed only once) and flashing a created Pipeline “myExamplePipeline” to the device (The example is written in Python, similar steps apply to C++) Flashing bootloader import depthai as dai (f, bl) = dai.DeviceBootloader.getFirstAvailableDevice() bootloader = dai.DeviceBootloader(bl) progress = lambda p : print(f'Flashing progress: {p*100:.1f}%') bootloader.flashBootloader(progress) Note Make sure to switch GPIO BOOT mode settings (See image below for more details) Flashing created pipeline import depthai as dai # ... # Create Pipeline 'myExamplePipeline' # ... (f, bl) = dai.DeviceBootloader.getFirstAvailableDevice() bootloader = dai.DeviceBootloader(bl) progress = lambda p : print(f'Flashing progress: {p*100:.1f}%') bootloader.flash(progress, myExamplePipeline) GPIO boot settings. Boot settings must be set as following: BOOT[4:0] : 01000 and GPIO58 (WAKEUP): 0 Got questions? We’re always happy to help with code or other questions you might have.
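The Bootloader Version example mentioned above can be reduced to a few lines; the sketch below assumes DeviceBootloader.getVersion() as exposed by recent depthai-python releases.

import depthai as dai

(found, device_info) = dai.DeviceBootloader.getFirstAvailableDevice()
if found:
    bootloader = dai.DeviceBootloader(device_info)
    print(f'Bootloader version: {bootloader.getVersion()}')
else:
    print('No device found')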
https://docs.luxonis.com/projects/api/en/latest/components/bootloader/
2022-01-17T00:47:48
CC-MAIN-2022-05
1642320300253.51
[array(['../../_images/boot-depthai.jpeg', 'boot-depthai'], dtype=object)]
docs.luxonis.com
Deploying a service template in SCVMM 2012 using a Windows core VHD fails This article fixes an issue in which you receive error 22042 when you deploy a service template in System Center 2012 Virtual Machine Manager. Original product version: System Center 2012 Virtual Machine Manager Original KB number: 2680242 Symptoms When attempting to deploy a service template in System Center 2012 Virtual Machine Manager using a Windows core virtual hard disk (VHD), the process fails with the following errors: Error (22042) The service and returned a result exit code (87). Error (22010) VMM failed to enable Server Manager PowerShell on the guest virtual machine. Please log into the virtual machine and look in the event logs (%WINDIR%\Logs\Dism\dism.log). Cause This is by design. Service templates using Windows core operating systems are not currently supported. Resolution The following workarounds are available: Enable all the roles or features in the virtual machine prior to creating the Sysprep VHD. Run DISM.exe as part of the application deployment using a pre-install script. - Executable program: C:\Windows\system32\dism.exe - Parameters: /online /norestart /enable-feature /featurename:NetFx2-ServerCore /featurename:NetFx3-ServerCore /featurename:DNS-Server-Core /featurename:DirectoryServices-DomainController-ServerFoundation - Timeout: 240 seconds
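For reference, combining the executable and parameters listed above into a single command line would look roughly as follows (one line, wrapped here with cmd continuation characters for readability):

C:\Windows\system32\dism.exe /online /norestart /enable-feature ^
    /featurename:NetFx2-ServerCore ^
    /featurename:NetFx3-ServerCore ^
    /featurename:DNS-Server-Core ^
    /featurename:DirectoryServices-DomainController-ServerFoundation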
https://docs.microsoft.com/en-US/troubleshoot/system-center/vmm/deploy-service-template-error-22042
2022-01-17T02:42:07
CC-MAIN-2022-05
1642320300253.51
[]
docs.microsoft.com
USB Device MSC Class - USB Device MSC Class Overview - USB Device MSC Class Resource Needs from Core - USB Device MSC Class Configuration - USB Device MSC Class Programming Guide - USB Device MSC Class Storage Drivers This section describes the mass storage device class (MSC) supported by Silicon Labs USB Device. MSC is a protocol that enables the transfer of information between a USB device and a host. The information being transferred is anything that can be stored electronically, such as executable programs, source code, documents, images, configuration data, or other text or numeric data. The USB device appears as an external storage medium to the host, enabling the transfer of files via drag and drop. A file system defines how the files are organized in the storage media. The USB mass storage class specification does not require any particular file system to be used on conforming devices. Instead, it provides a simple interface to read and write sectors of data using the Small Computer System Interface (SCSI) transparent command set. As such, operating systems may treat the USB drive like a hard drive, and can format it with any file system they like. The USB mass storage device class supports two transport protocols, as follows: Bulk-Only Transport (BOT) Control/Bulk/Interrupt (CBI) Transport (used only for floppy disk drives) The mass storage device class implements the SCSI transparent command set using the BOT protocol only, which signifies that only bulk endpoints will be used to transmit data and status information. The MSC implementation supports multiple logical units. The MSC implementation is in compliance with the following specifications: Universal Serial Bus Mass Storage Class Specification Overview, Revision 1.3 Sept. 5, 2008. Universal Serial Bus Mass Storage Class Bulk-Only Transport, Revision 1.0 Sept. 31, 1999. USB Device MSC Class Overview Protocol In this section, we will discuss the Bulk-Only Transport (BOT) protocol of the Mass Storage Class. The Bulk-Only Transport protocol has three stages: The Command Transport The Data Transport The Status Transport Mass storage commands are sent by the host through a structure called the Command Block Wrapper (CBW). For commands requiring a data transport stage, the host will attempt to send or receive the exact number of bytes from the device as specified by the length and flag fields of the CBW. After the data transport stage, the host attempts to receive a Command Status Wrapper (CSW) from the device that details the status of the command as well as any data residue (if any). For commands that do not include a data transport stage, the host attempts to receive the CSW directly after CBW is sent. The protocol is detailed in Figure - MSC Protocol. Figure - MSC Protocol Figure 12 MSC Protocol Endpoints On the device side, in compliance with the BOT specification, the MSC is composed of the following endpoints: A pair of control IN and OUT endpoints called default endpoint. A pair of bulk IN and OUT endpoints. The table below indicates the different usages of the endpoints. Table - MSC Endpoint Usage Class Requests There are two defined control requests for the MSC BOT protocol. These requests and their descriptions are detailed in the table below. Table - Mass Storage Class Requests Small Computer System Interface (SCSI) At the programming interface level, the MSC device implements one of the standard storage-media communication protocols, like SCSI and SFF-8020i (ATAPI). 
The "Programming Interface" specifies which protocol is implemented, and helps the host operating system to load the suitable device driver for communicating with the USB storage device. SCSI is the most common protocol used with USB MSC storage devices. We provide an implementation for MSC SCSI subclass that our GSDK users can use out of the box. SCSI is a set of standards for handling communication between computers and peripheral devices. These standards include commands, protocols, electrical interfaces and optical interfaces. Storage devices that use other hardware interfaces, such as USB, use SCSI commands for obtaining device/host information and controlling the device’s operation and transferring blocks of data in the storage media. SCSI commands cover a vast range of device types and functions and as such, devices need a subset of these commands. In general, the following commands are necessary for basic communication: INQUIRY READ CAPACITY(10) READ(10) REQUEST SENSE TEST UNIT READY WRITE(10) USB Device MSC Class Resource Needs from Core Each time you add an MSC class instance to a USB configuration via the function sl_usbd_msc MSC Class Configuration Two groups of configuration parameters are used to configure the MSC class: USB Device MSC Class Application-Specific Configurations USB Device MSC Class Logical Unit Configuration USB Device MSC Class Application-Specific Configurations Class Compile-Time Configurations Silicon Labs USB Device MSC class and SCSI subclass are configurable at compile time via #defines located in the sl_usbd_core_config.h file. Table - Generic Configuration Constants Class Instance Creation Creating a USB Device MSC SCSI class instance is done by calling the sl_usbd_msc_scsi_create_instance() function. This function takes one configuration argument that is described below. p_scsi_callbacks p_scsi_callbacks is a pointer to a configuration structure of type sl_usbd_msc_scsi_callbacks_t. In addition to the common usb device class callbacks connect/disconnect, it provides the MSC class with a set of optional callback functions that are called when an event occurs on the logical unit. A null pointer ( NULL) can be passed to this argument if no callbacks are needed. The table below describes each configuration field available in this configuration structure. Table - sl_usbd_msc_scsi_callbacks_t Configuration Structure USB Device MSC Class Logical Unit Configuration Adding a logical unit to an MSC class instance is done by calling the function sl_usbd_msc_lun_add(). This function takes one configuration argument that is described below. p_lu_info p_lu_info is a pointer to a structure of type sl_usbd_msc_scsi_lun_info_t. Its purpose is to provide the information on the logical unit to the MSC class. The table below describes each configuration field available in this configuration structure. Table - sl_usbd_msc_scsi_lun_info_t Configuration Structure USB Device MSC Class Programming Guide This section explains how to use the MSC class. Initializing the USB Device MSC Class Adding a USB Device MSC SCSI Class Instance to Your Device USB Device MSC Class Logical Unit Handling Initializing the USB Device MSC Class To add MSC SCSI class functionality to your device, first initialize the MSC base class and the SCSI subclass by calling the function sl_usbd_msc_init() and sl_usbd_msc_scsi_init(). The example below shows how to call sl_usbd_msc_init() and sl_usbd_msc_scsi_init(). 
Example - Calling sl_usbd_msc_init() and sl_usbd_msc_scsi_init() sl_status_t status; status = sl_usbd_msc_init(); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } status = sl_usbd_msc_scsi_init(); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } Adding a USB Device MSC SCSI Class Instance to Your Device To add MSC SCSI class functionality to your device, first create an instance, then add it to your device's configuration(s). You must add at least one logical unit to your instance. Creating an MSC SCSI Class Instance Create an MSC SCSI class instance by calling the function sl_usbd_msc_scsi_create_instance(). The example below shows how to call sl_usbd_msc_scsi_create_instance() using default arguments. For more information on the configuration arguments to pass to sl_usbd_msc_scsi_create_instance(), see USB Device MSC Class Application Specific Configurations. Example - Calling sl_usbd_msc_scsi_create_instance() uint8_t class_nbr; sl_status_t status; sl_usbd_msc_scsi_callbacks_t app_usbd_msc_scsi_callbacks = { .enable = NULL, .disable = NULL, .host_eject = NULL }; status = sl_usbd_msc_scsi_create_instance(&app_usbd_msc_scsi_callbacks, &class_nbr); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } Adding the MSC Class Instance to Your Device's Configuration(s) After you have created an MSC class instance, you can add it to a configuration by calling the function sl_usbd_msc_add_to_configuration(). The example below shows how to call sl_usbd_msc_scsi_add_to_configuration() using default arguments. Example - Calling sl_usbd_msc_scsi_add_to_configuration() sl_status_t status; status = sl_usbd_msc_scsi_add_to_configuration(class_nbr, (1) config_nbr_fs); (2) if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } (1) Class number to add to the configuration returned by sl_usbd_msc_scsi_create_instance(). (2) Configuration number (here adding it to a Full-Speed configuration). USB Device MSC Class Logical Unit Handling Adding a Logical Unit When adding a logical unit to your MSC SCSI class instance, it must be bound to a storage medium (RAMDisk, SD card, flash memory, etc). The MSC class uses a storage driver to communicate with storage media. This driver will need to be supplied when adding the logical unit. The example below shows how to add a logical unit via sl_usbd_msc_scsi_lun_add(). Example - Adding a Logical Unit via sl_usbd_msc_scsi_lun_add() sl_usbd_msc_scsi_lun_t *lu_object_ptr = NULL; sl_usbd_msc_scsi_lun_info_t lu_info; sl_status_t status; lu_info.sl_usbd_msc_scsi_lun_api_t = &app_usbd_scsi_storage_block_device_api; lu_info.vendor_id_ptr = "Silicon Labs"; lu_info.product_id_ptr = "block device example"; lu_info.product_revision_level = 0x1000u; lu_info.is_read_only = false; status = sl_usbd_msc_scsi_lun_add(class_nbr, &lu_info, &lu_object_ptr); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } Attaching/Detaching a Storage Medium After the logical unit has been added, a storage medium must be attached to be available from the host side. The MSC class offers two functions to control the storage media association to the logical unit: sl_usbd_msc_scsi_lun_attach() and sl_usbd_msc_scsi_lun_detach(). These functions allow you to emulate the removal of a storage device in order to re-gain access from the embedded application if necessary.
The example below shows how to use the function sl_usbd_msc_scsi_lun_attach() and sl_usbd_msc_scsi_lun_detach(). Example - Media Attach/Detach sl_status_t status; status = sl_usbd_msc_scsi_lun_attach(lu_object_ptr); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } ... (1) status = sl_usbd_msc_scsi_lun_detach(lu_object_ptr); if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } ... (2) status = sl_usbd_msc_scsi_lun_attach(lu_object_ptr) if (status != SL_STATUS_OK) { /* An error occurred. Error handling should be added here. */ } ... (3) (1) From this moment, if the MSC device is connected to a host, the storage media is accessible. (2) If the MSC device is connected to a host, the media will now appear as unavailable. At this moment, operations can be performed on the media from the embedded application. (3) Again, if the MSC device is connected to the host, the storage media will appear as connected. USB Device MSC Class Storage Drivers The USB Device MSC Class needs a storage driver to communicate with a storage medium. For the moment, Silicon Labs doesn't offer drivers. The driver API is defined by typedef sl_usbd_msc_scsi_lun_api_t. Your sl_usbd_msc_scsi_lun_api_t variable must be included to your sl_usbd_msc_scsi_lun_info_t variable, passed as argument when you add a logical unit with sl_usbd_msc_scsi_lun_add(). See section USB Device MSC SCSI API for more details on the structures. The storage driver implementation can be as simple as an array of sectors in RAM. Typical sector size (i.e., block size) is 512 for mass storage devices, and 2048 for CD-ROMs.
https://docs.silabs.com/usb/1.0/05c-usb-device-class-msc-scsi
2022-01-17T01:04:29
CC-MAIN-2022-05
1642320300253.51
[]
docs.silabs.com
Health and Value Dashboards Many enterprises deploy BMC Server Automation across large scale, distributed infrastructure. When managing a large implementation, it can be challenging to visualize the infrastructure that is deployed and understand how well it is performing. For example, administrators commonly ask: - How many of my agents are currently reachable? - How many agent licenses am I using? - Are any of my Application Servers misconfigured? - How big is my database and is that size a problem? - Is network latency affecting performance? The Health Dashboard provides a single interface that gathers information about system components and assesses how well they are functioning. You can also set up a Value Dashboard that provides metrics on the time and money saved by using this product. For more information, see Using the Health and Value Dashboards.
https://docs.bmc.com/docs/ServerAutomation/86/getting-started/key-concepts/health-and-value-dashboards
2019-04-18T13:24:51
CC-MAIN-2019-18
1555578517639.17
[]
docs.bmc.com
Development:Synchronization Service Overview This page contains notes for the reimplementation of CollectiveAccess' master/slave sync'ing system. This system keeps one or more CollectiveAccess systems synchronized with changes made to one or more "master" systems. Typical use cases are: - Sync a public-facing server running a front-end collections web site with a back-end cataloguing server. Changes are always made to the back-end and sync'ed periodically to the front-end for display to the public. - Periodically sync several back-end servers to a single front-end collections web site. This is done for consortia portals such as NovaMuse, where 52 museum back-end systems are presented on a single web site. This work is being performed within the scope of the 1.6 release planned for late 2015. Current implementation The current implementation is an ad-hoc script that uses the now-deprecated web service API. To perform a sync the "slave" system executes a search on the "master" together with a modified:"after <timestamp>" where the timestamp is the date/time of the last sync. The search can be configured but is typically "*", which returns all records modified since the last sync. For each record in the returned set the script pulls the item-level data, then queues any related records it is configured to sync for sync. It will recursively spider related records until it hits a record with no related record, or only related records it has already sync'ed in the current session. The current process makes the following assumptions: - The configuration of the two systems is exactly the same in terms of metadata element codes, types and lists. Internal table primary key ids don't need to match but idno's, list codes and element codes do. - Related media should be pulled over in its original form and reprocessed using local rules. - All sync'ing is focused on a primary kind of record (Eg. ca_objects) with other kinds (Eg. ca_entities, ca_collections) pulled in as needed via relations to the primary. - All communication is done via HTTP, typically on port 80, for simplicity and to avoid firewall headaches. - Sync'ing is done periodically via cron job. While it could be run at any interval it is typically run daily. The current process has several problems: - It's relatively slow since it pulls item level data for records one at a time using discrete service calls. - It's relatively slow because it spiders across a network of related records. For example, sync'ing a single object record may cascade to a sync of hundreds of objects related to entities related to the initial object. The script does not analyze the change history of related records, but rather blindly syncs them. - It can miss sync'ing changes: - when records are changed in the period between when it starts and completes a sync. - when the change in the master is to a related record that is not logged against the subject. (Eg. an entity is related to an object but the object's modified time is not incremented) - when records are deleted. Some versions of the script can detect deletions but this is all very hacky. - when the idno of the record being sync'ed changes on the master. The current script uses idno's to pair records on the master with their slave equivalents. If the master idno changes the script will create a new record and orphan the old one which will no longer get updates (not to mention being an unwanted duplicate) - Sync will fail if the related list or list item used on the master is not on the slave.
- Sync cannot deal with two items with the same idno. This should not but does happen with some datasets. When sync'ing several masters into a single slave (Eg. a consortium like NovaMuse) this is bound to happen and does happen. The NovaMuse script has a hack to deal with this, but general support is needed. - Sync replicates media by pulling the original by URL to the slave and reprocessing using slave-specific rules. Depending upon who you talk to this is a feature (the slave can have only the derivatives it needs; don't waste bandwidth pulling 8 or 10 or 12 files per record) or a bug (the slave needs to be able to do serious media processing and have all server-side processing applications installed). - Handling of FT_MEDIA and FT_FILE metadata attributes seems to be broken. - Have not tested sync with InformationService metadata attributes; probably broken. - Not entirely sure sync'ing of hierarchical records works in all cases. Some features that are not standard currently, but have been hacked into various iterations of the script and should be standard in a new implementation: - Rules-based quality control, allowing the sync to reject records that don't validate. When this has been done in the current script it's just a lump of project-specific code. For the new implementation it could be supported using expressions or plugins or both. - Ability to report rejected records back to the master. These reports can be made available to cataloguers and system administrators. - Filtering sync'ed records by access - Filtering sync'ed records by type - Filtering sync'ed records by source - Ability to configure media sync'ing to use either on-slave processing (the current arrangement) or simple copy of all, or selected, derivatives. - Ability to configure on-slave media processing to use a version other than the original. This can be useful for storage-constrained slaves, or for cases where it is not desirable to expose high-quality original media on a public server. Features of new implementation Key features: - Use current web services APIs. These may be currently available ones, or new ones optimized for sync. - Devise a sync protocol that can: - Support periodic or near-real-time change tracking - Avoid missing changes on the master; this may require making changes to change logging on the master to log a wider variety of changes in related records. - Properly handle deletes. - Handle spidering more efficiently by: - Only sync'ing related records that have actually changed. - Pulling item-level information in batches rather than one at a time (perhaps we precompute the set of records that are needed before sync'ing?) - Create a registry associating master internal id's to IDNO's, to make it possible to track IDNO changes on the master. Protocol - Need to devise a unique "system id" (hostname/config setting!?) and a GUID for each record (system_id + primary record id) - Leverage the sequential change log we already have to compute the full set of records that need to be pulled in advance - Keep a sync log on the slave with the last sequence# (change log id) that was successfully applied for this master -- keep in mind slaves can have multiple masters - Treat relationships like every other record. That way we don't have to worry about "spidering" graphs for changed records. If a relationship is new or has changed then sync it. - Have to add change logging for relationship models - Replicator should be a discrete utility/script that talks to both sides using REST APIs.
Some users may want to run it on the slave side, most will use the master -- but we shouldn't make assumptions about "pushing" and "pulling".
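As a rough illustration of the pull-based replication loop sketched in the Protocol notes above, here is a minimal Python sketch. The endpoint paths, parameter names, and response fields used below are assumptions for illustration only and are not the actual CollectiveAccess service API; the point is simply the shape of the loop: read the last applied change-log id from the slave, pull batches of log entries from the master in sequence order, apply them, and record the new high-water mark.

import requests

# Hypothetical endpoints -- the real replication API may differ.
MASTER_URL = "https://master.example.org/service/replication"
SLAVE_URL = "https://slave.example.org/service/replication"
MASTER_SYSTEM_ID = "master-01"  # assumed unique "system id" for the master

def get_last_applied_seq(master_id):
    # Ask the slave for the last change-log id applied for this master.
    r = requests.get(f"{SLAVE_URL}/lastseq", params={"system_id": master_id})
    r.raise_for_status()
    return r.json().get("last_seq", 0)

def pull_log_entries(since_seq, limit=500):
    # Pull a batch of change-log entries from the master, in sequence order.
    r = requests.get(f"{MASTER_URL}/log", params={"since": since_seq, "limit": limit})
    r.raise_for_status()
    return r.json().get("entries", [])

def apply_entries(entries):
    # Apply a batch of entries (inserts, updates, deletes, relationships) on the slave.
    r = requests.post(f"{SLAVE_URL}/apply",
                      json={"system_id": MASTER_SYSTEM_ID, "entries": entries})
    r.raise_for_status()
    return r.json()["last_applied_seq"]

def replicate_once():
    last_seq = get_last_applied_seq(MASTER_SYSTEM_ID)
    while True:
        entries = pull_log_entries(last_seq)
        if not entries:
            break  # caught up with the master's change log
        last_seq = apply_entries(entries)

if __name__ == "__main__":
    replicate_once()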
https://docs.collectiveaccess.org/wiki/Development:Synchronization_Service
2019-04-18T12:18:48
CC-MAIN-2019-18
1555578517639.17
[]
docs.collectiveaccess.org
Creates a curve that a hanging chain or cable assumes under its own weight when supported only at its ends. Draw a conic section curve with options for the start, end, apex, and rho value. Draw a conic curve perpendicular at the start. Draw a conic curve tangent at the start. Draw a conic curve tangent at start and end. Draw a curve from control point locations. Fit a curve through point objects. Creates a control-point curve through polyline vertices. Creates an interpolated curve through polyline vertices. Draw chained Bézier curves with editing handles. Draw a helical curve with options for number of turns, pitch, vertical, reverse, and around a curve. Draw a vertical helical curve. Draw a hyperbolic curve from focus points, vertices, or coefficient. Fit a curve through picked locations. Draw chained Bézier curves with editing handles. Fit a curve through locations on a surface. Draw a parabolic curve from focus points. Draw a parabolic curve from vertex and end. Draw a parabolic curve through three picked points. Create curves between two open or closed input curves. Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 12-Apr-2019
https://docs.mcneel.com/rhino/6/help/en-us/toolbarmap/curve_toolbar.htm
2019-04-18T12:30:20
CC-MAIN-2019-18
1555578517639.17
[]
docs.mcneel.com
Summary This guide covers the design and construction of road drainage (stormwater) culverts. These drain water from a water table to the outside of a road. They are often made of corrugated PVC. It is also common practice to construct a sediment trap immediately before a culvert inlet. Culverts used to cross rivers are described in FPGs Crossings. File type: File size: 3.28 MB Pages: 9 Download document
https://docs.nzfoa.org.nz/forest-practice-guides/erosion-and-sediment-control-measures/2.4-erosion-and-sediment-control-measures-road-drainage-stormwater-culverts/
2019-04-18T13:02:00
CC-MAIN-2019-18
1555578517639.17
[]
docs.nzfoa.org.nz
Table of Contents Package Management: A Hands-On Explanation WORK IN PROGRESS Anatomy of a Slackware package A Slackware package is a simple TGZ or TXZ compressed archive containing: - the tree structure of files and directories; - post-installation scripts; - the package description. The name of every package provides several pieces of information: - the program name; - the program version; - the architecture of the package; - the build number. Here are a few examples: emacs-24.2-i486-1 mozilla-firefox-15.0.1-i486-1 vim-7.3.645-x86_64-1 Managing Slackware packages using the traditional tools Since its early releases, Slackware has provided a collection of simple tools - the pkgtools - enabling the user to install, upgrade and remove software packages, as well as build them: installpkg removepkg upgradepkg explodepkg makepkg Installing software packages Install the Emacs editor from the Slackware DVD 1): # mount /dev/cdrom /mnt/cdrom # cd /mnt/cdrom/slackware/e # installpkg emacs-24.2-i486-1.txz Verifying package emacs-24.2-i486-1.txz. Installing package emacs-24.2-i486-1.txz [ADD]: PACKAGE DESCRIPTION: # emacs (GNU Emacs) # # Emacs is the extensible, customizable, self-documenting real-time # display editor. If this seems to be a bit of a mouthful, an # easier explanation is that Emacs is a text editor and more. At # its core is an interpreter for Emacs Lisp, a dialect of the Lisp # programming language with extensions to support text editing. # This version supports X. # # # Executing install script for emacs-24.2-i486-1.txz. Package emacs-24.2-i486-1.txz installed. Checking if a package is installed The package installation process has created a new entry in /var/log/packages: # ls /var/log/packages/em* /var/log/packages/emacs-24.2-i486-1 Knowing if a package is installed boils down to checking the existence of the corresponding entry in /var/log/packages. Example: # ls /var/log/packages/*firefox* /var/log/packages/mozilla-firefox-15.0.1-i486-1 Firefox is installed on the system, in version 15.0.1. Another example: # ls /var/log/packages/kdebase* ls: cannot access /var/log/packages/kdebase*: No such file or directory There is no kdebase-* package installed on the system. Removing a package Use removepkg to remove an installed package. The command can take the simple basename of the package as an argument. Example: # removepkg emacs It's also possible to provide the complete name as an argument. In that case, it's better to call the command from within /var/log/packages and use tab completion: # cd /var/log/packages # removepkg emacs-24.2-i486-1 Upgrading a package Slackware provides security updates for its latest releases. Visit the official site to know more about the latest updates: # links - Follow the ChangeLogs link. - Select the Slackware-stable ChangeLog. - Read the file ChangeLog.txt corresponding to the architecture of your system. You can also use the Links browser to fetch updates manually. Before launching Links, create a /root/updates directory 2) to store your downloaded updates: # cd # mkdir updates # cd updates/ # links mirrors.slackware.com - Follow the Slackware File Tree link. - Check out the directory corresponding to your release and architecture. - Change into the patches/packages directory. - Download any available updates.
Quit Links and install your updates like this: # upgradepkg bind-9.9.1_P4-i486-1_slack14.0.txz +============================================================================== | Upgrading bind-9.9.1_P3-i486-1 package using ./bind-9.9.1_P4-i486-1_slack14.0.txz +============================================================================== Pre-installing package bind-9.9.1_P4-i486-1_slack14.0... Removing package /var/log/packages/bind-9.9.1_P3-i486-1-upgraded-2012-11-21,12:14:32... --> Deleting /usr/doc/bind-9.9.1-P3/CHANGES --> Deleting /usr/doc/bind-9.9.1-P3/COPYRIGHT --> Deleting /usr/doc/bind-9.9.1-P3/FAQ ... Verifying package bind-9.9.1_P4-i486-1_slack14.0.txz. Installing package bind-9.9.1_P4-i486-1_slack14.0.txz: PACKAGE DESCRIPTION: bind (DNS server and utilities) # # The named daemon and support utilities such as dig, host, and # nslookup. Sample configuration files for running a simple caching # nameserver are included. Documentation for advanced name server # setup can be found in /usr/doc/bind-9.x.x/. # Executing install script for bind-9.9.1_P4-i486-1_slack14.0.txz. Package bind-9.9.1_P4-i486-1_slack14.0.txz installed. Package bind-9.9.1_P3-i486-1 upgraded with new package ./bind-9.9.1_P4-i486-1_slack14.0.txz. Another example: # upgradepkg iptables-1.4.14-i486-2_slack14.0.txz Know more about the contents of a package Every package has a corresponding entry in /var/log/packages. These are all simple text files providing information about the contents of the respective packages. Example: # less /var/log/packages/wget-1.14-i486-1 PACKAGE NAME: wget-1.14-i486-1 COMPRESSED PACKAGE SIZE: 478.5K UNCOMPRESSED PACKAGE SIZE: 2.0M PACKAGE LOCATION: /var/log/mount/slackware/n/wget-1.14-i486-1.txz PACKAGE DESCRIPTION: wget: wget (a non-interactive network retriever) wget: wget: GNU Wget is a free network utility to retrieve files from the wget: World Wide Web using HTTP and FTP, the two most widely used Internet wget: protocols. It works non-interactively, thus enabling work in the wget: background after having logged off. wget: wget: The author of Wget is Hrvoje Niksic <[email protected]>. wget: wget: wget: FILE LIST: ./ install/ install/slack-desc install/doinst.sh usr/ usr/bin/ usr/bin/wget usr/man/ usr/man/man1/ usr/man/man1/wget.1.gz usr/info/ usr/info/wget.info.gz ... Managing Slackware packages with slackpkg The slackpkg utility has been officially included in Slackware since the 13.0 release. It enables the user to manage Slackware packages much more comfortably. A few remarks: - Only official Slackware packages are handled by slackpkg. - Third-party packages can be managed if you use Matteo Rossini's slackpkg+ plugin. - Dependencies still have to be managed manually. Initial configuration Edit /etc/slackpkg/mirrors and uncomment one and only one package source, for example: # /etc/slackpkg/mirrors ... # FRANCE (FR) ... Take care not to select a Slackware-current mirror. If you do that, you will upgrade to a development version of Slackware! If you prefer managing packages locally without the benefit of updates, you can still use the Slackware installation DVD as a package source. In that case, you will have to configure the default mount point: # /etc/slackpkg/mirrors ... #---------------------------------------------------------------- # Local CD/DVD drive #---------------------------------------------------------------- cdrom://mnt/cdrom/ ...
Don't forget to mount the DVD before calling slackpkg: # mount /dev/cdrom /mnt/cdrom Update the information on available packages: # slackpkg update Run slackpkg update before searching for, installing or upgrading a package, so that the system's information about available packages is up to date. Installing packages Example with a single package: # slackpkg install mplayerplug-in Confirm the installation in the subsequent screen, and the package is automatically downloaded and installed. You can also provide several packages as an argument: # slackpkg install mplayerplug-in bittorrent You can also manage whole package groups: # slackpkg install kde Another example for package groups: # slackpkg install xfce Remove packages Example with a single package: # slackpkg remove mplayerplug-in As above, confirm the removal of the package in the subsequent screen. Remove several packages at once: # slackpkg remove mplayerplug-in bittorrent Likewise, you can remove a whole package group: # slackpkg remove kde Or: # slackpkg remove xfce Upgrading packages When a package update is available, you can install it using the following command: # slackpkg upgrade iptables Update several packages at once: # slackpkg upgrade mozilla-firefox mozilla-thunderbird It is common practice to keep your whole system up to date: # slackpkg upgrade-all Searching for packages Search for a package by name: # slackpkg search k3b Looking for k3b in package list. Please wait... DONE The list below shows all packages with name matching "k3b". [uninstalled] - k3b-2.0.2_20120226.git-i486-1 If the package is already installed, here's what you get: # slackpkg search Terminal Looking for Terminal in package list. Please wait... DONE The list below shows all packages with name matching "Terminal". [ installed ] - Terminal-0.4.8-i486-1 You can also search for individual files. The search will display the package or packages containing the file in question: # slackpkg file-search libncurses.so Looking for libncurses.so in package list. Please wait... DONE The list below shows the packages that contains "libncurses\.so" file. [ installed ] - aaa_elflibs-14.0-i486-4 [ installed ] - ncurses-5.9-i486-1 If you want to know more about the content of a package: # slackpkg info mesa PACKAGE NAME: mesa-8.0.4-i486-1.txz PACKAGE LOCATION: ./slackware/x PACKAGE SIZE (compressed): 19208 K PACKAGE SIZE (uncompressed): 83930 K PACKAGE DESCRIPTION: mesa: mesa (a 3-D graphics library) mesa: mesa: Mesa is a 3-D graphics library with an API very similar to that of mesa: another well-known 3-D graphics library. :-) The Mesa libraries are mesa: used by X to provide both software and hardware accelerated graphics. mesa: mesa: Mesa was written by Brian Paul. mesa: Cleaning the system Remove all third-party packages: # slackpkg clean-system If you decide to keep some of the packages, simply unselect them in the subsequent screen. You can also use slackpkg to repair a damaged package. Let's say I accidentally deleted the file /usr/bin/glxgears. First, I have to search for the package providing that file: # slackpkg file-search glxgears Looking for glxgears in package list. Please wait... DONE The list below shows the packages that contains "glxgears" file. [ installed ] - mesa-8.0.4-i486-1 With this information, I can simply reinstall the package: # slackpkg reinstall mesa Rebuild official packages Slackware provides the entire system's source code in the source directory. Every binary system package has its corresponding source directory.
These source directories usually contain: - the source code for the application or the library; - its build recipe in the form of a *.SlackBuild file; - the package description in a slack-desc file; - optionally, a post-installation script named doinst.sh; - various other files like patches, custom menu entries, etc. Build a package from source In the example below, we will build the Terminal application from the source code provided by Slackware. You might want to remove the corresponding package if it is installed. The Terminal package is Xfce's terminal. In Slackware 14.1, the package has been renamed to xfce4-terminal. # removepkg Terminal Choose an appropriate place on your system to store the source code and the scripts, for example: # cd # mkdir -pv source/Terminal mkdir: created directory 'source' mkdir: created directory 'source/Terminal' # cd source/Terminal/ # links mirrors.slackware.com Fetch the content from the source/xfce/Terminal directory on a Slackware mirror. Here's what we get: # ls -lh total 1,4M -rw-r--r-- 1 root root 821 nov. 24 15:09 slack-desc -rw-r--r-- 1 root root 1,4M nov. 24 15:11 Terminal-0.4.8.tar.xz -rw-r--r-- 1 root root 3,6K nov. 24 15:10 Terminal.SlackBuild Make the Terminal.SlackBuild file executable and start the building process: # chmod +x Terminal.SlackBuild # ./Terminal.SlackBuild The script initiates the package compilation. If everything goes as expected, the operation exits with the following message: Slackware package /tmp/Terminal-0.4.8-i486-1.txz created. Now we can install the resulting package: # installpkg /tmp/Terminal-0.4.8-i486-1.txz Modify an official Slackware package The main reason for rebuilding an official package is to modify it, for example to add or strip certain functionalities. In the following example, we will rebuild the audacious-plugins package in order to modify the Audacious audio player. The vanilla application sports two different graphical interfaces, and we will disable one of them. Let's begin by removing the package if it is installed: # removepkg audacious-plugins Now create a suitable directory to store the source code: # cd /root/source # mkdir audacious-plugins # cd audacious-plugins # links mirrors.slackware.com Fetch the contents of the /source/xap/audacious-plugins directory and make the audacious-plugins.SlackBuild script executable: # chmod +x audacious-plugins.SlackBuild # ls -lh total 1,4M -rw-r--r-- 1 root root 1,4M nov. 24 15:28 audacious-plugins-3.3.1.tar.xz -rwxr-xr-x 1 root root 4,0K nov. 24 15:28 audacious-plugins.SlackBuild* -rw-r--r-- 1 root root 892 nov. 24 15:28 slack-desc Now edit audacious-plugins.SlackBuild and add one option: ... # Configure: CFLAGS="$SLKCFLAGS" \ CXXFLAGS="$SLKCFLAGS" \ ./configure \ --prefix=/usr \ --libdir=/usr/lib${LIBDIRSUFFIX} \ --sysconfdir=/etc \ --mandir=/usr/man \ --enable-amidiplug \ --disable-gtkui \ -> add this option --program-prefix= \ --program-suffix= \ ${ARCHOPTS} \ --build=$ARCH-slackware-linux ... Build and install the package: # ./audacious-plugins.SlackBuild ... Slackware package /tmp/audacious-plugins-3.3.1-i486-1.txz created. # installpkg /tmp/audacious-plugins-3.3.1-i486-1.txz Choosing your configuration options for compiling The source configuration script (or more exactly the sometimes very long line in the SlackBuild beginning with ./configure) often displays an overview of enabled and/or disabled options. To interrupt the package construction process and display this overview, you can temporarily edit the SlackBuild like this: ...
# Configure: CFLAGS="$SLKCFLAGS" \ CXXFLAGS="$SLKCFLAGS" \ ./configure \ --prefix=/usr \ --libdir=/usr/lib${LIBDIRSUFFIX} \ --sysconfdir=/etc \ --mandir=/usr/man \ --enable-amidiplug \ --program-prefix= \ --program-suffix= \ ${ARCHOPTS} \ --build=$ARCH-slackware-linux exit 1 -> add this line to interrupt the script # Build and install: make $NUMJOBS || make || exit 1 make install DESTDIR=$PKG || exit 1 ... Now run the script and wait a few seconds for the configuration overview: # ./audacious-plugins.SlackBuild ... Configuration: ... Interfaces ---------- GTK (gtkui): yes Winamp Classic (skins): yes Use the ./configure --help option to display a list of all the possible options: # tar xvf audacious-plugins-3.3.1.tar.xz # cd audacious-plugins-3.3.1 # ./configure --help | less ... --disable-speedpitch disable Speed and Pitch effect plugin --disable-gtkui disable GTK interface (gtkui) --disable-skins disable Winamp Classic interface (skins) --disable-lyricwiki disable LyricWiki plugin (default=enabled) ... Note that the SlackBuild script leaves an extracted source tree under the /tmp directory, so you can simply run ./configure --help | less from that directory, without manually uncompressing the source tarball to the current directory. Once you've chosen all your configuration options, get rid of the temporary exit 1 command in your script and launch the build and installation process: # ./audacious-plugins.SlackBuild ... Slackware package /tmp/audacious-plugins-3.3.1-i486-1.txz created. # installpkg /tmp/audacious-plugins-3.3.1-i486-1.txz Building third-party packages Slackware offers only a limited choice of packages compared to behemoth distributions like Ubuntu or Debian. More often than not, you'll want to install a package that's not provided by the distribution. In that case, what can a poor boy do? The SlackBuilds.org website is probably the best address to find third-party software. You won't find any packages there, because SlackBuilds.org is not a binary package repository nor will it ever be. It's an extremely clean and well-organized collection of build scripts, each one reviewed and tested. Using these scripts will enable you to build just about every piece of third-party software under the sun. Building packages using the SlackBuilds.org scripts In the following example, we will build and install the cowsay package using the build script provided by SlackBuilds.org. For a start, cd into the build directory we've defined earlier: # cd /root/source Download the following components into this directory: - the compressed tarball containing the scripts to build the package; - the compressed source code tarball. In our case: # links - In the Search field in the upper left corner of the screen, type cowsay, move the cursor to Search (Cursor Down key) and confirm by hitting Enter. - Follow the cowsay link on the search results page. - Once you're on the cowsay page, download the SlackBuild (cowsay.tar.gz) and the source code (cowsay-3.03.tar.gz) and quit Links. You can also use lynx instead of links. Here are our two downloaded tarballs: # ls -l cowsay* -rw-r--r-- 1 root root 15136 nov. 25 08:14 cowsay-3.03.tar.gz -rw-r--r-- 1 root root 2855 nov.
25 08:14 cowsay.tar.gz Uncompress the tarball containing the scripts: # tar xvzf cowsay.tar.gz cowsay/ cowsay/cowsay.SlackBuild.patch cowsay/README cowsay/slack-desc cowsay/cowsay.SlackBuild cowsay/cowsay.info Optionally, you can do a little cleanup and delete the tarball: # rm -f cowsay.tar.gz Now move the source tarball to the newly created cowsay/ directory: # mv -v cowsay-3.03.tar.gz cowsay/ « cowsay-3.03.tar.gz » -> « cowsay/cowsay-3.03.tar.gz » Here's what we have: # tree cowsay cowsay |-- cowsay-3.03.tar.gz |-- cowsay.info |-- cowsay.SlackBuild |-- cowsay.SlackBuild.patch |-- README `-- slack-desc Now cd into that directory. Check if the cowsay SlackBuild is executable, and then launch it to start the package construction: # cd cowsay/ # ls -l cowsay.SlackBuild -rwxr-xr-x 1 kikinovak users 1475 mai 27 2010 cowsay.SlackBuild* # ./cowsay.SlackBuild ... If everything goes well, the process spews out a package in /tmp, or more exactly in the $OUTPUT directory defined by the script: ... Slackware package /tmp/cowsay-3.03-noarch-1_SBo.tgz created. All that's left to do is install the package using installpkg: # installpkg /tmp/cowsay-3.03-noarch-1_SBo.tgz # cowsay Hi there ! ------------- < Hi there ! > ------------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || Managing package dependencies Some packages require the presence of other packages, either to build (build dependencies) and/or to run (runtime dependencies) correctly. In some cases, a required package can itself depend on one or more other packages, and so on. To take an example, let's have a look at the libgnomeprint page on SlackBuilds.org. The package description is followed by the following caveat: This requires: libgnomecups. Moreover, every script tarball contains an *.info file which states explicitly all the required package dependencies. If we look at the libgnomeprint.info file, we'll find a REQUIRES field: PRGNAM="libgnomeprint" VERSION="2.18.8" HOMEPAGE="" ... REQUIRES="libgnomecups" ----> package dependency ... The REQUIRES field was introduced with Slackware 14.0. This simply means that before we build the libgnomeprint package, we have to build and install the libgnomecups package (a small dependency-chasing sketch is shown after the Sources note below). Besides strictly required dependencies, a package can also have some optional dependencies to offer some extra functionality. As an example, the Leafpad text editor can be built against the optional libgnomeprint and libgnomeprintui dependencies. WORK IN PROGRESS Sources - Originally written by Niki Kovacs
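To illustrate the dependency chasing described in the "Managing package dependencies" section above, here is a small Python sketch that reads the REQUIRES field of SlackBuilds.org *.info files and prints a build order, dependencies first. It is only an illustration under assumptions: it expects the *.info files to have been downloaded into a local directory, one per package, and it is not part of slackpkg or of the official SlackBuilds.org tooling.

import os
import re

INFO_DIR = "/root/source/infofiles"  # assumption: downloaded *.info files live here

def read_requires(pkg):
    # Return the packages named in the REQUIRES field of <pkg>.info.
    path = os.path.join(INFO_DIR, pkg + ".info")
    with open(path) as f:
        text = f.read()
    match = re.search(r'REQUIRES="([^"]*)"', text)
    if not match:
        return []
    # %README% means "see the README for optional dependencies"; skip it here.
    return [dep for dep in match.group(1).split() if dep != "%README%"]

def build_order(pkg, resolved=None, seen=None):
    # Depth-first walk of the REQUIRES chains; dependencies come first.
    resolved = [] if resolved is None else resolved
    seen = set() if seen is None else seen
    if pkg in seen:
        return resolved
    seen.add(pkg)
    for dep in read_requires(pkg):
        build_order(dep, resolved, seen)
    resolved.append(pkg)
    return resolved

if __name__ == "__main__":
    print(" -> ".join(build_order("libgnomeprint")))  # e.g. libgnomecups -> libgnomeprint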
http://docs.slackware.com/slackware:package_management_hands_on?rev=1392896563&amp;mddo=cite
2019-04-18T12:58:06
CC-MAIN-2019-18
1555578517639.17
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Trend Micro Deep Security on the AWS Cloud: Quick Start Reference Deployment Deployment Guide Trend Micro Software Development Team AWS Quick Start Reference Team June 2015 (last update: March 2018) Trend Micro Deep Security is a host-based security product that provides Anti-Malware, Host Firewall, Intrusion Prevention, File Integrity Monitoring, Log Inspection, Web Application Firewalling, and Content Filtering modules in a single agent running in the guest operating system. This Quick Start reference deployment guide describes how to deploy Trend Micro Deep Security on the Amazon Web Services (AWS) Cloud. It contains links to AWS CloudFormation templates that automate this deployment, as well as additional information. This guide covers how to deploy Trend Micro Deep Security using these templates; it does not cover other aspects of administering Deep Security. For information about administering Deep Security, see the Trend Micro Deep Security Help Center.
https://docs.aws.amazon.com/quickstart/latest/deep-security/welcome.html
2019-04-18T12:54:34
CC-MAIN-2019-18
1555578517639.17
[]
docs.aws.amazon.com
After you, the network administrator, install the App Visibility components, perform the following configuration procedures: - Changing the App Visibility database password after installation - Importing a KeyStore file or replacing the certificate - Configuring network settings after the App Visibility server installation - Configuring App Visibility agents for Java after installation Where to go from here After you configure the App Visibility system, configure application discovery and configure event thresholds (SLAs) for automatically discovered applications. For synthetic applications, configure synthetic transactions. Related topics Adding and editing components Viewing App Visibility agent status and properties Installing App Visibility components Uninstalling the App Visibility server
https://docs.bmc.com/docs/display/TSPS101/Configuring+App+Visibility+after+installation
2019-04-18T13:31:48
CC-MAIN-2019-18
1555578517639.17
[]
docs.bmc.com
An Act to amend 11.12 (2), 11.16 (2), 11.16 (3), 11.26 (1) (a), 11.26 (2) (a), 11.26 (9), 11.31 (1) (d), 11.60 (4) and 11.61 (2); and to create 8.35 (4) (b), 11.26 (1) (am), 11.26 (2) (am), 11.26 (13), 11.501 to 11.522, 20.511 (1) (r), 20.585 (1) (q), 20.585 (1) (r), 20.855 (4) (ba), 20.855 (4) (bb), 25.17 (1) (cm), 25.421 and 71.10 (3) of the statutes; Relating to: public financing of campaigns for the office of justice of the supreme court, making appropriations, and providing penalties. (FE)
https://docs.legis.wisconsin.gov/2013/proposals/ab543
2019-04-18T12:15:08
CC-MAIN-2019-18
1555578517639.17
[]
docs.legis.wisconsin.gov
The exception type thrown when a claims challenge error occurs during token acquisition. Error code returned as a property in AdalException The exception type thrown when an error occurs during token acquisition. Helper class to get ADAL EventSource The exception type thrown when the server returns an error. It's required to look at the internal details of the exception for more information. The exception type thrown when a token cannot be acquired silently. The exception type thrown when the user returned by the service does not match the user in the request. The AuthenticationContext class retrieves authentication tokens from Azure Active Directory and ADFS services. Extension class to support username/password flow. Contains authentication parameters based on unauthorized response from resource server. Contains the results of one token acquisition operation. Credential type containing an assertion of type "urn:ietf:params:oauth:token-type:jwt". Credential containing the certificate used to create a client assertion. Credential including client id and secret. This class represents the response from the service when requesting device code. This class is responsible for managing the callback state and its execution. Additional parameters used in acquiring user's authorization This class allows passing the client secret as a SecureString to the API. Token cache class used by AuthenticationContext to store access and refresh tokens. Token cache item Contains parameters used by the ADAL call accessing the cache. Credential type containing an assertion representing user credential. Credential used for integrated authentication on domain-joined machines. Contains identifier for a user. Contains information of a single user. This information is used for token cache lookup. Also if created with userId, userId is sent to the service when login_hint is accepted. Credential used for username/password authentication. Obsolete Callback for capturing ADAL logs to custom logging schemes. Will be called only if LogCallback delegate is not set and only for messages with no Pii Interface for implementing certificate based operations Empty interface implemented in each supported platform. Interface to allow for client secret to be passed in as a SecureString ADAL Log Levels Indicates whether AcquireToken should automatically prompt only if necessary or whether it should prompt regardless of whether there is a cached token. Indicates the type of UserIdentifier Callback delegate that allows the developer to consume logs and handle them in a custom manner. Notification for certain token cache interactions during token acquisition.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.identitymodel.clients.activedirectory?view=azure-dotnet
2019-04-18T12:30:38
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Event execution pipeline Applies To: Dynamics 365 (online), Dynamics 365 (on-premises), Dynamics CRM 2016, Dynamics CRM Online The Microsoft Dynamics 365 event processing subsystem executes plug-ins based on a message pipeline execution model. A user action in the Microsoft Dynamics 365 Web application, or an SDK method call from an external application, results in a message that is processed through this pipeline. The following diagram illustrates the overall architecture of the Microsoft Dynamics 365 platform with respect to both synchronous and asynchronous event processing (figure: Synchronous and Asynchronous Event Processing). Important Regardless of whether a plug-in executes synchronously or asynchronously, there's a two-minute time limit imposed on the execution of a (message) request. If the execution of your plug-in logic exceeds the time limit, a System.TimeoutException is thrown. If a plug-in needs more processing time than two minutes, consider using a workflow or other background process to accomplish the intended task. This two-minute time limit applies only to plug-ins registered to execute under partial trust, also known as the sandbox. More information: Plug-in isolation, trusts, and statistics
https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/gg327941%28v%3Dcrm.8%29
2019-04-18T12:40:57
CC-MAIN-2019-18
1555578517639.17
[array(['images/gg327941.0105d9d4-bfb2-4ae3-a462-0510373b2768%28crm.8%29.jpeg', 'Event processing architecture Event processing architecture'], dtype=object) ]
docs.microsoft.com
Configure Packet Buffer Protection You configure Packet Buffer Protection settings globally and then apply them per ingress zone. When the firewall detects high buffer utilization, the firewall only monitors and takes action against sessions from zones with packet buffer protection enabled. Therefore, if the abusive session is from a zone without packet buffer protection, the high packet buffer utilization continues. Packet buffer protection can be applied to a zone but it is not active until the global settings are configured and enabled. - Configure the global session thresholds. - Select Device > Setup > Session. - Edit the Session Settings. - Select the Packet Buffer Protection check box to enable and configure the packet buffer protection thresholds. - Enter a value for each threshold and timer to define the packet buffer protection behavior. - Alert (%)—When packet buffer utilization exceeds this threshold for more than 10 seconds, the firewall creates a log event every minute. The firewall generates log events when packet buffer protection is enabled globally. The default threshold is 50% and the range is 0% to 99%. If the value is 0%, the firewall does not create a log event. - Activate (%)—When packet buffer utilization exceeds this threshold, the firewall applies RED to abusive sessions. The default threshold is 50% and the range is 0% to 99%. If the value is 0%, the firewall does not apply RED. The firewall records alert events in the System log and events for dropped traffic, discarded sessions, and blocked IP addresses in the Threat log. - Block Hold Time (sec)—The amount of time a RED-mitigated session is allowed to continue before the firewall discards it. By default, the block hold time is 60 seconds. The range is 0 to 65,535 seconds. If the value is 0, the firewall does not discard sessions based on packet buffer protection. - Block Duration (sec)—This setting defines how long a session remains discarded or an IP address remains blocked. The default is 3,600 seconds with a range of 1 second to 15,999,999 seconds. - Click OK. - Commit your changes. - Enable packet buffer protection on an ingress zone. - Select Network > Zones. - Choose an ingress zone and click on its name. - Select the Enable Packet Buffer Protection check box in the Zone Protection section. - Click OK. - Commit your changes. Related Documentation: Session Settings, Configure Session Settings, Packet Buffer Protection, Deploy DoS and Zone Protection Using Best Practices, How Do the Zone Defense Tools Work?, Zone Defense Tools, Custom PAN-OS Metrics Published for Monitoring, Building Blocks of Security Zones, DoS Protection Against Flooding of New Sessions.
https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/zone-protection-and-dos-protection/configure-zone-protection-to-increase-network-security/configure-packet-buffer-protection.html
2019-04-18T13:15:39
CC-MAIN-2019-18
1555578517639.17
[]
docs.paloaltonetworks.com
Improved contextual search. Search results from AppSource We have improved the Tell me (Alt+Q) feature by allowing more flexible terms and surfacing results for partner solutions on AppSource. This allows users to seek help and easily extend Business Central with the many solutions that are available from the partner community. Additional search terms New users often use different business terms than those used to name the related entities in Business Central. For example, they might use "product" instead of "item," or "client" instead of "customer." Developers can now add alternate search terms to pages and reports to make it easier for users to find what they are looking for. In the AdditionalSearchTermsML property on pages and reports, developers can add company-specific terms that users can then enter in the Tell me box to find the page or report in question. Business Central is published with around 200 such alternate search terms for selected pages and reports, such as "product" to find the Items page and "kit" to find the Assembly BOM page.
https://docs.microsoft.com/en-us/business-applications-release-notes/April19/dynamics365-business-central/tell-me-more
2019-04-18T12:47:35
CC-MAIN-2019-18
1555578517639.17
[array(['media/search_commission.png', 'Search results now also include results from AppSource search Screenshot of search results from AppSource'], dtype=object) ]
docs.microsoft.com
YieldProcessor function Signals to the processor to give resources to threads that are waiting for them. This macro is only effective on processors that support technology allowing multiple threads running on a single processor, such as Intel's Hyperthreading technology. Syntax void YieldProcessor( ); Parameters This function has no parameters. Return Value This function does not return a value. Remarks This macro can be called on all processor platforms where Windows is supported, but it has no effect on some platforms. The definition varies from platform to platform. The following are some definitions of this macro in Winnt.h: #define YieldProcessor() __asm { rep nop } #define YieldProcessor _mm_pause #define YieldProcessor __yield
https://docs.microsoft.com/en-us/windows/desktop/api/winnt/nf-winnt-yieldprocessor
2019-04-18T12:20:40
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Integrations Apptimize allows iOS and Android teams to make real-time updates to the native app experience and make data-driven product decisions. Teams can A/B test user flows, control feature rollouts, make instant UI changes, and more. A/B Testing In order to enable mParticle's integration with Apptimize, you will need your App Key, which can be found on your Apptimize settings page. mParticle's Apptimize integration requires that you add the Apptimize kit to your iOS or Android app; the mParticle SDK will then initialize and automatically map mParticle method calls directly onto Apptimize method calls. This approach means that every feature of the Apptimize SDKs is supported, as if the app had integrated Apptimize directly. The source code for each kit is available if you would like to learn exactly how the method mapping occurs: Add the Apptimize Kit to your iOS or Android app. For the Android kit, you must also add Apptimize's Maven repository to your build.gradle. See the Cocoapods and Gradle examples below, and reference the Apple SDK and Android SDK GitHub pages to read more about kits. //Sample Podfile target '<Your Target>' do pod 'mParticle-Apptimize', '~> 6' end //Sample build.gradle repositories { maven { url '' } } dependencies { compile ('com.mparticle:android-apptimize-kit:4.+') }
https://docs.mparticle.com/integrations/apptimize/
2019-04-18T13:02:59
CC-MAIN-2019-18
1555578517639.17
[]
docs.mparticle.com
You can use a Simple Network Management Protocol (SNMP) manager to monitor event-driven alerts and operational statistics for the firewall, Panorama, or WF-500 appliance and for the traffic they process. The statistics and traps can help you identify resource limitations, system changes or failures, and malware attacks. You configure alerts by forwarding log data as traps, and enable the delivery of statistics in response to GET messages (requests) from your SNMP manager. Each trap and statistic has an object identifier (OID). Related OIDs are organized hierarchically within the Management Information Bases (MIBs) that you load into the SNMP manager to enable monitoring. The firewall, Panorama, and WF-500 appliance support SNMP Version 2c and Version 3. Decide which to use based on the version that other devices in your network support and on your network security requirements. SNMPv3 is more secure and enables more granular access control for system statistics than SNMPv2c. The following table summarizes the security features of each version. You select the version and configure the security features when you Monitor Statistics Using SNMP and Forward Traps to an SNMP Manager.
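For example, once SNMP access has been configured on the device, an SNMP manager (or a small script standing in for one) can poll statistics with GET requests. The following Python sketch uses the pysnmp library with SNMPv2c; the host address and community string are placeholders, and the generic sysDescr OID is used only as a connectivity check. Firewall-specific statistics come from OIDs defined in the PAN-OS MIBs you load into your SNMP manager, which are not shown here.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

# Placeholders: replace with your device's management IP and the SNMP
# community string configured on it.
FIREWALL_IP = "192.0.2.1"
COMMUNITY = "public"

# Generic system description OID (SNMPv2-MIB::sysDescr.0). Swap in OIDs from
# the PAN-OS MIBs to read device-specific counters.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=1),  # mpModel=1 selects SNMPv2c
        UdpTransportTarget((FIREWALL_IP, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print("SNMP error:", error_indication)
elif error_status:
    print("SNMP error status:", error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")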
https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/monitoring/snmp-support.html
2019-04-18T12:51:18
CC-MAIN-2019-18
1555578517639.17
[array(['/etc/framemaker/pan-os/7-1/pan-os-admin/pan-os-admin-408.gif', None], dtype=object) ]
docs.paloaltonetworks.com
In PPaaS, the backend members, meaning the VM instances in a Virtual Machine (VM) setup and the Docker instances in a Kubernetes setup, are fronted by a Load Balancer. The proxyPort property is used to define the port of the Load Balancer. When the Load Balancer receives traffic, it will route the traffic to the members (worker nodes) in the respective clusters, based on their resource availability. PPaaS uses a Proxy Service for Kubernetes as there are different service port types with different port ranges. Therefore, when using Kubernetes, you need to set the proxyPort to zero in the Cartridge definition and define the Kubernetes proxy service port range as 30000 - 32767 using the portRange property in the Kubernetes cluster definition. Follow the instructions below to access the WSO2 service: The URL to access a service depends on the port mapping (VM) and port range (Docker) that you defined in the cartridge definition and the Kubernetes cluster definition respectively. When creating the dependent artifacts that are needed to deploy an application, you need to define port mapping in the cartridge definition JSON for each port that is being used with the WSO2 product cartridge. The following examples illustrate how to set unique proxy ports for each port that is used with the cartridge. "portMapping": [ { "name": "mgt-http", "protocol": "http", "port": 9763, "proxyPort": 80 }, { "name": "mgt-https", "protocol": "https", "port": 9443, "proxyPort": 443 } ], "portMapping": [ { "name": "mgt-http", "protocol": "http", "port": 9763, "proxyPort": 0, "kubernetesPortType": "NodePort" }, { "name": "mgt-https", "protocol": "https", "port": 9443, "proxyPort": 0, "kubernetesPortType": "NodePort" } ], kubernetesPortType - Depending on the service type, the Kubernetes port that gets created may not be exposed externally. This is useful if you don't want to expose a service externally, but need to maintain internal communication among the Docker containers. Access URLs are generated only for the NodePort service type. "portRange": { "upper": "32767", "lower": "30000" }, Identify the Load Balancer IP and the hostname of each of the clusters that are available in the deployed application. For more information, see Getting the Runtime Topology of an Application. Map the cluster hostname to one of the Load Balancer IPs. Each Load Balancer IP refers to the IP of a node. Open the /etc/hosts file. Example: If you are using Vim, which is a text editor, you can open the file in the terminal as follows: vim /etc/hosts Map all the hostnames against the available LB IPs in the /etc/hosts file and save the file. <LB_IP> <HOSTNAME> Example: 172.17.8.103 wso2as-521-application.mgt.as.wso2.org Each LB IP can have more than one hostname mapping. However, these mappings need to be defined separately in the /etc/hosts file. Optionally, add domain mappings if required. You can add domain mappings using the CLI tool or REST API as shown below: As the signup process takes place automatically when a single-tenant application is deployed, domain mappings can be added straight after the application is deployed. However, if a domain mapping is being added to a multi-tenant application, ensure that you first carry out the application sign-up process after the application is deployed and before adding the domain mappings. Adding domain mappings via the CLI Example: Add the domain mappings defined in the <TEST_PATH>/domainmappings.json file to the application with the ID wso2am-190-application: add-domain-mappings wso2am-190-application -p <TEST_PATH>/domainmappings.json Sample output Adding domain mappings via the REST API Example: Add the domain mappings in the <TEST_PATH>/domainmappings.json file to the application with the ID wso2am-190-application: cd <TEST_PATH> curl -X POST -H "Content-Type: application/json" -d @'domain-mappings.json' -k -v -u admin:admin Sample output You will come across the following HTTP status codes while adding an application: Sample domain mapping JSON { "domainMappings": [ { "cartridgeAlias": "tomcat", "domainName": "abc.com", "contextPath": "/abc/app" } ] } For more information, see Working with Domain Mappings. Use the following URL format to access the WSO2 service (e.g., the ESB service): http://<INSTANCE_HOSTNAME>:<LB_PROXY_PORT>/<CONTEXT_PATH> <INSTANCE_HOSTNAME> - The hostname of the cluster. <LB_PROXY_PORT> - The LB proxy port to which the port was mapped, as explained in the Prerequisites. Example: http://<INSTANCE_HOSTNAME>:<PROXY_SERVICE_PORT>/<CONTEXT_PATH> <INSTANCE_HOSTNAME> - The hostname of the cluster. <PROXY_SERVICE_PORT> - When using Kubernetes, you need to define the Kubernetes proxy service port range as 30000 - 32767 in the Kubernetes cluster definition, as there are different service port types with different port ranges in Kubernetes. Therefore, when using Kubernetes, the first proxy service that gets created will be assigned port 30000, and the subsequent proxy services that get created will be assigned port values incrementally. Port ranges are not applicable when using PPaaS on Virtual Machines. Example: Currently, it is not possible to query auto-generated Kubernetes proxy service ports via the Stratos API. However, they can be found in the PPaaS server log.
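As a quick illustration of the URL format above, the following Python sketch simply assembles the access URL and issues an HTTP request. The hostname, port, and context path values are placeholder assumptions, not values generated by PPaaS; substitute the hostname you mapped in /etc/hosts and the NodePort (or LB proxy port) assigned to your cluster.

import requests

# Placeholder values -- substitute your own cluster hostname, assigned
# proxy/NodePort, and service context path.
instance_hostname = "wso2as-521-application.mgt.as.wso2.org"
proxy_service_port = 30000
context_path = "/carbon"

url = f"http://{instance_hostname}:{proxy_service_port}{context_path}"
response = requests.get(url, timeout=10)
print(response.status_code, url)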
https://docs.wso2.com/display/PP411/Accessing+an+Application+Instance+Deployed+in+PPaaS
2019-04-18T12:23:51
CC-MAIN-2019-18
1555578517639.17
[]
docs.wso2.com
Parameter Next up we have a quick API for representing parameters coming out of the OGC Abstraction specification. Much like with Record and RecordType we have a split between ParameterValue and ParameterDescriptor. Here is what ParameterValue looks like: And the associated descriptors: We have a default implementation in gt-referencing which we can use for a code example of both pieces working together: final DefaultParameterDescriptor RANGE = new DefaultParameterDescriptor("Range", 15.0, -30.0, +40.0, null); ParameterValue value = (ParameterValue) RANGE.createValue(); value.setValue( 2.0 ); Parameters are used in a few sections of the library, notably when working with imagery and referencing.
http://docs.geotools.org/latest/userguide/library/opengis/parameter.html
2019-04-18T12:37:20
CC-MAIN-2019-18
1555578517639.17
[]
docs.geotools.org
Windows Password Recovery Tool is one of the most reliable pcunlocker alternatives I recommended to you. If there's any kind of problem after that you haven't any kind of desire to reboot your device to launch knowledge as a result of Unlocker for Windows 10 could open up those information suddenly. It is also not available to reset password up to 14 characters and Windows identifies it as malware. But it doesn't work with Windows 8 and will cause a possible hard drive issues when resetting password. This is really annoying especially when you want to use the computer urgently. Thank you so very much for your help, and your program is fantastic, and I will tell the world about it for you. Numerous documents were utilized by multiple procedures that are being run in system history. Check the Import Hashes from local system radio button and click Next. Unlocker is one of the most useful cleaners ever to be created. It saves your time and makes you prepared to operating that record differently without unlocked you'll now not prepared to paints on those. In some instances we need to delete some records information nonetheless the document can't be deleted then can also be difficult disk are total as well as could not find out that record or could additionally be this record remain in the usage of various person after that to deal with this case this unlocker is layout which remove that document right away. Additionally, Using is as well easier as well as faster after that any type of 3rd event choice software application. To reset lost Windows password, you only need to create a password reset disk to help you get into the locked computer and perform the password resetting. It worked just like you said it would. Instantly unlock your system if you have forgotten Windows password or user account is locked out or disabled. Free download the pcunlocker full version and install it on a workable computer. According to the description on its site, Ophcrack doesn't support the latest Windows 8. If you don't want to try one by one and need to reset your forgotten Windows password immediately, is a straightforward choice for you. You just need to download this pcunlocker. Excellent results, easy to use and clear instructions. I have eliminated the password, and the computer is up and running again. Download Windows Password Unlocker Enterprise and install it on a normally accessible computer. Regain access to your locked computer without reinstalling the operating system. I just got back from vacation, and realized I needed to read something urgently for work the next day. However, my daughter or wife must have put a password on the computer, locking me out! Right-click on the username you need the password for. You have to broke the web link of file with history running process with killing that process. It runs on all Windows versions, including Windows 10. When the burning completes, insert the disk into the locked computer. Not only that is lightweight and it integrates into Windows shell, but it can also list and close programs that use a file that you want to delete.
http://reckon-docs.com.au/torrent-download/pc-unlocker-torrent-download.html
2019-04-18T13:16:01
CC-MAIN-2019-18
1555578517639.17
[]
reckon-docs.com.au
CloudWatch Events has the following limits: API requests - Up to 50 requests per second for all CloudWatch Events API operations except PutEvents. PutEvents is limited to 400 requests per second by default. Default event bus - There is no limit on the rate of events that can be received from AWS services or other AWS accounts. If you send custom events to your event bus using the PutEvents API, the PutEvents API limits apply. Any events that are sent on to the targets of the rules in your account count against your invocations limit. The policy size of the default event bus is limited to 10240 characters. This policy size increases each time you grant access to another account. You can see your current policy and its size by using the DescribeEventBus API. You can request a limit increase. For instructions, see AWS Service Limits. Event pattern - 2048 characters maximum. Invocations - An invocation is an event matching a rule and being sent on to the rule's targets. The limit is 750 per second (after 750 invocations, the invocations are throttled; that is, they still happen but they are delayed). If the invocation of a target fails due to a problem with the target service, account throttling, etc., new attempts are made for up to 24 hours for a specific invocation. If you are receiving events from another account, each of those events that matches a rule in your account and is sent on to the rule's targets counts against your account's limit of 750 invocations per second. You can request a limit increase. For instructions, see AWS Service Limits. ListRuleNamesByTarget, ListRules, ListTargetsByRule - Up to 100 results per page for requests. PutEvents - 10 entries per request and 400 requests per second. Each request can be up to 256 KB in size. PutTargets, RemoveTargets - 10 entries per request. Rules - 100 per region per account. You can request a limit increase. For instructions, see AWS Service Limits. Before requesting a limit increase, examine your rules. You may have multiple rules each matching to very specific events. Consider broadening their scope by using fewer identifiers in your Event Patterns in CloudWatch Events. In addition, a rule can invoke several targets each time it matches an event. Consider adding more targets to your rules. Targets - 5 per rule. A Systems Manager Run Command target supports 1 target key and 1 target value; Systems Manager Run Command does not currently support multiple target values.
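To stay inside the PutEvents limits above (10 entries per request, 256 KB per request, 400 requests per second by default), a client can send custom events in batches and check for partial failures. The following Python (boto3) sketch shows one way to do that; the event source, detail-type, and detail payloads are illustrative assumptions, not values required by CloudWatch Events.

import json
import time
import boto3

events = boto3.client("events")

def put_in_batches(detail_dicts, source="com.example.app", detail_type="example.change"):
    # Send custom events to the default event bus, 10 entries per PutEvents call.
    for start in range(0, len(detail_dicts), 10):  # PutEvents accepts at most 10 entries
        entries = [
            {
                "Source": source,           # illustrative source name
                "DetailType": detail_type,  # illustrative detail-type
                "Detail": json.dumps(detail),
            }
            for detail in detail_dicts[start:start + 10]
        ]
        response = events.put_events(Entries=entries)
        if response.get("FailedEntryCount", 0):
            # Inspect per-entry ErrorCode/ErrorMessage and retry as appropriate.
            failed = [e for e in response["Entries"] if "ErrorCode" in e]
            print("Failed entries:", failed)
        time.sleep(0.05)  # crude pacing to stay well under the request-rate limit

if __name__ == "__main__":
    put_in_batches([{"orderId": i, "status": "created"} for i in range(25)])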
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html
2019-04-18T12:55:28
CC-MAIN-2019-18
1555578517639.17
[]
docs.aws.amazon.com
The mParticle Unity package contains a class file named MParticle.cs, which is a Unity MonoBehaviour that exposes the mParticle API via MParticle.Instance. The package also contains the classes MParticleiOS and MParticleAndroid. These classes are used by the mParticle singleton to interact with the native iOS and Android libraries. You should never access those classes directly from your code. The plugin must be initialized with your mParticle workspace key and secret prior to use, for example by placing the following into your main script's Awake method: using UnityEngine; using mParticle; namespace MyProject { public class Example : MonoBehaviour { void Awake () { //use the correct workspace API key and secret for iOS and Android #if UNITY_ANDROID MParticle.Instance.Initialize("REPLACE ME", "REPLACE ME"); #elif UNITY_IPHONE MParticle.Instance.Initialize("REPLACE ME", "REPLACE ME"); #endif } } }
https://docs.mparticle.com/developers/sdk/unity/initialize-the-sdk/
2019-04-18T12:17:08
CC-MAIN-2019-18
1555578517639.17
[]
docs.mparticle.com
Reducing Permissions Using Service Last Accessed Data You can view a report about the last time that an IAM entity (user or role) attempted to access a service. This is known as service last accessed data. You can then use this information to refine your policies to allow access to only the services that the entities use. You can generate a report for each type of resource in IAM. In each case, the report covers allowed services for the given reporting period: User – View the last time that the user tried to access the service. Group – View information about the last time that a group member attempted to access the service. This report also includes the total number of members that attempted access. Role – View the last time that someone used the role in an attempt to access the service. Policy – View information about the last time that a user or role attempted to access the service. This report also includes the total number of entities that attempted access. You can use service last accessed data to identify unused and not recently used permissions in associated policies. You can then choose to remove permissions for unused services or reorganize users with similar usage patterns into a group. This helps improve account security. Knowing if and when an entity last exercised a permission can help you remove unnecessary permissions and tighten your IAM policies with less effort. To learn how to view service last accessed data using the AWS Management Console, AWS CLI, or AWS API, see Viewing Service Last Accessed Data. For example scenarios for using service last accessed data to make decisions about the permissions that you grant to your IAM entities, see Example Scenarios for Using Access Data. Things to Know Before you use service last accessed data from a report to change the permissions for an entity, review the following details about the data. Reporting period – Recent activity usually appears within 4 hours. IAM reports activity for the last 365 days, or less if your region began supporting this feature within the last year. For more information, see Regions Where Data Is Tracked. Authenticated entities – Your report includes data only for authenticated entities (users or roles) in your account. The report does not include data about unauthenticated attempts. It also does not include data for attempts made from other accounts. Policy types – Your report includes data for only services that are allowed by an entity's policy. These are policies attached to a role or attached to a user directly or through a group. Access allowed by other policy types is not included in your report. The excluded policy types include resource-based policies, access control lists, AWS Organizations policies, IAM permissions boundaries, and session policies. To learn how the different policy types are evaluated to allow or deny access, see Policy Evaluation Logic.
Permissions Required
To view the service last accessed data using the AWS Management Console, you must have a policy that includes the following actions:
iam:GenerateServiceLastAccessedDetails
iam:Get*
iam:List*
Note
These permissions allow a user to see the following:
- Which users, groups, or roles are attached to a managed policy
- Which services a user or role can access
- The last time they accessed the service
To view the service last accessed data using the AWS CLI or AWS API, you must also have permissions that match the operation you want to use:
iam:GenerateServiceLastAccessedDetails
iam:GetServiceLastAccessedDetails
iam:GetServiceLastAccessedDetailsWithEntities
iam:ListPoliciesGrantingServiceAccess
This example shows how you might create a policy that allows viewing service last accessed data and read-only access to all of IAM. This policy also grants the permissions necessary to complete this action on the console.
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": [
            "iam:GenerateServiceLastAccessedDetails",
            "iam:Get*",
            "iam:List*"
        ],
        "Resource": "*"
    }
}
Troubleshooting Entity Activity
If the AWS Management Console service last accessed data table is empty, or your AWS CLI or AWS API request returns an empty set of data or a null field, review the following examples:
- For a user, make sure that the user has at least one inline or managed policy attached, either directly or through group memberships.
- For a group, verify that the group has at least one inline or managed policy attached.
- For a group, the report returns only the service last accessed data for members that used the group's policies to access a service. To learn whether a member used other policies, review the service last accessed data for that user.
- For a role, verify that the role has at least one inline or managed policy attached.
- For an entity (user or role), review other policy types that might affect the permissions of that entity. These include resource-based policies, access control lists, AWS Organizations policies, IAM permissions boundaries, or session policies. For more information, see Policy Types or Evaluating Policies Within a Single Account.
- For a policy, make sure that the specified managed policy is attached to at least one user, group with members, or role.
When you make changes, wait at least 4 hours for activity to appear in your report. If you use the AWS CLI or AWS API, you must generate a new report to view the updated data.
Regions Where Data Is Tracked
AWS collects service last accessed data in most regions. Data is stored for a maximum of 365 days. When AWS adds additional regions, those regions are added to the following table, including the date that AWS started tracking data in each region:
If a region is not listed in the previous table, then that region does not yet provide service last accessed data.
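If you prefer to retrieve the same report programmatically rather than in the console, the API operations listed earlier (GenerateServiceLastAccessedDetails and GetServiceLastAccessedDetails) can be called from a script. The following is a minimal, hedged sketch using boto3 in Python; the account ID and role name are placeholders, not values from this page.

# Minimal sketch (not from the AWS documentation above): generate and read a
# service last accessed report for one IAM entity using boto3. The ARN below
# is a placeholder; any user, group, role, or policy ARN works.
import time
import boto3

iam = boto3.client("iam")

# Kick off report generation for a hypothetical role.
entity_arn = "arn:aws:iam::123456789012:role/ExampleRole"
job_id = iam.generate_service_last_accessed_details(Arn=entity_arn)["JobId"]

# Poll until the asynchronous job finishes.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Print the services the entity's policies allow and when each was last used.
for svc in details.get("ServicesLastAccessed", []):
    last_used = svc.get("LastAuthenticated", "not used in the reporting period")
    print(svc["ServiceNamespace"], "->", last_used)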
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html
2019-04-18T13:03:52
CC-MAIN-2019-18
1555578517639.17
[]
docs.aws.amazon.com
Selecting log fields
Updated: February 1, 2011
Applies To: Forefront Threat Management Gateway (TMG).
The following procedure describes how to select fields to log.
To specify fields to log
In the Forefront TMG Management console, in the tree, click the Logs & Reports node.
Related Topics
Concepts
Configuring Forefront TMG logs
https://docs.microsoft.com/en-us/previous-versions/tn-archive/bb794834(v%3Dtechnet.10)
2019-04-18T12:44:38
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
In-House Distribution for Xamarin.iOS Apps This document gives a brief overview of distribution of applications In-House, as a member of the Apple Enterprise Developer Program. Once your Xamarin.iOS app has been developed, the next step in the software development lifecycle is to distribute your app to users. Proprietary apps can be distributed In-House (previously called Enterprise) through the Apple Developer Enterprise Program, which offers the following benefits: - Your application does not need to be submitted for review by Apple. - There are no limits to the amount of devices onto which you can deploy an application - It is important to note that Apple makes it very clear that In-House applications are for internal use only. It is also important to note that the Enterprise Program: - Does not provide access to iTunes Connect for distribution or testing (including TestFlight). - The cost of membership is $299 per year. All apps still need to be signed by Apple. Testing your Application Testing your application is carried out by using Ad Hoc distribution. For more information about testing, follow the steps in the Ad-Hoc Distribution guide. Be aware that you can only test on up to a maximum of 100 devices. Getting Set Up for Distribution As with other Apple Developer Programs, under the Apple Developer Enterprise Program, only Team Admins and Agents can create Distribution Certificates and Provisioning Profiles. Apple Developer Enterprise Program certificates will last for three years, and provisioning profiles will expire after one year. It is important to note that expired certificates cannot be renewed, and instead, you will have to replace the expired certificate with a new one, as detailed below. Creating a Distribution Certificate Browse to. Alternatively, it is possible to request a Certificate via the Preferences dialog in Xcode. To do this, follow the steps below: Select your team, and click View Details: Next, click the Create button next to iOS Distribution Certificate: Next, click the plus (+) button and select iOS App Store: Creating a Distribution Provisioning Profile Creating an App ID As with any other Provisioning Profile you create, an App ID will be required to identify the App that you will be distributing to the user's device. If you haven't already created this, follow the steps below to create one: - In the Apple Developer Center browse to the Certificate, Identifiers and Profiles section. Select App IDs under Identifiers. - Click the + button and provide a Name which will identify it in the Portal. - The App prefix should be already set as your Team ID, and cannot be changed. Select either an Explicit or Wildcard App ID, and enter a Bundle ID in a reverse DNS format like: Explicit: com.[DomainName].[AppName] Wildcard:com.[DomainName].* - Select any App Services that your app requires. - Click the Continue button and following the on screen instructions to create the new App ID. Once you have the required components needed for creating a Distribution Profile, follow the steps below to create it: Return to the Apple Provisioning Portal and select Provisioning > Distribution: Click the + button and select the type of Distribution Profile that you want to create as In-House:. You may have to quit Visual Studio for Mac and have Xcode refresh it's list of available Signing Identities and Provisioning Profiles (by following the instructions in Requesting Signing Identities section) before a new Distribution Profile is available in Visual Studio for Mac. 
Distributing your App In-House
With the Apple Developer Enterprise Program, the licensee is the person responsible for distributing the application, and for adhering to the guidelines set by Apple. Your app can be distributed securely using a variety of different means, such as:
- Locally through iTunes
- MDM server
- An internal, secure web server
To distribute your app in any of these ways you must first create an IPA file, as explained in the next section.
Creating an IPA for In-House Distribution
This document gave a brief overview of distributing Xamarin.iOS applications In-House.
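As a supplement to the internal web server option listed above: in-house IPAs are typically installed over the air via an itms-services:// link that points at a manifest plist hosted next to the IPA. The sketch below is illustrative only and is not taken from this guide; the URLs, bundle identifier, and file names are placeholders.

# Hypothetical helper: write a minimal OTA distribution manifest for an
# in-house IPA hosted on an internal HTTPS server. All values are placeholders.
import plistlib

manifest = {
    "items": [{
        "assets": [{
            "kind": "software-package",
            # Must be an HTTPS URL reachable by the devices.
            "url": "https://apps.example.internal/MyApp.ipa",
        }],
        "metadata": {
            "bundle-identifier": "com.example.myapp",
            "bundle-version": "1.0.0",
            "kind": "software",
            "title": "My In-House App",
        },
    }]
}

with open("manifest.plist", "wb") as f:
    plistlib.dump(manifest, f)

# A web page would then link to the manifest like this:
# itms-services://?action=download-manifest&url=https://apps.example.internal/manifest.plist
print("wrote manifest.plist")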
https://docs.microsoft.com/en-us/xamarin/ios/deploy-test/app-distribution/in-house-distribution
2019-04-18T12:50:43
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Integrations Focusing on programmatic mobile performance advertising, Manage offers solutions for media buying, campaign planning, execution and optimization. In order to forward an mParticle audience to Manage you will need to work with your Manage Account Manager to get the Configuration Settings - API Key and Suppressions IDs. When forwarding audience data to Manage, mParticle will send the following identifiers where available:
https://docs.mparticle.com/integrations/manage/
2019-04-18T12:24:35
CC-MAIN-2019-18
1555578517639.17
[]
docs.mparticle.com
Multi-Server Install¶ It is possible to run multiple ZoneMinder servers and manage them from a single interface. To achieve this each zoneminder server is connected to a single shared database server and shares file storage for event data. Topology Design Notes¶ - Device symbols represent separate logical functions, not necessarily separate hardware. For example, the Database Server and a ZoneMinder Server, can reside on the same physical hardware. - Configure each ZoneMinder Server to use the same, remote Database Server (Green). - The Storage Server (Red) represents shared storage, accessible by all ZoneMinder Servers, mounted under each server’s events folder. - Create at least two networks for best performance. Dedicate a Storage LAN for communication with the Storage and Database Servers. Make use of multipath and jumbo frames if possible. Keep all other traffic off the Storage LAN! Dedicate the second LAN, called the Video LAN in the diagram, for all other traffic. New installs¶ - Follow the normal instructions for your distro for installing ZoneMinder onto all the ZoneMinder servers in the normal fashion. Only a single database will be needed either as standalone, or on one of the ZoneMinder Servers. - On each ZoneMinder server, edit zm.conf. Find the ZM_DB_HOST variable and set it to the name or ip address of your Database Server. Find the ZM_SERVER_HOST and enter a name for this ZoneMinder server. Use a name easily recognizable by you. This name is not used by ZoneMinder for dns or any other form of network conectivity. - Copy the file /usr/share/zoneminder/db/zm_create.sql from one of the ZoneMinder Servers to the machine targeted as the Database Server. - Install mysql/mariadb server onto the Database Server. - It is advised to run “mysql_secure_installation” to help secure the server. - Using the password for the root account set during the previous step, create the ZoneMinder database and configure a database account for ZoneMinder to use: mysql -u root -p < zm_create.sql mysql -uroot -p -e "grant all on zm.* to 'zmuser'@localhost identified by 'zmpass';" mysqladmin -u root -p reload The database account credentials, zmuser/zmpass, are arbitrary. Set them to anything that suits your environment. Note that these commands are just an example and might not be secure enough for your environment. - If you have chosen to change the ZoneMinder database account credentials to something other than zmuser/zmpass, you must now update zm.conf on each ZoneMinder Server. Change ZM_DB_USER and ZM_DB_PASS to the values you created in the previous step. - All ZoneMinders Servers must share a common events folder. This can be done in any manner supported by the underlying operating system. From the Storage Server, share/export a folder to be used for ZoneMinder events. - From each ZoneMinder Server, mount the shared events folder on the Storage Server to the events folder on the local ZoneMinder Server. NOTE: The location of this folder varies by distro. This folder is often found under “/var/lib/zoneminder/events” for RedHat based distros and “/var/cache/zoneminder/events” for Debain based distros. This folder is NOT a Symbolic Link! - Open your browser and point it to the web console on any of the ZoneMinder Servers (they will all be the same). Open Options, click the Servers tab,and populate this screen with all of your ZoneMinder Servers. Each server has a field for its name and its hostname. The name is what you used for ZM_SERVER_HOST in step 2. 
The hostname is the network name or ip address ZoneMinder should use. - When creating a new Monitor, remember to select the server the camera will be assigned to from the Server drop down box.
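To sanity-check step 2 across several machines, a small script can confirm that every ZoneMinder Server points at the same Database Server and has a unique server name. This is a hedged helper sketch, not part of ZoneMinder; the config path and keys follow the zm.conf format described above and may differ on your distro.

# Hypothetical helper: read zm.conf (KEY=value lines) and report the settings
# that must be consistent (ZM_DB_HOST) or unique (ZM_SERVER_HOST) per server.
# Run it on each ZoneMinder Server; adjust CONF_PATH for your distro.
CONF_PATH = "/etc/zm/zm.conf"

def read_zm_conf(path):
    settings = {}
    with open(path) as conf:
        for line in conf:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

if __name__ == "__main__":
    conf = read_zm_conf(CONF_PATH)
    for key in ("ZM_DB_HOST", "ZM_DB_USER", "ZM_SERVER_HOST"):
        print(f"{key} = {conf.get(key, '<missing>')}")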
https://zoneminder.readthedocs.io/en/stable/installationguide/multiserver.html
2019-04-18T13:17:04
CC-MAIN-2019-18
1555578517639.17
[array(['../_images/zm-multiserver.png', '../_images/zm-multiserver.png'], dtype=object) ]
zoneminder.readthedocs.io
Troubleshooting
DllNotFoundException
If you see a DllNotFoundException, make sure that you are running the 64-bit version of mono. On macOS, mono can default to 32-bit, so please either use the --arch=64 flag by passing it as the first argument, e.g. mono --arch=64 CsharpWorkerName.exe, or run mono64 directly, to ensure that SpatialOS SDK native libraries are loaded correctly. This flag is only valid for macOS versions of Mono.
Xbuild bug
The generated build scripts use wildcards to specify which generated code sources should be compiled. xbuild (the Mono build tool) reports warnings about source files being included multiple times if they use wildcards, which is a known bug in the compiler. If you see such warnings during your build, you can safely ignore them.
Assembly loading bug
The mechanism to resolve a component metaclass to the associated `ComponentId` (`ComponentDatabase.MetaclassToId`) can fail. If it does, you will get an `ArgumentException`, with a stacktrace similar to the following:
System.ArgumentException:
This occurs if the assembly containing the generated code has not been loaded before your code runs, as assemblies are loaded lazily. You can work around this by manually loading the assembly, i.e. Assembly.Load("GeneratedCode"). GeneratedCode is the correct assembly name if you are using the generated build scripts, otherwise it may be different.
https://docs.improbable.io/reference/13.1/csharpsdk/troubleshooting
2019-04-18T12:18:05
CC-MAIN-2019-18
1555578517639.17
[]
docs.improbable.io
Tomcat This document is an integration guide for using Solace JMS as a JMS provider within Apache Tomcat. Tomcat is one of the first Apache projects, and one of the first servlet engines, dating back to 1999 as a reference servlet engine built by Sun Microsystems engineers. These integration steps apply to the Tomcat release version 7.0. If you have problems getting this integration to work, check the Solace community for answers to common issues.
https://docs.solace.com/Developer-Tools/Integration-Guides/Tomcat.htm
2019-04-18T12:43:17
CC-MAIN-2019-18
1555578517639.17
[]
docs.solace.com
Use three ingredients:
- a Mule Credentials Vault
- a global Secure Property Placeholder element
- a key to unlock the vault
How It Works
Imagine a situation in which a developer is tasked with designing a Mule application for the Human Resources department Mule will unlock (i.e. terminates the application), Mule discards the key. To configure Mule to demand that the user enter a key at runtime, the developer includes the following in the system properties (the mule-app.properties file in the src>main>app folder): (i.e. create a Credentials Vault), click the Encrypt button. Studio opens a Setup encryption information dialog, in which you:
- select the type of algorithm you wish to use to encrypt the value
- enter the key that Mule will require
In Studio, access the src>main>app folder, then double-click the mule-app.properties file to open it. Open your project’s mule-app.properties file...
See Also
Access the example application which demonstrates Mule Enterprise Security in action.
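The Mule-specific steps above are incomplete in this excerpt, but the underlying idea (store property values encrypted, and supply the decryption key only at runtime) can be illustrated generically. The following Python sketch is not Mule code and does not reproduce Mule's JCE-based encryption; it only demonstrates the pattern of a vault of encrypted properties unlocked by a runtime key.

# Generic illustration of the "credentials vault" idea, NOT Mule's implementation:
# property values are stored encrypted, and a key supplied at startup decrypts them.
from cryptography.fernet import Fernet

def encrypt_properties(plain: dict, key: bytes) -> dict:
    f = Fernet(key)
    return {k: f.encrypt(v.encode()).decode() for k, v in plain.items()}

def decrypt_properties(vault: dict, key: bytes) -> dict:
    f = Fernet(key)
    return {k: f.decrypt(v.encode()).decode() for k, v in vault.items()}

if __name__ == "__main__":
    runtime_key = Fernet.generate_key()          # in practice, entered at startup
    vault = encrypt_properties({"salesforce.password": "s3cret"}, runtime_key)
    print("stored in the properties file:", vault)   # safe to deploy
    print("visible to the app only:", decrypt_properties(vault, runtime_key))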
https://docs.mulesoft.com/mule-user-guide/v/3.3/mule-credentials-vault
2017-06-22T16:31:30
CC-MAIN-2017-26
1498128319636.73
[array(['./_images/one-one-one.png', 'one-one-one'], dtype=object) array(['./_images/one-one-many.png', 'one-one-many'], dtype=object) array(['./_images/multiple-one-one.png', 'multiple-one-one'], dtype=object)]
docs.mulesoft.com
RPC Example Server Code
This configuration is very similar to the one in the previous example. As a matter of fact, the only significant changes are the channel name and an out-of-the-box echo component to bounce the request back to the caller. There are no default values in the following table.
Outbound Endpoint
Allows a Mule service to send Ajax events over HTTP using Bayeux. JavaScript clients can register interest in these events using the Mule JavaScript client. No child elements.
https://docs.mulesoft.com/mule-user-guide/v/3.8/ajax-transport-reference
2017-06-22T16:32:18
CC-MAIN-2017-26
1498128319636.73
[]
docs.mulesoft.com
Skeletal Mesh Components Skeletal Mesh Components are used for anything that has complex animation data and uses a skeleton. Skeletal Mesh Components SkeletalMeshComponents are used to create an instance of a USkeletalMesh. The Skeletal Mesh (outward appearance) has a complex Skeleton (interconnected bones) inside which helps move the individual vertices of the Skeletal Mesh to match the current animation that is being played. This makes SkeletalMeshComponents ideal for things like characters, creatures, complex machinery, or anything that needs to deform or display complex motion. See Skeletal Meshes and Skeletal Mesh Actors for more information on working with Skeletal Meshes. Above, a SkeletalMeshComponent is used with a Character Blueprint to create a playable character. In addition to specifying the Skeletal Mesh asset to use, you can also define the Animation Mode for the mesh to use (either an Animation Blueprint or an Animation Asset ).
https://docs.unrealengine.com/latest/INT/Engine/Components/SkeletalMesh/index.html
2017-06-22T16:32:12
CC-MAIN-2017-26
1498128319636.73
[array(['./../../../../images/Engine/Components/SkeletalMesh/mesh_component.jpg', 'mesh_component.png'], dtype=object) ]
docs.unrealengine.com
Migrating Databases to Amazon Web Services (AWS) AWS Migration Tools You can use several AWS tools and services to migrate data from an external database to AWS. Depending on the type of database migration you are doing, you may find that the native migration tools for your database engine are also effective. AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS efficiently and securely. The source database can remain fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate your Oracle data to the most widely used commercial and open-source databases on AWS. AWS DMS migrates data, tables, and primary keys to the target database. All other database elements are not migrated. If you are migrating an Oracle database to Amazon Aurora, for example, you would want to use the AWS Schema Conversion Tool in conjunction with AWS DMS. The AWS Schema Conversion Tool (SCT). You can use this tool to convert your source Oracle databases to an Amazon Aurora, MySQL, or PostgreSQL target database on either Amazon RDS or EC2. It is important to understand that DMS and SCT are two different tools and serve different needs and they don’t interact with each other in the migration process. As per the DMS best practice, migration methodology for this tutorial is outlined as below: AWS DMS takes a minimalist approach and creates only those objects required to efficiently migrate the data for example tables with primary key – therefore, we will use DMS to load the tables with data without any foreign keys or constraints. (We can also use the SCT to generate the table scripts and create it on the target before performing the load via DMS). We will leverage SCT: To identify the issues, limitations and actions for the schema conversion To generate the target schema scripts including foreign key and constraints To convert code such as procedures and views from source to target and apply it on target The size and type of Oracle database migration you want to do greatly determines the tools you should use. For example, a heterogeneous migration, where you are migrating from an Oracle database to a different database engine on AWS, is best accomplished using AWS DMS. A homogeneous migration, where you are migrating from an Oracle database to an Oracle database on AWS, is best accomplished using native Oracle tools. Walkthroughs in this Guide Migrating an On-Premises Oracle Database to Amazon Aurora Using AWS Database Migration Service Migrating an Amazon RDS Oracle Database to Amazon Aurora Using AWS Database Migration Service Migrating MySQL-Compatible Databases to AWS Migrating a MySQL-Compatible Database to Amazon Aurora
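To make the DMS side of the methodology outlined earlier on this page concrete, the sketch below shows how a full-load replication task might be created with boto3. It is illustrative only: the ARNs, schema name, and selection rule are placeholders, and it assumes the source endpoint, target endpoint, and replication instance already exist.

# Hypothetical sketch: create a full-load DMS replication task with boto3.
# All ARNs and the selection rule are placeholders, not values from this guide.
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-hr-schema",
        "object-locator": {"schema-name": "HR", "table-name": "%"},
        "rule-action": "include",
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-full-load",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",          # "full-load-and-cdc" keeps changes syncing
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])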
http://docs.aws.amazon.com/dms/latest/sbs/CHAP_Introduction.html
2017-06-22T16:22:16
CC-MAIN-2017-26
1498128319636.73
[]
docs.aws.amazon.com
Docs @ Psychtoolbox Wiki : CedrusResponseBox Search PTB Function help: • psychtoolbox.org • page updates • CedrusResponseBox Psychtoolbox › ('Open', port [, lowbaudrate]); - Open a compatible response box which is connected to the given named serial 'port'. 'port'names differ accross operating systems. A typical port name for Windows would be 'COM2', whereas a typical port name on OS/X or Linux would be the name of a serial port device file, e.g., '/dev/cu.usbserial-FTDI125ZX9' on OS/X, or '/dev/ttyS0' on Linux. All names on OS/X are like '/dev/cu.XXXXX', where the XXXXX part depends on your serial port device, typically '/dev/cu.usbserial-XXXXX' for serial over USB devices with product name XXXXX. On Linux, all names are of pattern '/dev/ttySxx' for standard serial ports, e.g., '/dev/ttyS0' for the first serial port in the system, and of type '/dev/ttyUSBxx' for serial over USB devices, e.g., '/dev/ttyUSB0' for the first serial line emulated over the USB protocol. After the connection is established and some testing and initialization is, done, the function returns a device 'handle', a unique identifier to use for all other subfunctions. By default the commlink is opened at a baud transmission rate of 115200 Baud (All DIP switches on the box need to be in 'down' position!). If you specify the optional flag 'low 'ptb_cedrus_drivertype' parameter inside the code with the id of a supported driver (Matlab serial()). This option may go away in the future and is for debugging only! CedrusResponseBox (' ', handle); - connection to response box. The 'handle' becomes invalid after that command. CedrusResponseBox (' CloseAll '); - all connections to all response boxes. This is a convenience function for quick shutdown. dev = CedrusResponseBox (' GetDeviceInfo ', handle); - Return queried information about the device in a struct 'dev'. 'dev' contains (amongst other) the following fields: General information: dev.Name = Device name string. dev. VersionMajor and dev. VersionMinor = Major and Minor firmware revision. dev.productId = Type of device, e.g., 'Lumina', ' VoiceKey ' or 'RB response pad'. dev.modelId = Submodel of the device if the device is a RB response pad, e.g., 'RB-530', 'RB-730', 'RB-830' or 'RB (' ClearQueues ', handle); - Clear all queues, discard all pending data. [status = ] CedrusResponseBox (' Fl ' GetButtons ' or ' Wait, 'status', 'status' (' FlushEvents ', mybox); end ...to wait for the subject to release any buttons which might currently be down. evt = CedrusResponseBox (' Get 'evt' struct, if so. If no event is pending, it returns an empty 'evt', ie. isempty(evt) is true. 'ev., 'top' or 'left'. (' WaitButtons ', handle); - Queries and returns the same info as ' GetButtons ', but waits for events. If there isn't any event available, will wait until one becomes available. evt = CedrusResponseBox (' WaitButtonPress ', handle); - Like WaitButtons , but will wait until the subject /presses/ a key -- the signal that a key has been released is not acceptable -- Button release events are simply discarded. evt = CedrusResponseBox (' Get 'nSamples' allows to specify if multiple samples of PTB timer vs. the response boxes timer should be measured. If 'nSamples' is set to a value greater than one, a cell array with nSamples elements will be returned, each corresponding to one measurement. This allows, e.g., to check if PTBs timer and the boxes timer drift against each other. 
resetTime = CedrusResponseBox (' ResetR 'resetTime' PTB's best guess of when the reset was carried out -- essentially a GetSecs () timestamp of when the reset command was sent. Note that this automatically discards all pending events in the queue before performing the query! slope = CedrusResponseBox (' GetBoxTimerSlope ', handle); - Compute slope (drift) between computer clock and device clock. 'slope' ('Open') before calling this function, the more accurate the clock-drift estimate will be. roundtrip = CedrusResponseBox (' RoundTripTest ', handle); - Initiate 100 trials of the roundtrip test of the box. Data is echoed forth and back 100 times between PTB and the box, and the latency is measured (in seconds, with msecs resolution). The vector of all samples is returned in 'roundtrip' for evaluation and debugging. The measured latency is also used for delay correction for the ' GetBaseTimer ' subfunction. However, a roundtrip test is performed automatically when opening the response box connection, so this is rarely needed. Note that this automatically discards all pending events in the queue before performing the query! [currentMode] = CedrusResponseBox (' SetConnectorMode ', handle [, mode]); - Set or get mode of operation of external accessory connector: 'mode' can be any of the following text strings: ' GeneralPurpose ': Input/Output assignment of pins can be freely programmed via the ' DefineInputLinesAndLevels ' subcommand (see below), and the output lines only change if the ' SetOutputLineLevels ' command (see below) is used. The connector doesn't change state by itself. ' ReflectiveContinuous ': Line levels reflect button state: Line is active if button is pressed and goes inactive when the button is released again. ' ReflectiveSinglePulse ': A single pulse is sent to an output line if a button is pressed on the box. Nothing is sent on release. ' ReflectiveDoublePulse ': A single pulse is sent to an output line if a button is pressed on the box. Another pulse is sent on button release. If 'mode' is left out, the function queries and returns the current mode as return argument 'currentMode'. If mode is given, nothing is returned. CedrusResponseBox (' SetOutputLineLevels ', handle, outlevels); - Set accessory connector output lines to state specified in '1' aka active and the 4 lines with the highest numbers (lines 4,5,6,7) to '0' aka inactive. This corresponds to XiD command 'ah'. The command is only effective if connector is set to ' GeneralPurpose '. CedrusResponseBox (' DefineInputLinesAndLevels ', handle, inputlines, logiclevel, debouncetime); - Define which lines on the connector are inputs: 'inputlines' is a vector with the line numbers of the input lines. All other lines are designated as output lines, e.g., inputlines = [0, 2, 4] would set lines 0, 2 and 4 as inputs, remaining lines 1,3,5,6,7 as outputs. 'log 'debouncetime' must be the debounce time for the input lines in milliseconds. After an event on a input line, the box will ignore all further events on than input line for 'debouncetime' milliseconds. This corresponds to XiD commands 'a4', 'a50' and 'a51', as well as 'a6'. The command is only effective if connector is set to ' GeneralPurpose '. inputLines = CedrusResponseBox (' ReadInputLines ', handle); - Read current state of the connectors input lines: Returns an 8 element vector where each element corresponds to one input line and a 1 means active, 0 means inactive. This corresponds to XiD command 'ar'. 
Note that this automatically discards all pending events in the queue before performing the query! The command is only effective if connector is set to ' GeneralPurpose '.
Path
Psychtoolbox/PsychHardware/CedrusResponseBox.m
http://docs.psychtoolbox.org/CedrusResponseBox
2017-06-22T16:39:48
CC-MAIN-2017-26
1498128319636.73
[]
docs.psychtoolbox.org
Crate consist [−] [src] consist: An implementation of consistent hashing in Rust. The goal of consistent hashing is to partition entries in such a way that the addition or removal of buckets minimizes the number of items that must be shifted between buckets, i.e. it optimizes the rehashing stage that is usually needed for hash tables. The algorithm was originally put forth by David Karger et al. in their 1997 paper "Consistent Hashing and Random Trees". As of version 0.3.0, we use CRC64 with ECMA polynomial, so that updating the rust version does not change behavior.
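The crate documentation above describes the goal of consistent hashing rather than its API. As a language-neutral illustration of that goal (and explicitly not of consist's actual Rust interface), here is a small Python sketch of a hash ring: when a bucket is added or removed, only the keys that map to the affected arc of the ring move.

# Generic consistent-hashing sketch (NOT the consist crate's API): keys and
# buckets are hashed onto a ring, and each key is assigned to the next bucket
# clockwise, so adding or removing a bucket only remaps keys on one arc.
import bisect
import hashlib

class HashRing:
    def __init__(self, buckets=()):
        self._ring = []  # sorted list of (position, bucket)
        for b in buckets:
            self.add(b)

    @staticmethod
    def _pos(value: str) -> int:
        return int(hashlib.sha1(value.encode()).hexdigest(), 16)

    def add(self, bucket: str):
        bisect.insort(self._ring, (self._pos(bucket), bucket))

    def remove(self, bucket: str):
        self._ring.remove((self._pos(bucket), bucket))

    def get(self, key: str) -> str:
        index = bisect.bisect(self._ring, (self._pos(key), "")) % len(self._ring)
        return self._ring[index][1]

ring = HashRing(["server-a", "server-b", "server-c"])
before = {k: ring.get(k) for k in ("alpha", "beta", "gamma", "delta")}
ring.add("server-d")
after = {k: ring.get(k) for k in before}
print("keys that moved after adding a bucket:",
      [k for k in before if before[k] != after[k]])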
https://docs.rs/consist/0.3.2/consist/
2017-06-22T16:26:56
CC-MAIN-2017-26
1498128319636.73
[]
docs.rs
1.0.3 Release Notes¶
Channels 1.0.3 is a minor bugfix release, released on 2017/02/01.
Changes¶
- Database connections are no longer force-closed after each test is run.
- Channel sessions are not re-saved if they’re empty even if they’re marked as modified, allowing logout to work correctly.
- WebsocketDemultiplexer now correctly does sessions for the second/third/etc. connect and disconnect handlers.
- Request reading timeouts now correctly return 408 rather than erroring out.
- The rundelay delay server now only polls the database once per second, and this interval is configurable with the --sleep option.
http://channels.readthedocs.io/en/latest/releases/1.0.3.html
2017-06-22T16:22:29
CC-MAIN-2017-26
1498128319636.73
[]
channels.readthedocs.io
Table of Contents¶
pgRouting extends the PostGIS/PostgreSQL geospatial database to provide geospatial routing and other network analysis functionality. This is the manual for pgRouting 2.1.0 (b38118a).
- Dictionary of columns & Custom Query that is used in the routing algorithms.
- Performance Tips to improve your performance.
- Sample Data that is used in the examples of this manual.
- User’s Recipes List
For a more complete introduction on how to build a routing application, read the pgRouting Workshop.
- pgr_drivingDistance - Driving Distance
- pgr_kDijkstra - Multiple destination Shortest Path Dijkstra
- pgr_ksp - K-Shortest Path
- pgr_trsp - Turn Restriction Shortest Path (TRSP)
- pgr_tsp - Traveling Sales Person
Pre processing or post processing helping functions¶
Driving Distance post-processing
- pgr_alphaShape - Alpha shape computation
- pgr_pointsAsPolygon - Polygon around set of points
Experimental and Proposed functions¶
This section contains new experimental or proposed signatures for any of the following sections:
- topology functions
- routing functions
- vehicle routing functions
- pre / post processing helper functions
We are including them so that the pgRouting community can evaluate them before including them as an official function of pgRouting. Some of them are unsupported, like the GSoC functions.
Experimental functions: Proposed by Steve Woodbridge¶
- Convenience Functions
- pgr_pointToEdgeNode - convert a point geometry to a vertex_id based on the closest edge.
Experimental functions: by GSoC¶
The following functions are experimental:
- They may lack documentation.
- They were created by GSoC students.
- They are unsupported.
- pgr_vrpOneDepot - VRP One Depot
- pgr_vrppdtw - Pickup and Delivery problem
Proposed functions: Proposed by Zia Mohammed¶
About this proposal:
- Author: Zia Mohammed.
- Status: Needs a lot of testing. I am working on that.
- I did not add automated tests.
- Temporary name: pgr_labelGraph
- Need: I need feedback from the community.
- pgr_labelGraph - Analyze / label subgraphs within a network
Discontinued Functions¶
Developer¶
Warning
In V3.0 these functions are going to be discontinued. Use the already available underscored version instead.
Warning
Developer’s Functions documentation is going to be deleted from the pgRouting documentation in V3.0.
The following functions are used internally by the topology functions.
Indices and tables
http://docs.pgrouting.org/2.1/en/doc/index.html
2017-06-22T16:21:04
CC-MAIN-2017-26
1498128319636.73
[array(['../_images/ccbysa.png', 'Creative Commons Attribution-Share Alike 3.0 License'], dtype=object) ]
docs.pgrouting.org
Inline options¶
Luacheck supports setting some options directly in the checked files using inline configuration comments. These inline options have the highest priority, overwriting both config options and CLI options.
An inline configuration comment starts with a luacheck: label, possibly after some whitespace. The body of the comment should contain comma separated options, where an option invocation consists of its name plus space separated arguments. It can also contain notes enclosed in balanced parentheses, which are ignored. The following options are supported:
Options that take no arguments can be prefixed with no to invert their meaning. E.g. --luacheck: no unused args disables unused argument warnings.
The part of the file affected by an inline option depends on where it is placed. If there is any code on the line with the option, only that line is affected; otherwise, everything till the end of the current closure is. In particular, inline options at the top of the file affect all of it:
For fine-grained control over inline option visibility use luacheck: push and luacheck: pop directives:
Inline options can be completely disabled using the --no-inline CLI option or inline config option.
http://luacheck.readthedocs.io/en/stable/inline.html
2017-06-22T16:16:42
CC-MAIN-2017-26
1498128319636.73
[]
luacheck.readthedocs.io
Configuring SSH Access for PCF
To help troubleshoot applications hosted by a deployment, Pivotal Cloud Foundry (PCF) supports SSH access into running applications. This document describes how to configure a PCF deployment to allow SSH access to application instances, and how to configure load balancing for those application SSH sessions.
Elastic Runtime Configuration
This section describes how to configure Elastic Runtime to enable or disable deployment-wide SSH access to application instances. In addition to this deployment-wide configuration, Space Managers have SSH access control over their Space, and Space Developers have SSH access control over their Applications. For details about SSH access permissions, see the Application SSH Overview topic.
To configure Elastic Runtime SSH access for application instances:
- Open the Pivotal Elastic Runtime tile in Ops Manager.
- Under the Settings tab, select the Application Containers section.
- Enable or disable the Allow SSH access to app containers checkbox.
- Optionally, select Enable SSH when an app is created to enable SSH access for new apps by default in spaces that allow SSH. If you deselect this checkbox, developers can still enable SSH after pushing their apps by running cf enable-ssh APP-NAME.
SSH Load Balancer Configuration
If you use HAProxy as a load balancer and SSH access is enabled, SSH requests are load balanced by HAProxy. This configuration relies on the presence of the same Consul server cluster that Diego components use for service discovery. This configuration also works well for deployments where all traffic on the system domain and its subdomains is directed towards the HAProxy job, as is the case for a BOSH-Lite Cloud Foundry deployment on the default 192.0.2.34.xip.io domain.
For AWS deployments, where the infrastructure offers load-balancing as a service through ELBs, the deployment operator can provision an ELB to balance load across the SSH proxy instances. You should configure this ELB to listen to TCP traffic on the port given in app_ssh.port and to send it to port 2222. To register the SSH proxies with this ELB, add the ELB identifier to the elbs property in the cloud_properties hash of the Diego manifest access_zN resource pools. If you used the Spiff-based manifest-generation templates to produce the Diego manifest, specify these cloud_properties hashes in the iaas_settings.resource_pool_cloud_properties section of the iaas-settings.yml stub.
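As an illustration of the ELB listener requirement described above, the following boto3 sketch adds a TCP listener that forwards the public SSH port to port 2222 on the proxies. It is a hedged example only: the load balancer name is a placeholder, the public port is assumed to match app_ssh.port, and it presumes a classic ELB that already exists.

# Hypothetical sketch: add a TCP listener to an existing classic ELB so that
# traffic on the app SSH port is forwarded to port 2222 on the SSH proxies.
# The ELB name and public port are placeholders, not values from this guide.
import boto3

elb = boto3.client("elb")

elb.create_load_balancer_listeners(
    LoadBalancerName="pcf-ssh-proxy-elb",
    Listeners=[{
        "Protocol": "TCP",            # raw TCP, not HTTP, for SSH traffic
        "LoadBalancerPort": 2222,     # the port configured as app_ssh.port
        "InstanceProtocol": "TCP",
        "InstancePort": 2222,         # SSH proxies listen on 2222
    }],
)
print("listener created")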
https://docs.pivotal.io/pivotalcf/1-11/opsguide/config-ssh.html
2017-08-16T15:03:32
CC-MAIN-2017-34
1502886102307.32
[array(['images/er-config-app-containers.png', 'Er config app containers'], dtype=object) ]
docs.pivotal.io
I don't have permission to upload files to the root of my shared hosting space
From WebarchDocs
This is not an accident: the initial directory you access using FTP (for Ecohost servers) or SFTP (for Webarchitects and Ecodissident servers) is your home directory, often represented as ~/, and it is deliberately not writable by you. You will, however, find a ~/private directory where you can upload files that won't be accessible via the web. The DocumentRoot for your web site is under ~/web/ for Ecohost servers, ~/public_html for Ecodissident servers and ~/sites/default for Webarchitects servers; this is where you should upload files that you want to be accessed via the web.
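A short illustration of uploading into the correct DocumentRoot over SFTP follows. This is a generic, hedged sketch: the hostname, credentials, and file names are placeholders and not taken from this page; it simply shows uploading into the writable public_html directory rather than the read-only home directory.

# Hypothetical sketch: upload a file into the writable DocumentRoot over SFTP
# instead of the read-only home directory. Host, user, and paths are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("shared.example.org", username="site_user", password="secret")

sftp = client.open_sftp()
# Uploading to the home directory root would fail with a permission error;
# the DocumentRoot (e.g. public_html on an Ecodissident-style server) is writable.
sftp.put("index.html", "public_html/index.html")
sftp.close()
client.close()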
https://docs.webarch.net/wiki/I_don%27t_have_permission_to_upload_files_to_the_root_of_my_shared_hosting_space
2017-08-16T15:24:30
CC-MAIN-2017-34
1502886102307.32
[]
docs.webarch.net
City of Jade by Midi Z In the war-torn Kachin State, Myanmar, waves of poor workers flock to dig for jade, dreaming of getting rich overnight. The eldest brother of director Midi Z was among them. With this film, Midi tries to find out why his brother became a drug addict and abandoned his family. Moreover, the film depicts how people struggle for survival in the darkest corners of Myanmar. AWARDS & FESTIVALS 2016 Golden Horse Awards - Best Original Film Score, Best Documentary nomination 2016 Vienna International Film Festival 2016 Hong Kong International Film Festival 2016 Taipei Film Festival - Award for an Outstanding Artistic Contribution (Music) 2016 Berlin International Film Festival TECHNICAL SPECIFICATIONS Running Time: 98 mins Year of Completion: 2016 Director: Midi Z ABOUT DIRECTOR & PRODUCER Midi Z Born in Myanmar in 1982, Midi Z arrived in Taiwan at the age of 16. He studied design and art before obtaining a master’s degree from National Taiwan University of Technology and Science. In 2006, his graduation film, Paloma Blanca, was selected for several festivals such as Busan and Gothenburg. From 2011 to 2014, Midi made three narrative features, including Return to Burma, Poor Folk and Ice Poison; all were shot in less than ten days with a budget lower than US,000. In 2011, Return to Burma was nominated for Busan’s New Currents competition and Rotterdam’s Tiger Awards. In 2014, Ice Poison won Best International Film at Edinburgh Film Festival, Best Director at the Love and Peace Film Festival in Sweden, Best Director and Press Award at Taipei Film Festival, and was nominated for Best Director at Taipei Golden Horse Awards. Moreover, it represented Taiwan at the Oscars’ foreign language film category in the following year.
http://docs.tfi.org.tw/content/110
2017-09-19T15:09:52
CC-MAIN-2017-39
1505818685850.32
[array(['http://docs.tfi.org.tw/images/director/1490249883257180700.jpg', None], dtype=object) ]
docs.tfi.org.tw
Indicates whether this user is a player (playerId greater than 0) in the passed Room or not.
Namespace: Sfs2X.Entities
Assembly: SmartFox2X (in SmartFox2X.dll) Version: 1.7.3.0 (1.7.3)
Syntax
Parameters
- room - Type: Sfs2X.Entities..::..Room
The object representing the Room where to check if this user is a player.
Return Value
Type: Boolean
true if this user is a player in the passed Room.
Remarks
Non-Game Rooms always return false. If a user can join one Game Room at a time only, use the IsPlayer property.
http://docs2x.smartfoxserver.com/api-docs/csharp-doc/html/db6434e5-3a7b-4e25-1f26-7ee7fcdd2bab.htm
2017-09-19T15:16:02
CC-MAIN-2017-39
1505818685850.32
[]
docs2x.smartfoxserver.com
Letter #69 by LIN Hsin-i Letter ). AWARDS & FESTIVALS 2017 Visions du Réel - Media Library 2017 Busan International Short Film Festival - Landscape of Asian Shorts 2016 Women Make Waves Film Festival - Taiwan Competition: Excellence Award TECHNICAL SPECIFICATIONS Running Time: 19 mins Year of Completion: 2016 Director: LIN Hsin-i ABOUT DIRECTOR LIN Hsin-i
http://docs.tfi.org.tw/content/111
2017-09-19T15:22:38
CC-MAIN-2017-39
1505818685850.32
[]
docs.tfi.org.tw
Configuring OpenTripPlanner Base directory The OTP base directory defaults to /var/otp. Unless you tell OTP otherwise, all other configuration, input files and storage directories will be sought immediately beneath this one. This prefix follows UNIX conventions so it should work in Linux and Mac OSX environments, but it is inappropriate in Windows and where the user running OTP either cannot obtain permissions to /var or simply wishes to experiment within his or her home directory rather than deploy a system-wide server. In these cases one should use the basePath switch when starting up OTP to override the default. For example: --basePath /home/username/otp on a Linux system, --basePath /Users/username/otp in Mac OSX, or --basePath C:\Users\username\otp in Windows. Routers A single OTP instance can handle several regions independently. Each of these separate (but potentially geographically overlapping) services is called a router and is referred to by a short unique ID such as 'newyork' or 'paris'. Each router has its own subdirectory in a directory called 'graphs' directly under the OTP base directory, and each router's directory is always named after its router ID. Thus, by default the files for the router 'tokyo' will be located at /var/otp/graphs/tokyo. Here is an example directory layout for an OTP instance with two routers, one for New York City and one for Portland, Oregon: /var/otp ├── cache │ └── ned └── graphs ├── nyc │ ├── build-config.json │ ├── Graph.obj │ ├── long-island-rail-road_20140216_0114.zip │ ├── mta-new-york-city-transit_20130212_0419.zip │ ├── new-york-city.osm.pbf │ └── port-authority-of-new-york-new-jersey_20150217_0111.zip └── pdx ├── build-config.json ├── Graph.obj ├── gtfs.zip ├── portland_oregon.osm.pbf └── router-config.json You can see that each of these subdirectories contains one or more GTFS feeds (which are just zip files full of comma-separated tables), a PBF street map file, some JSON configuration files, and another file called Graph.obj. On startup, OTP scans router directories for input and configuration files, and can optionally store the resulting combined representation of the transportation network as Graph.obj in the same directory to avoid re-processing the data the next time it starts up. The cache directory is where OTP will store its local copies of resources fetched from the internet, such as US elevation tiles. System-wide vs. graph build vs. router configuration OTP is configured via JSON files. The file otp-config.json is placed in the OTP base directory and contains settings that affect the entire OTP instance. Each router within that instance is configured using two other JSON files placed alongside the input files (OSM, GTFS, elevation data etc.) in the router's directory. These router-level config files are named build-config.json and router-config.json. Each configuration option within each of these files is optional, as are all three of the files themselves. If any option or an entire file is missing, reasonable defaults will be applied. Some parts of the process that loads the street and transit network description are time consuming and memory-hungry. To avoid repeating these slow steps every time OTP starts up, we can trigger them manually whenever the input files change, saving the resulting transportation network description to disk. We call this prepared product a graph (following mathematical terminology), and refer to these "heavier" steps as graph building. They are controlled by build-config.json. 
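Before building a graph it can be handy to confirm that each router directory actually contains the expected inputs from the layout above. The following is a small, hedged helper sketch (not part of OTP); the base path is the default /var/otp and can be changed to match whatever you pass to --basePath.

# Hypothetical helper: list each router under <base>/graphs and report which
# expected inputs are present. Mirrors the directory layout described above.
import os

BASE_PATH = "/var/otp"          # override to match your --basePath

def inspect_routers(base=BASE_PATH):
    graphs = os.path.join(base, "graphs")
    for router_id in sorted(os.listdir(graphs)):
        router_dir = os.path.join(graphs, router_id)
        if not os.path.isdir(router_dir):
            continue
        files = os.listdir(router_dir)
        has_osm = any(f.endswith(".pbf") for f in files)
        has_gtfs = any(f.endswith(".zip") for f in files)
        has_graph = "Graph.obj" in files
        has_build_cfg = "build-config.json" in files
        print(f"{router_id}: osm={has_osm} gtfs={has_gtfs} "
              f"graph={has_graph} build-config={has_build_cfg}")

if __name__ == "__main__":
    inspect_routers()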
There are many other details of OTP operation that can be modified without requiring the potentially long operation of rebuilding the graph. These run-time configuration options are found in router-config.json. Graph build configuration Reaching a subway platform The boarding locations for some modes of transport such as subways and airplanes can be slow to reach from the street. When planning a trip, we need to allow additional time to reach these locations to properly inform the passenger. For example, this helps avoid suggesting short bus rides between two subway rides as a way to improve travel time. You can specify how long it takes to reach a subway platform // build-config.json { "subwayAccessTime": 2.5 } Stops in GTFS do not necessarily serve a single transit mode, but in practice this is usually the case. This additional access time will be added to any stop that is visited by trips on subway routes (GTFS route_type = 1). This setting does not generalize well to airplanes because you often need much longer to check in to a flight (2-3 hours for international flights) than to alight and exit the airport (perhaps 1 hour). Therefore there is currently no per-mode access time, it is subway-specific. Transferring within stations Subway systems tend to exist in their own layer of the city separate from the surface, though there are exceptions where tracks lie right below the street and transfers happen via the surface. In systems where the subway is quite deep and transfers happen via tunnels, the time required for an in-station transfer is often less than that for a surface transfer. A proposal was made to provide detailed station pathways in GTFS but it is not in common use. One way to resolve this problem is by ensuring that the GTFS feed codes each platform as a separate stop, then micro-mapping stations in OSM. When OSM data contains a detailed description of walkways, stairs, and platforms within a station, GTFS stops can be linked to the nearest platform and transfers will happen via the OSM ways, which should yield very realistic transfer time expectations. This works particularly well in above-ground train stations where the layering of non-intersecting ways is less prevalent. Here's an example in the Netherlands: When such micro-mapping data is not available, we need to rely on information from GTFS including how stops are grouped into stations and a table of transfer timings where available. During the graph build, OTP can create preferential connections between each pair of stops in the same station to favor in-station transfers: // build-config.json { "stationTransfers": true } Note that this method is at odds with micro-mapping and might make some transfers artificially short. Elevation data OpenTripPlanner can "drape" the OSM street network over a digital elevation model (DEM). This allows OTP to draw an elevation profile for the on-street portion of itineraries, and helps provide better routing for bicyclists. It even helps avoid hills for walking itineraries. DEMs are usually supplied as rasters (regular grids of numbers) stored in image formats such as GeoTIFF. U.S. National Elevation Dataset In the United States, a high resolution National Elevation Dataset is available for the entire territory. The US Geological Survey (USGS) delivers this dataset in tiles via a somewhat awkward heavyweight web-based GIS which generates and emails you download links. 
OpenTripPlanner contains a module which will automatically contact this service and download the proper tiles to completely cover your transit and street network area. This process is rather slow (download is around 1.5 hours, then setting elevation for streets takes about 5 minutes for the Portland, Oregon region), but once the tiles are downloaded OTP will keep them in local cache for the next graph build operation. To auto-download NED tiles when building your graph, add the following line to build-config.json in your router directory: // build-config.json { "fetchElevationUS": true } You may also want to add the --cache <directory> command line parameter to specify a custom NED tile cache location. NED downloads take quite a long time and slow down the graph building process. The USGS will also deliver the whole dataset in bulk if you send them a hard drive. OpenTripPlanner contains another module that will then automatically fetch data in this format from an Amazon S3 copy of your bulk data. You can configure it as follows in build-config.json: { "elevationBucket" : { "accessKey" : "your-aws-access-key", "secretKey" : "corresponding-aws-secret-key", "bucketName" : "ned13" } } Other raster elevation data For other parts of the world you will need a GeoTIFF file containing the elevation data. These are often available from national geographic surveys, or you can always fall back on the worldwide Space Shuttle Radar Topography Mission (SRTM) data. This not particularly high resolution (roughly 30 meters horizontally) but it can give acceptable results. Simply place the elevation data file in the directory with the other graph builder inputs, alongside the GTFS and OSM data. Make sure the file has a .tiff or .tif extension, and the graph builder should detect its presence and apply the elevation data to the streets. OTP should automatically handle DEM GeoTIFFs in most common projections. You may want to check for elevation-related error messages during the graph build process to make sure OTP has properly discovered the projection. If you are using a DEM in unprojected coordinates make sure that the axis order is (longitude, latitude) rather than (latitude, longitude). Unfortunately there is no reliable standard for WGS84 axis order, so OTP uses the same axis order as the above-mentioned SRTM data, which is also the default for the popular Proj4 library. Fares configuration By default OTP will compute fares according to the GTFS specification if fare data is provided in your GTFS input. For more complex scenarios or to handle bike rental fares, it is necessary to manually configure fares using the fares section in build-config.json. You can combine different fares (for example transit and bike-rental) by defining a combinationStrategy parameter, and a list of sub-fares to combine (all fields starting with fare are considered to be sub-fares). // build-config.json { // Select the custom fare "seattle" "fares": "seattle", // OR this alternative form that could allow additional configuration "fares": { "type": "seattle" } } // build-config.json { "fares": { // Combine two fares by simply adding them "combinationStrategy": "additive", // First fare to combine "fare0": "new-york", // Second fare to combine "fare1": { "type": "bike-rental-time-based", "currency": "USD", "prices": { // For trip shorter than 30', $4 fare "30": 4.00, // For trip shorter than 1h, $6 fare "1:00": 6.00 } } // We could also add fareFoo, fareBar... 
} } The current list of custom fare type is: bike-rental-time-based- accepting the following parameters: currency- the ISO 4217 currency code to use, such as "EUR"or "USD", prices- a list of {time, price}. The resulting cost is the smallest cost where the elapsed time of bike rental is lower than the defined time. san-francisco(no parameters) new-york(no parameters) seattle(no parameters) The current list of combinationStrategy is: additive- simply adds all sub-fares. OSM / OpenStreetMap configuration It is possible to adjust how OSM data is interpreted by OpenTripPlanner when building the road part of the routing graph. Way property sets OSM tags have different meanings in different countries, and how the roads in a particular country or region are tagged affects routing. As an example are roads tagged with `highway=trunk (mainly) walkable in Norway, but forbidden in some other countries. This might lead to OTP being unable to snap stops to these roads, or by giving you poor routing results for walking and biking. You can adjust which road types that are accessible by foot, car & bicycle as well as speed limits, suitability for biking and walking. There are currently 2 wayPropertySets defined; defaultwhich is based on California/US mapping standard norwaywhich is adjusted to rules and speeds in Norway To add your own custom property set have a look at org.opentripplanner.graph_builder.module.osm.NorwayWayPropertySet and org.opentripplanner.graph_builder.module.osm.DefaultWayPropertySet. If you choose to mainly rely on the default rules, make sure you add your own rules first before applying the default ones. The mechanism is that for any two identical tags, OTP will use the first one. // build-config.json { osmWayPropertySet: "norway" } Custom naming You can define a custom naming scheme for elements drawn from OSM by defining an osmNaming field in build-config.json, such as: // build-config.json { "osmNaming": "portland" } There is currently only one custom naming module called portland (which has no parameters). Runtime router configuration This section covers all options that can be set for each router using the router-config.json file. These options can be applied by the OTP server without rebuilding the graph. Routing defaults There are many trip planning options used in the OTP web API, and more exist internally that are not exposed via the API. You may want to change the default value for some of these parameters, i.e. the value which will be applied unless it is overridden in a web API request. A full list of them can be found in the RoutingRequest class in the Javadoc. Any public field or setter method in this class can be given a default value using the routingDefaults section of router-config.json as follows: { "routingDefaults": { "walkSpeed": 2.0, "stairsReluctance": 4.0, "carDropoffTime": 240 } } Drive-to-transit routing defaults When using the "park and ride" or "kiss and ride" modes (drive to transit), the initial driving time to reach a transit stop or park and ride facility is constrained. You can set a drive time limit in seconds by adding a line like maxPreTransitTime = 1200 to the routingDefaults section. If the limit is too high on a very large street graph, routing performance may suffer. Boarding and alighting times Sometimes there is a need to configure a longer boarding or alighting times for specific modes, such as airplanes or ferries, where the check-in process needs to be done in good time before boarding. 
The boarding time is added to the time when going from the stop (offboard) vertex to the onboard vertex, and the alight time is added vice versa. The times are configured as seconds needed for the boarding and alighting processes in router-config.json as follows: { "boardTimes": { "AIRPLANE": 2700 }, "alightTimes": { "AIRPLANE": 1200 } } Timeouts Path searches can sometimes take a long time to complete, especially certain problematic cases that have yet to be optimized. Often a first itinerary is found quickly, but it is time-consuming or impossible to find subsequent alternative itineraries and this delays the response. You can set timeouts to avoid tying up server resources on pointless searches and ensure that your users receive a timely response. When a search times out, a WARN level log entry is made with information that can help identify problematic searches and improve our routing methods. The simplest timeout option is: // router-config.json { "timeout": 5.5 } This specifies a single timeout in (optionally fractional) seconds. Searching is aborted after this many seconds and any paths already found are returned to the client. This is equivalent to specifying a timeouts array with a single element. The alternative is: // router-config.json { "timeouts": [5, 4, 3, 1] } Here, the configuration key is timeouts (plural) and we specify an array of times in floating-point seconds. The Nth element in the array applies to the Nth itinerary search, and importantly all values are relative to the beginning of the search for the first itinerary. If OTP is configured to find more itineraries than there are elements in the timeouts array, the final element in the timeouts array will apply to all remaining unmatched searches. This allows you to keep overall response time down while ensuring that the end user will get at least one response, providing more only when it won't hurt response time. The timeout values will typically be decreasing to reflect the decreasing marginal value of alternative itineraries: everyone wants at least one response, it's nice to have two for comparison, but we only care about having three, four, or more options if completing those extra searches doesn't cause annoyingly long response times. Logging incoming requests You can log some characteristics of trip planning requests in a file for later analysis. Some transit agencies and operators find this information useful for identifying existing or unmet transportation demand. Logging will be performed only if you specify a log file name in the router config: // router-config.json { "requestLogFile": "/var/otp/request.log" } Each line in the resulting log file will look like this: 2016-04-19T18:23:13.486 0:0:0:0:0:0:0:1 ARRIVE 2016-04-07T00:17 WALK,BUS,CABLE_CAR,TRANSIT,BUSISH 45.559737193889966 -122.64999389648438 45.525592487765635 -122.39044189453124 6095 3 5864 3 6215 3 The fields are separated by whitespace and are (in order): - Date and time the request was received - IP address of the user - Arrive or depart search - The arrival or departure time - A comma-separated list of all transport modes selected - Origin latitude and longitude - Destination latitude and longitude Finally, for each itinerary returned to the user, there is a travel duration in seconds and the number of transit vehicles used in that itinerary. Real-time data GTFS feeds contain schedule data that is is published by an agency or operator in advance. 
The feed does not account for unexpected service changes or traffic disruptions that occur from day to day. Thus, this kind of data is also referred to as 'static' data or 'theoretical' arrival and departure times. GTFS-Realtime The GTFS-RT spec complements GTFS with three additional kinds of feeds. In contrast to the base GTFS schedule feed, they provide real-time updates ('dynamic' data) and are are updated from minute to minute. Alerts are text messages attached to GTFS objects, informing riders of disruptions and changes. TripUpdates report on the status of scheduled trips as they happen, providing observed and predicted arrival and departure times for the remainder of the trip. VehiclePositions give the location of some or all vehicles currently in service, in terms of geographic coordinates or position relative to their scheduled stops. Bicycle rental systems Besides GTFS-RT transit data, OTP can also fetch real-time data about bicycle rental networks including the number of bikes and free parking spaces at each station. We support bike rental systems from JCDecaux, BCycle, VCub, Keolis, Bixi, the Dutch OVFiets system, ShareBike, GBFS and a generic KML format. It is straightforward to extend OTP to support any bike rental system that exposes a JSON API or provides KML place markers, though it requires writing a little code. The generic KML needs to be in format like <?xml version="1.0" encoding="utf-8" ?> <kml xmlns=""> <Document id="root_doc"> <Schema name="citybikes" id="citybikes"> <SimpleField name="ID" type="int"></SimpleField> </Schema> <Placemark> <name>A Bike Station</name> <ExtendedData><SchemaData schemaUrl="#citybikes"> <SimpleData name="ID">0</SimpleData> </SchemaData></ExtendedData> <Point><coordinates>24.950682884886643,60.155923430488102</coordinates></Point> </Placemark> </Document></kml> Configuration Real-time data can be provided using either a pull or push system. In a pull configuration, the GTFS-RT consumer polls the real-time provider over HTTP. That is to say, OTP fetches a file from a web server every few minutes. In the push configuration, the consumer opens a persistent connection to the GTFS-RT provider, which then sends incremental updates immediately as they become available. OTP can use both approaches. The OneBusAway GTFS-realtime exporter project provides this kind of streaming, incremental updates over a websocket rather than a single large file. Real-time data sources are configured in router-config.json. The updaters section is an array of JSON objects, each of which has a type field and other configuration fields specific to that type. Common to all updater entries that connect to a network resource is the url field. // router-config.json { // Routing defaults are any public field or setter in the Java class // org.opentripplanner.routing.core.RoutingRequest "routingDefaults": { "numItineraries": 6, "walkSpeed": 2.0, "stairsReluctance": 4.0, "carDropoffTime": 240 }, "updaters": [ // GTFS-RT service alerts (frequent polling) { "type": "real-time-alerts", "frequencySec": 30, "url": "", "feedId": "TriMet" }, // Polling bike rental updater. 
// sourceType can be: jcdecaux, b-cycle, bixi, keolis-rennes, ov-fiets, // city-bikes, citi-bike-nyc, next-bike, vcub, kml { "type": "bike-rental", "frequencySec": 300, "sourceType": "city-bikes", "url": "" }, //<!--- San Francisco Bay Area bike share --> { "type": "bike-rental", "frequencySec": 300, "sourceType": "sf-bay-area", "url": "" }, //<!--- Tampa Area bike share --> { "type": "bike-rental", "frequencySec": 300, "sourceType": "gbfs", "url": "" }, // Polling bike rental updater for DC bikeshare (a Bixi system) // Negative update frequency means to run once and then stop updating (essentially static data) { "type": "bike-rental", "sourceType": "bixi", "url": "", "frequencySec": -1 }, // Bike parking availability { "type": "bike-park" }, // Polling for GTFS-RT TripUpdates) { "type": "stop-time-updater", "frequencySec": 60, // this is either http or file... shouldn't it default to http or guess from the presence of a URL? "sourceType": "gtfs-http", "url": "", "feedId": "TriMet" }, // Streaming differential GTFS-RT TripUpdates over websockets { "type": "websocket-gtfs-rt-updater" }, // OpenTraffic data { "type": "opentraffic-updater", "frequencySec": -1, // relative to OTP's working directory, where is traffic data stored. // Should have subdirectories z/x/y.traffic.pbf (i.e. a tile tree of traffic tiles) "tileDirectory": "traffic" } ] } GBFS Configuration Steps to add a GBFS feed to a router: - Add one entry in the updaterfield of router-config.jsonin the format { "type": "bike-rental", "frequencySec": 60, "sourceType": "gbfs", "url": "" } - Follow these instructions to fill these fields: type: "bike-rental" frequencySec: frequency in seconds in which the GBFS service will be polled sourceType: "gbfs" url: the URL of the GBFS feed (do not include the gbfs.json at the end) * * For a list of known GBFS feeds see the list of known GBFS feeds
http://docs.opentripplanner.org/en/latest/Configuration/
2017-09-19T15:03:42
CC-MAIN-2017-39
1505818685850.32
[]
docs.opentripplanner.org
Time Splits in the River by HUANG I-chieh, LIAO Xuan-zhen, LEE Chia-hung, WANG Yu-ping Four artists… AWARDS & FESTIVALS 2016 Jihlava International Documentary Film Festival - Opus Bonum 2016 Taiwan Biennial: The Possibility of an Island TECHNICAL SPECIFICATIONS Year of Completion: 2016 Director: HUANG I-chieh, LIAO Xuan-zhen, LEE Chia-hung, WANG Yu-ping ABOUT DIRECTOR HUANG I-chieh, LIAO Xuan-zhen, LEE Chia-hung, WANG Yu-ping LIAO Xuan-zhen (born 1993), HUANG I-chieh (born 1992), LEE Chia-hung (born 1992), and WANG Yu-ping (born 1993) are Taiwanese artists who met at Taipei University of Art. All four graduated in 2015. Together, they have initiated a number of art projects based on a specific collective awareness.
http://docs.tfi.org.tw/content/112
2017-09-19T15:02:07
CC-MAIN-2017-39
1505818685850.32
[array(['http://docs.tfi.org.tw/images/director/1490766965812467167.jpg', None], dtype=object) ]
docs.tfi.org.tw
Ballet in Tandem by YANG Wei-hsin Ballet in Tandem is a feature-length documentary exploring the state of ballet in Taiwan. Through on-site filmic documentation of various schools and companies, archival materials, and interviews with students and professionals alike, the filmmaker intends to piece together an often-neglected chapter of dance narrative in Taiwan. As we follow the interwoven stories of generations of dancers who have dedicated themselves to the art and craft of ballet, their joys and pathos, successes and failures, dreams and disillusions will compel us to contemplate our collective understanding of the art form and ultimately question the policy and decision-making of Taiwan’s culture and education. TECHNICAL SPECIFICATIONS Producer: YANG Wei-hsin Total Budget (USD): 200,000 Funding in Place (USD): 100,000 Running Time: 100 mins Year of Completion: 2017 Director: YANG Wei-hsin ABOUT DIRECTOR & PRODUCER YANG Wei-hsin
http://docs.tfi.org.tw/content/114
2017-09-19T15:08:48
CC-MAIN-2017-39
1505818685850.32
[array(['http://docs.tfi.org.tw/images/director/1490772421223491674.jpg', None], dtype=object) ]
docs.tfi.org.tw
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs. What's an API Console? The legacy API Console is a free hosted service that is separate from Apigee Edge with no service-level agreement (SLA) guarantee. What is Apigee Console To-Go? Apigee Console To-Go has been retired. Your existing API Consoles will continue to work. You cannot create new or modify existing consoles. Browser support See Supported software and supported versions.
http://docs.apigee.com/developer-services/content/whats-api-console?rate=IJhI38W6a8ZMhyxw_XnvTncxd9NbJZqYnFf__tPApi4
2017-09-19T15:26:39
CC-MAIN-2017-39
1505818685850.32
[array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/ConsoleSelctAPI_v24.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/ConsoleRequestURL_v24.png', None], dtype=object) ]
docs.apigee.com
Telerik® JustCode™ enables you to check for updates and download new releases without even leaving Visual Studio. By default Telerik® JustCode™ checks for updates automatically on startup, but in case you have switched off this option you may also check for updates manually. Navigate on the main menu. Choose JustCode | Help | Check For Updates. In case there are no updates, Telerik® JustCode™ will notify you with a dialog window. Otherwise, you are notified about the latest release notes. Click Download to download the latest version. Save the executable somewhere on your disk. If you have already installed Dev license version of JustCode and your license has expired, you will see the Trial confirmation window. Telerik® JustCode™ starts reporting the download progress. Once the download has finished you are kindly asked to close all instances of Visual Studio and start the executable to install the new version of Telerik® JustCode™. You can be notified for available internal builds as well by selecting "Include internal builds when checking for updates" in JustCode Options window.
http://docs.telerik.com/help/justcode/reference-check-for-updates.html
2017-09-19T15:30:20
CC-MAIN-2017-39
1505818685850.32
[]
docs.telerik.com
Post-production Ça Fait Si Longtemps by Laha MEBOW Is it possible that total strangers of different ethnicity on the other side of the ocean can have a similar cultural background and fate with us? Two young indigenous Taiwanese pop musicians explore the beauty, creativity and cultural diversity of New Caledonia in the southwest Pacific Ocean by visiting some of its acclaimed indigenous Kanaky musicians to share their love and passion for music and life. Through the journey, they discover the key to be seen in the world is to find and redefine the roots of their own traditions. TECHNICAL SPECIFICATIONS Producer: Jewel Chen-lin LAI Total Budget (USD): 111,963 Funding in Place (USD): 111,963 Running Time: 75 mins Year of Completion: 2017 Director: Laha MEBOW ABOUT DIRECTOR Laha MEBOW Laha MEBOW is the first female indigenous film director in Taiwan. She has devoted herself to the industry for 18 years as a scriptwriter, director and TV producer. Her works focus mostly on the shared experience of indigenous peoples in Taiwan. Her latest feature Hang in There, Kids! is awarded the Grand Prize at Taipei Film Festival 2016. She received an Annual Top 10 Outstanding Young Women Award in 2015 for her contribution to indigenous issues. She recently visited the Cité Internationale des Arts in Paris as an artist-in-residence on the recommendation of Taiwan government. ABOUT PRODUCER Jewel Chen-lin LAI Jewel Chen-lin LAI received her Master in Film and Screen Study from Goldsmiths College, University of London. She has organised two editions of the acclaimed CNEX Documentary Film Festival and founded Dot Connect Studio in 2012. She currently works as a film producer and festival programmer.
http://docs.tfi.org.tw/content/115
2017-09-19T15:08:08
CC-MAIN-2017-39
1505818685850.32
[array(['http://docs.tfi.org.tw/images/director/1490773473750990580.jpg', None], dtype=object) array(['http://docs.tfi.org.tw/images/director/1490773496441713471.jpg', None], dtype=object) ]
docs.tfi.org.tw
Configuration - Oracle RAC iDataAgent Configuring the CommCell Console Users for Oracle RAC Administration The CommCell Console user that adds nodes to the Oracle RAC pseudo client must be in a group that has the Administrative Management capability for the member nodes. Create the group with the Administrative Management capability and then assign the user to the group. For information on how to configure and assign groups, see Getting Started with User Administration and Security. Creating a Pseudo-Client for Oracle RAC Once the Oracle Data Agent is installed on all the RAC database nodes, create a new Oracle RAC pseudo-client that is a logical grouping of Oracle instances in the Oracle RAC instance. - From the CommCell Bowser, right-click Client Computers, point to New Client >Application and then click Oracle RAC. The Create Oracle RAC Client dialog box is displayed. - On the General tab, provide the pseudo-client details. - In the Pseudo-client Name box, type a name for the RAC client. - In the Database Name box, type the name of the RAC database to which you are assigning this client. The Database Name must be of the same case as the name of the Oracle RAC database. Example: SQL> select name from v$database; NAME --------- db1 - From the Storage Policy list, select the storage policy used for default subclient data. If you do not have a storage policy, Create a Storage Policy. - On the Details tab, determine how you want to connect to Oracle. - On the Details tab, add the Oracle instances. - Click Add. The Add Instance dialog box is displayed. - From the Instance Physical Client list, select the client name of the Oracle instance. - In the Instance (ORACLE SID) box, type the name of the Oracle instance. - Click Change User Account and type the Oracle user name and password. - In the ORACLE HOME box, type or click Browse to locate the Oracle application install path. - In the Connect String box, type the Oracle database user with privileges to access the Oracle database in the Oracle RAC node . Use the SYS login for example, sys/password@Service for the connect string. For example, sys/pwd12@orcl4. - In the TNS_ADMIN Folder, type the path for the TNS_Admin directory. - Click OK to close the Add Instance dialog box. - Repeat the previous steps for each instance that you want to add. - On the Storage Device tab, configure the storage policy information. - In the Storage Policy used for user command backup of data box, select the data backup storage policy name. - Click the Logs Backup tab. - In the Storage Policy used for all Archive Log backups box, select the log backup storage policy name. - Click OK. Follow these steps if the database is in NoArchivelog mode Creating a Subclient for Offline Backup - From the CommCell Browser, expand Client Computers > client > Agent. - Data check box. - Select the Offline Database option. - On the Storage Device tab, select the name of the Data Storage Policy. If you do not have a storage policy, Create a Storage Policy. - Click OK to close the Create New Subclient dialog box.
http://docs.snapprotect.com/netapp/v10/article?p=products/oracle_rac/config_basic.htm
2017-09-19T15:24:18
CC-MAIN-2017-39
1505818685850.32
[]
docs.snapprotect.com
Face the Earth by HUANG Ming-chuan HUANG Ming-chuan is a film director based in Taipei, Taiwan. After graduating from the Law School of National Taiwan University, HUANG left for New York, establishing his career in still photography for years. After HUANG’s first independently made narrative feature The Man from Island West, he has been acknowledged as the “Forefather of Independent Film” in Taiwan. Since 2000, he has committed himself to full-time documentary production. His most renowned TV documentary series Avant-Garde Liberation depicts 14 Taiwanese contemporary artists and won the first Taishin Arts Award. AWARDS & FESTIVALS 2016 Ministry of Culture, Taiwan - Documentary Film Grant 2014 New Taipei City Doc - Grant (15 min version) TECHNICAL SPECIFICATIONS Producer: Annie BERMAN Total Budget (USD): 200,000 Funding in Place (USD): 100,000 Year of Completion: 2018 Director: HUANG Ming-chuan ABOUT DIRECTOR HUANG Ming-chuan ABOUT PRODUCER Annie BERMAN
http://docs.tfi.org.tw/content/116
2017-09-19T15:09:00
CC-MAIN-2017-39
1505818685850.32
[array(['http://docs.tfi.org.tw/images/director/14338768401821397980.jpg', None], dtype=object) array(['http://docs.tfi.org.tw/images/director/14907739791608506425.jpg', None], dtype=object) ]
docs.tfi.org.tw
The Signal Owl is a simple payload-based signals intelligence platform with a unique design for discreet planting or mobile operations on any engagement. This getting started guide video only applies to Signal Owl firmware version 1.0.0. From version 1.0.1 onwards, entering arming mode is accomplished by pressing the button any time during attack mode (roughly 1 minute after power-on). Continue reading. Software Basics Many popular tools, such as Kismet, Nmap, MDK4 and the Aircrack-ng suite as well as bash and Python 3 are included. Using these and the Owl framework, the pentester may execute bash payloads with Ducky Script loaded from an ordinary USB flash disk. Deployment The Signal Owl features a unique form-factor and is designed to function as a long-term wireless implant, or mobile operations in conjunction with a standard USB battery. Implant Operations During long term wireless engagements, the Signal Owl may be planted inline between any typical 5V USB power source. This may be useful in situations where ports are occupied, or to deter the unit from being unplugged. The Signal Owl is powered from the USB-A Male pigtail, while data and power are passed through to its USB 2.0 port closest to the pigtail. For example, with a keyboard or mouse plugged inline between this USB pass-through port and the pigtail connected to the target computer, the Signal Owl will not enumerate as a device on the operating system - only the keyboard or mouse will. Mobile Operations The Signal Owl features a low power consumption profile, with a typical power draw averaging 100-200 mAh. Additionally, it is thermally optimized for long term deployments in many indoor environments. One can expect a large 20,000 mAh USB battery bank to operate the unit for up to 4 days. Modes of Operation The Signal Owl has two basic modes of operation: Arming Mode and Attack Mode. By default the Signal Owl will boot into Attack Mode. To access Arming Mode, momentarily press the button on the bottom of the unit using a paperclip or similar instrument any time in the Attack mode (approximately 1 minute from power-on and after). Version 1.0.0 Note: In Signal Owl firmware version 1.0.0, entering Arming Mode was achieved by momentarily press the button during a 3 second mode selection phase at bootup which was indicated by the rapidly blinking LED. Arming Mode In Arming Mode, the Signal Owl will present the user with an open access point named Owl_xxxx (where xxxx is the last two octets of the devices MAC address) and will be accessible via SSH. The open access point may be configured with WPA security by editing the /etc/config/wireless file and adding the following options to the config 'wifi-iface' section: option 'encryption' 'psk2' option 'key' 'secret passphrase' Attack Mode In Attack Mode, the Signal Owl will execute the following operations: - If a payload is present on the root of a connected USB flash disk, it will be copied to the Signal Owl's internal storage at /root/payload - Similarly if an extension folder is present on the root of a USB flash disk, the contents of this folder will be copied to the Signal Owl's internal storage at /root/payload/extensions - The extensions will be sourced, and the payload on the Signal Owl's internal storage will execute - If no payload is found (on internal or external storage) the LED will indicate the FAIL status
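As a concrete illustration of the payload mechanism described above, here is a minimal bash payload sketch. Only the tools listed earlier and the /root/payload path come from this article; the file name, log location, and target subnet are assumptions for illustration, and no Owl framework helper commands are used because their names are not given here.

#!/bin/bash
# Hypothetical payload sketch: the file name, log file names, and the subnet below are assumptions.
LOG=/root/payload/recon.log          # internal storage path mentioned above

echo "payload started: $(date)" >> "$LOG"

# Host discovery with nmap, one of the tools included in the firmware
nmap -sn 172.16.42.0/24 -oN /root/payload/hosts.txt >> "$LOG" 2>&1

echo "payload finished: $(date)" >> "$LOG"

A payload like this would be copied from the root of a USB flash disk into /root/payload and executed in Attack Mode, as described above.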
https://docs.hak5.org/hc/en-us/articles/360034023853-Signal-Owl-Basics
2020-07-02T08:38:50
CC-MAIN-2020-29
1593655878639.9
[array(['/hc/article_attachments/360043963633/owl-diagram.png', 'owl-diagram.png'], dtype=object) ]
docs.hak5.org
HSVT templates Text Text that will appear in the user interface should be included in the HSVT file. For example, the pageTitle() and backLink() functions should be preferred to passing text in to the view from JavaScript. Always prepare the text in your templates for internationalisation, except if you know it’ll never be needed because it’s a bespoke feature for a single client. Whitespace HSVT ignores whitespace, so use it to increase the clarity of your HTML structure. In particular, an angle bracket should always have whitespace around it. <div> if(canEdit) { <p> <a href="..."> i("Edit") </a> </p> } </div> Layout HSVT files should be indented 2 spaces and follow the same conventions as JavaScript for control flow.
https://docs.haplo.org/dev/style/hsvt
2020-07-02T09:37:15
CC-MAIN-2020-29
1593655878639.9
[]
docs.haplo.org
The chart itself is a grid of cells upon which elements are placed. There are just a few types of elements available to these charts, but they can be combined in complex ways to create whole processes and production flows that are easy to read. Often these charts get quite large because of the complicated rules that are required for the type of processes the SFCs are used for. In this section you will see the basic elements that make up every chart. For more information about putting them together, see SFCs in Action. The chart has some configuration that can determine how and when the chart is started up, as well as opportunities to respond to chart lifecycle events with scripting, such as onStart, onStop, onCancel, and onAbort; see Chart Lifecycle for details.
https://docs.inductiveautomation.com/display/DOC79/SFC+Elements
2020-07-02T08:11:51
CC-MAIN-2020-29
1593655878639.9
[]
docs.inductiveautomation.com
Discounted SydStart tickets for BizSpark members BizSpark is sponsoring SydStart this year and that means discounted tickets for our members! If you're interested in taking advantage of the 25% off discount, send me an email at [email protected] with your registered BizSpark email and we'll reply with the discount code! Too easy!
https://docs.microsoft.com/en-us/archive/blogs/bizspark_au/discounted-sydstart-tickets-for-bizspark-members
2020-07-02T10:53:06
CC-MAIN-2020-29
1593655878639.9
[]
docs.microsoft.com
Linking Xamarin.iOS Apps When building your application, Visual Studio for Mac or Visual Studio calls a tool called mtouch that includes a linker for managed code. It is used to remove from the class libraries the features that the application is not using. The goal is to reduce the size of the application, which will ship with only the necessary bits. The linker uses static analysis to determine the different code paths that your application is likely to follow. It is somewhat slow, as it has to go through every detail of each assembly to make sure that nothing discoverable is removed. It is not enabled by default on simulator builds, to speed up the build time while debugging. However, because the linker produces smaller applications, which speeds up AOT compilation and uploading to the device, all device (Release) builds use the linker by default. As the linker is a static tool, it cannot mark for inclusion types and methods that are called through reflection or dynamically instantiated. Several options exist to work around this limitation. Linker Behavior The linking process can be customized via the linker behavior dropdown in Project Options. To access this, double-click on the iOS project and browse to iOS Build > Linker Options, as illustrated below: The three main options offered are described below: Don't Link Disabling linking will make sure that no assemblies are modified. For performance reasons this is the default setting when your IDE targets the iOS simulator. For device builds this should only be used as a workaround whenever the linker contains a bug that prevents your application from running. If your application only works with -nolink, please submit a bug report. This corresponds to the -nolink option when using the command-line tool mtouch. Link SDK assemblies only In this mode, the linker will leave your assemblies untouched, and will reduce the size of the SDK assemblies (i.e. what's shipped with Xamarin.iOS) by removing everything that your application doesn't use. This is the default setting when your IDE targets iOS devices. This is the simplest option, as it does not require any change in your code. The difference with linking everything is that the linker cannot perform a few optimizations in this mode, so it's a trade-off between the work needed to link everything and the final application size. This corresponds to the -linksdk option when using the command-line tool mtouch. Link all assemblies When linking everything, the linker can use the whole set of its optimizations to make the application as small as possible. It will modify user code, which may break whenever the code uses features in a way that the linker's static analysis cannot detect. In such cases, e.g. web services, reflection, or serialization, some adjustments might be required in your application to link everything. This corresponds to the -linkall option when using the command-line tool mtouch. Controlling the Linker When you use the linker, it will sometimes remove code that you might have called dynamically, even indirectly. To cover those cases the linker provides a few features and options to allow you greater control over its actions. Preserving Code When you use the linker it can sometimes remove code that you might have called dynamically, either using System.Reflection.MemberInfo.Invoke, or by exporting your methods to Objective-C using the [Export] attribute and then invoking the selector manually.
In those cases, you can instruct the linker to consider either entire classes to be used or individual members to be preserved by applying the [Xamarin.iOS.Foundation.Preserve] attribute either at the class-level or the member-level. Every member that is not statically linked by the application is subject to be removed. This attribute is hence used to mark members that are not statically referenced, but that are still needed by your application. For instance, if you instantiate types dynamically, you may want to preserve the default constructor of your types. If you use XML serialization, you may want to preserve the properties of your types. You can apply this attribute on every member of a type, or on the type itself. If you want to preserve the whole type, you can use the syntax [Preserve (AllMembers = true)] on the type. Sometimes you want to preserve certain members, but only if the containing type was preserved. In those cases, use [Preserve (Conditional=true)] If you do not want to take a dependency on the Xamarin libraries -for example, say that you are building a cross platform portable class library (PCL) - you can still use this attribute. To do this, you should just declare a PreserveAttribute class, and use it in your code, like this: public sealed class PreserveAttribute : System.Attribute { public bool AllMembers; public bool Conditional; } It does not really matter in which namespace this is defined, the linker looks this attribute by type name. Skipping Assemblies It is possible to specify assemblies that should be excluded from the linker process, while allowing other assemblies to be linked normally. This is helpful if using [Preserve] on some assemblies is impossible (e.g. 3rd party code) or as a temporary workaround for a bug. This correspond to the --linkskip option when using the command-line tool mtouch. When using Link All Assemblies option, if you want to tell the linker to skip entire assemblies, put the following in the Additional mtouch arguments options of your top-level assembly: --linkskip=NameOfAssemblyToSkipWithoutFileExtension If you want the linker to skip multiple assemblies, you include multiple linkskip arguments: --linkskip=NameOfFirstAssembly --linkskip=NameOfSecondAssembly There is no user interface to use this option but it can be provided in the Visual Studio for Mac Project Options dialog or the Visual Studio project Properties pane, within the Additional mtouch arguments text field. (E.g. --linkskip=mscorlib would not link mscorlib.dll but would link other assemblies in the solution). Disabling "Link Away" The linker will remove code that is very unlikely to be used on devices, e.g. unsupported or disallowed. In rare occasion it is possible that an application or library depends on this (working or not) code. Since Xamarin.iOS 5.0.1 the linker can be instructed to skip this optimization. This correspond to the -nolinkaway option when using the command-line tool mtouch. There is no user interface to use this option but it can be provided in the Visual Studio for Mac Project Options dialog or the Visual Studio project Properties pane, within Additional mtouch arguments text field. (E.g. --nolinkaway would not remove the extra code (about 100kb)). Marking your Assembly as Linker-ready Users can select to just link the SDK assemblies, and not do any linking to their code. This also means that any third party libraries that are not part of Xamarin's core SDK will not be linked. 
This typically happens because they do not want to manually add [Preserve] attributes to their code. The side effect is that third-party libraries will not be linked, and in general this is a good default, as it is not possible to know whether a third-party library is linker friendly or not. If you have a library in your project, or you are a developer of reusable libraries, and you want the linker to treat your assembly as linkable, all you have to do is add the assembly-level attribute LinkerSafe, like this: [assembly:LinkerSafe] Your library does not actually need to reference the Xamarin libraries for this. For example, if you are building a Portable Class Library that will run on other platforms you can still use a LinkerSafe attribute. The Xamarin linker looks up the LinkerSafe attribute by name, not by its actual type. This means that you can write this code and it will also work: [assembly:LinkerSafe] // ... assembly attribute should be at top, before source class LinkerSafeAttribute : System.Attribute {} Custom Linker Configuration Follow the instructions for creating a linker configuration file.
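To tie the preservation options described earlier together, here is a brief sketch of how the [Preserve] attribute might be applied. The type and member names are hypothetical; only the attribute forms named above (class-level AllMembers, member-level, and Conditional) are taken from this article, and the namespace in the using directive is the one quoted above (in your project the attribute may live elsewhere, or be self-declared as shown earlier).

using Xamarin.iOS.Foundation; // namespace as quoted above, or declare your own PreserveAttribute

// Keep the whole type, e.g. because it is only ever instantiated via reflection.
[Preserve(AllMembers = true)]
public class ReportTemplate
{
    public ReportTemplate() { }
}

public class Customer
{
    // Keep the default constructor used by dynamic instantiation.
    [Preserve]
    public Customer() { }

    // Keep this property only if the containing type is preserved,
    // e.g. because it is read through XML serialization.
    [Preserve(Conditional = true)]
    public string Name { get; set; }
}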
https://docs.microsoft.com/en-us/xamarin/ios/deploy-test/linker
2020-07-02T08:44:40
CC-MAIN-2020-29
1593655878639.9
[]
docs.microsoft.com
>>. - You collect asset and identity data from data sources using an add-on and a custom search or manually with a CSV file. See Collect and extract asset and identity data. - You format the data as a lookup, using a search or manually with a CSV file. See Format the asset or identity list as a lookup. - You configure the list as a lookup table, definition, and input. See Configure a new asset or identity list. - Splunk Enterprise Security identity manager modular input detects changed content in the identity_manager://<input_name>and changes to stanzas in the input. - You configure any settings in the identity lookup configuration setup. See Define identity formats on the identity configuration page. - Splunk Enterprise Security identity manager modular input updates settings in the transforms.confstanza identity_lookup_expanded - Splunk Enterprise Security identity manager modular input updates the macros used to identify the input sources based on the currently enabled stanzas in inputs.conf. For example, update the `generate_identities`macro dynamically based on the conventions specified on the Identity Lookup Configuration page. - Splunk Enterprise Security identity manager modular input dispatches lookup generating saved searches if it identifies changes that require the asset and identity lists to be merged. - Splunk Enterprise Security merges all configured and enabled asset and identity lists using lookup generating saved searches. - Splunk App for PCI Compliance). - You verify that the data looks as expected. See Verify that your asset or identity data was added to Splunk Enterprise Security. The merging of identity and asset lookups does not validate or de-duplicate input. Errors from the identity manager modular input are logged in identity_manager.log. This log does not show data errors. Merging assets and identities creates new lookup files After the merging process completes, four lookups store your asset and identity data. Asset fields after processing Asset fields of the asset lookup after the saved searches perform the merge process. Identity fields after processing Identity fields of the identity lookup after the saved searches perform the merge process. Test the asset and identity merge process Test the asset and identity merge process to confirm that the data produced by the merge process is expected and accurate. Run the saved searches that perform the merge process without outputting the data to the merged lookups to determine what the merge will do with your data without actually performing the merge. Test the merge process without performing a merge and outputting the data to a lookup. - From the Splunk ES menu bar, select Configure >. Force a merge Run the primary saved searches directly to force a merge immediately without waiting the five minutes for the scheduled search to run. - Open the Search page. - Run the primary saved searches. | savedsearch "Identity - Asset String Matches - Lookup Gen" | savedsearch "Identity - Asset CIDR Matches - Lookup Gen" | savedsearch "Identity - Identity Matches - Lookup Gen" Customize the asset and identity merge process You can modify the saved searches that perform the asset and identity merge process to perform additional field transformations or data sanitization. Add any operations that you want to change in the merge process to the search before the `output_*` macro. Caution: Certain modifications to the saved searches are unsupported and could break the merge process or asset and identity correlation. 
Do not perform the following actions. - Add or delete fields from the output. - Change the output location to a different lookup table or a KV store collection. - Replace the `output_*` macros with the outputlookup command. This documentation applies to the following versions of Splunk® Enterprise Security: 4.5.0, 4.5.1, 4.5.2, 4.5.3, 4.6.0
https://docs.splunk.com/Documentation/ES/4.6.0/User/AssetandIdentityMerging
2020-07-02T09:44:42
CC-MAIN-2020-29
1593655878639.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Setting Up and Reusing a Customized CXF Component To set up and reuse customized CXF behavior, you must create a global element and reference it within any CXF component in your Mule flow. The following configuration information applies to all types of SOAP API operations: service, client, and proxy. Studio On the Global Elements tab of the canvas, select CXF Configurationand click Create. The Choose Global Type dialog appears. Select Component configurations > CXF Configuration. Click OK. The CXF Configuration dialog appears. Configure the global CXF configuration attributes according to the following table: In Name, enter a unique name for the global element (optional). In Configuration Location, if you have created an .xmlfile that defines behavior of the CXF elements, enter the path and file name to the file in this format: filepath/filename.xml Check the Enable Mule SOAP Headers checkbox to ensure that Mule can add a header to a SOAP message when required by the message processing. In Initialize Static Bus Instance, check this option to ensure that the CXF SOAP API uses Mule transports instead of CXF transports. The default is checked. Click OK. Standalone XML. <mule xmlns: <cxf:configuration <http:listener-config <http:request-config <flow name="example_flow1" doc: <http:listener <cxf:jaxws-client doc: <http:request </flow>, such as adding a header to a message, use the CXF component to add extra interceptors to the interceptor chain. Studio To add a custom interceptor: Open the properties editor, and on the Interceptors tab, click the drop-down. The Interceptor Provider options appear: Select one of the options, for example Add in interceptor. The selected interceptor option appears in Settings. Double-click the interceptor option to open the interceptor provider panel. The in interceptor dialog appears. From the drop-down in Beans, select a bean. Alternatively, click new to define a new bean by importing a Java class that defines interceptor behavior. Click add to list to insert the selected bean into the interceptor chain. Click Finish, then OK to save your interceptor configuration. Standalone XML. <spring:beans> <spring:bean </spring:beans> <http:listener-config <http:request-config <flow name="example_flow1" doc: <http:listener <cxf:proxy-service doc: <cxf:inInterceptors> <spring:ref </cxf:inInterceptors> </cxf:proxy-service> <http:request </flow> Configuring Advanced Elements You can adjust several advanced CXF SOAP API configurations according to your requirements. Studio The Databinding Management configuration options are available for the following operations: Simple service JAX-WS Service The Schema Locations configuration is available only for the Proxy service operation. The following table describes the advanced configuration elements: Standalone XML.
https://docs.mulesoft.com/mule-runtime/3.9/extra-cxf-component-configurations
2020-07-02T09:48:58
CC-MAIN-2020-29
1593655878639.9
[array(['_images/extra-cxf-component-configurations-7d94a.png', 'extra cxf component configurations 7d94a'], dtype=object) array(['_images/extra-cxf-component-configurations-402ba.png', 'extra cxf component configurations 402ba'], dtype=object)]
docs.mulesoft.com
Crate galil_seiferas String search in constant space, linear time, for nonorderable alphabets. In Rust terms this means we can define the function: fn gs_find<T: Eq>(text: &[T], pattern: &[T]) -> Option<usize> { // ... } and the function computes in O(n) time and O(1) space. In the worst case, this algorithm makes 4n character comparisons. Note that the Crochemore-Perrin (“Two Way” algorithm) is much superior if there is a linear order for the alphabet. This work is Copyright 2017 by Ulrik Sverdrup "bluss"; see license terms in the package. References Both papers are recommended reading. The comments in this crate’s implementation are also meant to explain and point out important details, so that’s recommended reading too. - [GS] Z. Galil and J. Seiferas, Time-Space-Optimal String Matching, Journal of Computer and System Sciences (1983) - [CR] M. Crochemore and W. Rytter, Squares, Cubes, and Time-Space Efficient String Searching, Algorithmica (1995) Crate Features The crate is always no_std
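As a quick usage sketch, assuming the crate exports a gs_find function with the signature shown above:

use galil_seiferas::gs_find;

fn main() {
    let text = b"abracadabra";

    // "cad" starts at index 4 of "abracadabra".
    assert_eq!(gs_find(text, b"cad"), Some(4));

    // A pattern that does not occur yields None.
    assert_eq!(gs_find(text, b"xyz"), None);
}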
https://docs.rs/galil-seiferas/0.1.5/galil_seiferas/
2020-07-02T09:37:47
CC-MAIN-2020-29
1593655878639.9
[]
docs.rs
Metric descriptions:
Active Sessions: Number of users and applications currently connected to database
Component Down: Number of nodes that are not available
Component Passive: Number of nodes not processing queries but can be made ready to process queries when needed
CPU: Average node CPU use
Max Disk by Node: Largest percentage of used disk space on a node
Memory: Average node memory use
Node CPU Skew: Comparison of CPU use on the busiest node to the average node
Node I/O Skew: Comparison of I/O use on the busiest node to the average node
Queen Disk Space: Percentage of used disk space on the queen node
Replication Factor: Number of copies of the user data
Total Space: Percentage of used space to overall storage capacity
https://docs.teradata.com/reader/LzJ4E~8~tgVsMLW1~wBJGA/ESUfZV1DyEWeWenLbPWdsA
2020-07-02T09:11:16
CC-MAIN-2020-29
1593655878639.9
[]
docs.teradata.com
Controlling Ensemble Data Storage This chapter describes how you can control where Ensemble stores data. Ensemble namespaces store data in Caché databases. For general information on how to control Caché database storage, see the Caché System Administration Guide. This chapter provides some supplementary information that is useful for Ensemble installations. This chapter contains: Separate Databases for Routines and Globals Productions and Namespaces Where Ensemble Stores Credentials Passwords Where Ensemble Stores Temporary Data For information on where Ensemble stores DICOM data, see Configuring a DICOM Production to Control the Storage Location in the Ensemble DICOM Development Guide. Note: Some Ensemble classes, such as Ens.Production and Ens.Rule.Rule, can be updated dynamically but are stored in the routines database. Consequently, if you are mirroring the dynamic data in an Ensemble namespace, you should include the routines database in the mirror. You should always compile the production on the system on which it runs. Although you can compile Caché code on one system and copy the database pre-compiled to another system, you should not attempt this with Ensemble namespaces. Productions and Namespaces In most cases, productions are defined and run in the same namespace, but you can use Caché Ensemble Stores Credentials Passwords When you create a namespace, Ensemble creates a separate database to store the credentials passwords for the namespace. By having the passwords in a separate database, you can encrypt this confidential account information without having to incur the overhead of encrypting all of the namespace data. The credentials password database is created in a subdirectory of the namespace's default database for globals. The database name and subdirectory name are the name of the namespace's default database for globals with "SECONDARY" appended. For example, if the default globals database is LABS then the secondary database will be called LABSSECONDARY. The credentials passwords database is protected by a resource named after the database (e.g. %DB_LABSSECONDARY) without public access. Under most conditions no user needs to have privileges to this resource. Ensemble stores the data in this database under the global ^Ens.SecondaryData.Password. See the compatibility note New Global and Database Used to Store Credentials Passwords in the Ensemble Release Notes. Note: If you create the primary Ensemble database as a mirrored database, then the secondary database for credential passwords is automatically mirrored using the same settings as the primary database. If you add mirroring to an existing Ensemble database, then you must explicitly add mirroring to the secondary database. For information about mirroring, see the Caché High Availability Guide. Where Ensemble Stores Temporary Data Ensemble creates temporary data while a production is being run; this data is deleted when a production is stopped. In most cases, you do not need to be concerned with this temporary data, but in rare circumstances you may need to deal with it when recovering from error conditions. For new namespaces, Ensemble by default stores temporary data in a non-journaled database separate from CACHETEMP. The database name and subdirectory name are the name of the namespace's default database for globals with "ENSTEMP" appended.
For example, if the default globals database is LABS then the temporary database will be called LABSENSTEMP. This database is protected by the same resource as the one that protects the default globals database. For example, Ensemble stores the following globals in this database: ^CacheTemp.EnsRuntimeAppData contains the temporary data needed to run the production. By default, for new namespaces this global is stored in a database named by appending ENSTEMP to the name of the database used to store the namespace globals. For example, if you create a namespace TEST and specify that the globals are stored in TESTGL, this global is stored in TESTGLENSTEMP. ^CacheTemp.EnsJobStatus holds an entry that is written when the production is started and removed when the production is stopped. ^CacheTemp.EnsMetrics contains production metrics data, such as that displayed by the production monitor.
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGMG_storage
2017-12-11T03:40:45
CC-MAIN-2017-51
1512948512121.15
[]
docs.intersystems.com
Filters in Displayr From Displayr Most outputs in Displayr that have been computed using a Variable Set as an input can be filtered by selecting the object and choosing a filter from Home > Data Selection > Filter. in the same way as labels. Approaches to using filters when writing R code R code typically needs to do the following things when taking filters into account: - Filter the data. The data may either by passed to a function in some format, or extracted indirectly from the environment. - Clean the data in the filter. - Include the name or label of the filter in the output. The most straightforward way to deal with these is to use an existing analysis as a template (e.g., flipMultivariates::LDA). Base R has a standard function for filtering data called subset.
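As a rough illustration of those three steps, here is a small R sketch; the variable names are hypothetical, and the "label" attribute on the filter is an assumption rather than a documented Displayr contract.

# 1. Filter the data with the filter variable (subset is the standard base R function mentioned above).
filtered.data <- subset(my.data, my.filter > 0)

# 2. Clean the data in the filter, e.g. treat missing filter values as not selected.
my.filter[is.na(my.filter)] <- 0

# 3. Include the name or label of the filter in the output.
output.title <- paste0("Filtered by: ", attr(my.filter, "label"))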
http://docs.displayr.com/wiki/Filters_in_Displayr
2017-12-11T04:05:59
CC-MAIN-2017-51
1512948512121.15
[]
docs.displayr.com
Reach users rejoice! Instead of creating separate links for iOS and Android passes, you can start using a single Adaptive Link that automatically detects the device OS and installs the appropriate pass. With Adaptive Links, you can easily personalize passes by adding the personalization parameters to your request, and upload a list of relevant locations to associate with your passes. Instant Pass Personalization When clicked, Adaptive Links deliver instant pass personalization via any distribution channel by leveraging CRM data. No API call required! Simply append the personalized fields to the end of your link like so: In the above example, by adding the ?offercode=BACK2SCHOOL parameter to the end of the Adaptive Link, you are passing the BACK2SCHOOL offer code to anyone who clicks on the link to the pass, regardless of whether they access the link from an iOS or Android device, or a landing page URL. Location Detection Upload a list of locations into Reach and Reach will auto-select the ten nearest locations and associate them to iOS passes (iOS limits the pass to 10 locations) without any additional work. See Location Handling in our Reach API reference for details on uploading locations. See Also - Get usage examples, setup information, and other details in our Adaptive Links topic guide. - See our Reach API Reference for details.
https://docs.urbanairship.com/whats-new/2017-09-26-adaptive-links/
2017-12-11T04:12:10
CC-MAIN-2017-51
1512948512121.15
[]
docs.urbanairship.com
It will (optionally) replace existing content items if they already exist in the repository, but does not perform deletes (it is not designed to fully synchronize the repository with the local file system). The basic on-disk file/folder structure is preserved verbatim in the repository. It is possible to load metadata for the files and spaces being ingested, as well as a version history for files (each version may consist of content, metadata, or both). There are two types of bulk import: - Streaming import: This import streams the files into the repository content store by copying them in during the import. - In-place import: Available in Enterprise only, the files are assumed to already exist within the repository content store, so no copying is required. This can result in a significant improvement in performance. - There is no support for AVM. - Only one bulk import can be running at a time. This is enforced by the JobLockService. - Only Alfresco administrators can access the Bulk Import tool.
http://docs.alfresco.com/4.0/concepts/Bulk-Import-Tool.html
2017-12-11T03:54:07
CC-MAIN-2017-51
1512948512121.15
[]
docs.alfresco.com
Shows a chooser activity to share text. Accepted situations: SITUATION_PUSH_OPENED, SITUATION_WEB_VIEW_INVOCATION, SITUATION_MANUAL_INVOCATION, SITUATION_AUTOMATION, and SITUATION_FOREGROUND_NOTIFICATION_ACTION_BUTTON. Accepted argument values: a String used as the share text. Result value: null. Default Registration Names: ^s, share_action. Default registry name. Default registry short name. Used to filter out the list of packages in the chooser dialog: true to exclude the package from the chooser dialog, false to include the package.
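As a usage sketch, the action could be triggered programmatically by its registered name. The ActionRunRequest helper below is assumed to be the standard way to run a registered action in this SDK; treat the exact call chain as an assumption rather than documented here.

import com.urbanairship.actions.ActionRunRequest;

public class ShareExample {
    // Run the share action by its registered name with a String argument
    // (manual invocation is one of the accepted situations listed above).
    public static void shareText(String text) {
        ActionRunRequest.createRequest("share_action")
                .setValue(text)
                .run();
    }
}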
https://docs.urbanairship.com/reference/libraries/android/latest/reference/com/urbanairship/actions/ShareAction.html
2017-12-11T04:11:08
CC-MAIN-2017-51
1512948512121.15
[]
docs.urbanairship.com
Assets are components used to create a scene. All assets are selected/inserted by clicking the Assets button on the upper left corner of the STYLY editor. When the above button is pressed, it will show the five types of assets in STYLY. ・3D Model ・Image / Gallery ・Music ・Video
http://docs.styly.cc/types-of-assets/what-are-assets/
2017-12-11T03:49:40
CC-MAIN-2017-51
1512948512121.15
[array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) ]
docs.styly.cc
MUnit Database Server Overview One of the main problems of testing production code are external system connections. If we create a test of a piece of code that connects with a database server, we need to install a DB server in our local environment in order to run the tests. Another option is to have an external DB server for testing only, but the major problem with this approach is that our Maven project would not be portable — we could not send it to a third party because they would not be able to compile it without installing the DB server first. To enable you to avoid this issue, MUnit allows you to implement a database server in your local environment. Maven Dependency When running from Maven, you need to add the following artifact to your .pom: <dependency> <groupId>com.mulesoft.munit.utils</groupId> <artifactId>dbserver</artifactId> <version>${munit.version}</version> <scope>test</scope> </dependency> Defining The MUnit DB Server For the purpose of this documentation we are going to assume we are testing the following Mule code: Defining the Database Server We start by defining the DB server parameters: In the examples in this document, the file FILE_NAME.sql defines a table named jobtitlelookup, with a single record. DB Server Connection Parameters The MUnit DB server has the following default set of connection parameters: db.driver=org.h2.Driver db.url=jdbc:h2:mem:DATABASE_NAME;MODE=MySQL db.user= db.password= The values of the db.user and db.password parameters are intentionally blank. Defining the DB Structure There are two different ways to define the structure and content of your database: SQL CSV Defining the DB Structure from an SQL File To define you DB structure and content from a SQL file, provide a valid set of ANSI SQL DDL (Data Definition Language) instructions. Defining the DB Structure from a CSV File You can create your DB from CSV files. The name of the table is the name of the file (in the example below, customers). The name of the columns are the headers of your CSV file. You can also split your DB structure among several CSV files. In this case, include the file names as a list separated by a semicolon, as shown below. Starting the DB Server In order to run, the database server must be started in the before-suite. You start the server using the start-db-server message processor. Running the Test Once our DB server is up and running we can run our test. As you can see, we are not using any new message processor, since the database has already been initialized and loaded with the proper data. Hence we are just validating that the query run in our production code is correct, and that the payload returned is the expected one. Other MUnit DB server Message Processors The MUnit DB server also offers a few other features, outlined in this section. Executing SQL instructions The MUnit DB Server allows you to execute instructions on the in-memory databases, so you can add or remove registries before a test, and also check if your data was stored correctly. Executing SQL Queries The MUnit DB Server allows you to execute SQL queries. The resulting value is a list of maps. Validating SQL Query Results The MUnit DB Server allows you to validate that the results of a query are as expected. To do this, you use the validate-that tag. Set the results property to CSV with rows separated by a newline character ( \n), as shown below. The result should be a CSV text. 
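To make the flow of these message processors concrete, here is a rough XML sketch. The dbserver namespace prefix, the config element, and every attribute name other than the start-db-server processor, the validate-that tag, and its results property are assumptions, since the exact schema is not shown on this page; the column names and row values in the expected CSV are placeholders.

<!-- Sketch only: names not quoted in the text above are assumptions. -->
<dbserver:config name="testDb" database="DATABASE_NAME" sqlFile="FILE_NAME.sql"/>

<munit:before-suite name="startDbServer">
    <!-- start-db-server message processor, run in the before-suite as described above -->
    <dbserver:start-db-server config-ref="testDb"/>
</munit:before-suite>

<munit:test name="jobTitleQueryTest" description="Validate the seeded jobtitlelookup table">
    <!-- validate-that with its results property set to CSV, rows separated by \n -->
    <dbserver:validate-that config-ref="testDb"
                            query="SELECT * FROM jobtitlelookup"
                            results="ID,JOBTITLE&#10;1,Developer"/>
</munit:test>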
Execution Environments You may have noticed that our production code example makes extensive use of placeholders for certain parameters, such as driverName and url. This is because the values of these parameters usually differ between environments: one set for the testing environment and one for the production environment.
https://docs.mulesoft.com/munit/v/1.0/munit-database-server
2017-12-11T03:50:44
CC-MAIN-2017-51
1512948512121.15
[]
docs.mulesoft.com
Screen.weblinks.categories.edit.15 From Joomla! Documentation Web Link Categories. - Section. This displays 'N/A' for not applicable, since Sections are not used for Web Links. - Category Order. This field is only used for ordering in the weblinks management screen. - Access Level. This field is not used for Web Link Categories. - Image. Image for this Page. Image must be located in the folder "images/stories". - Image Position. Position of the Image on the page. Select Left or Right from the drop-down list box. - Description. Optional description for the Web Link Category. This description is just for your information and will not display on the page. - Image Button. This command is not normally used for Web Link Categories.
https://docs.joomla.org/Screen.weblinkcategories.edit.15
2015-04-18T09:15:54
CC-MAIN-2015-18
1429246634257.45
[]
docs.joomla.org
Revision history of "The mechanics of creating a Joomla! 1.5 web site" View logs for this page Diff selection: Mark the radio boxes of the revisions to compare and hit enter or the button at the bottom. Legend: (cur) = difference with latest revision, (prev) = difference with preceding revision, m = minor edit.
https://docs.joomla.org/index.php?title=J1.5:The_mechanics_of_creating_a_Joomla!_1.5_web_site&offset=&limit=50&action=history
2015-04-18T09:37:53
CC-MAIN-2015-18
1429246634257.45
[]
docs.joomla.org
Revision history of "Subpackages/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 08:29, 10 May 2013 Tom Hutchison (Talk | contribs) deleted page Subpackages/1.6 (1.6 was removed)
https://docs.joomla.org/index.php?title=Subpackages/1.6&action=history
2015-04-18T10:08:54
CC-MAIN-2015-18
1429246634257.45
[]
docs.joomla.org
Difference between revisions of "Converting A Previous Joomla! Version Template" From Joomla! Documentation Revision as of 05:42, 25 March 2014GH (talk| contribs) 12 months ago. (Purge). Contents mySQL Release Release: 5.3.1 is minimal needed Tables HTML Class names Templates Index.php'); Editor views The front end editor views use tabs to separate editing areas and options. If you don't wish to use these you will need to make an override that does not include this block: <ul class="nav nav-tabs"> <li class="active"><a href="#editor" data-<?php echo JText::_('JEDITOR') ?></a></li> <?php if ($params->get('show_urls_images_frontend')) : ?> <li><a href="#images" data-<?php echo JText::_('COM_CONTENT_IMAGES_AND_URLS') ?></a></li> <?php endif; ?> <li><a href="#publishing" data-<?php echo JText::_('COM_CONTENT_PUBLISHING') ?></a></li> <li><a href="#language" data-<?php echo JText::_('JFIELD_LANGUAGE_LABEL') ?></a></li> <li><a href="#metadata" data-<?php echo JText::_('COM_CONTENT_METADATA') ?></a></li> </ul>
https://docs.joomla.org/index.php?title=J3.3:Converting_A_Previous_Joomla!_Version_Template&diff=116638&oldid=116637
2015-04-18T09:05:32
CC-MAIN-2015-18
1429246634257.45
[]
docs.joomla.org
In some cases you may wish to create a report containing the combined output of two impact functions for the same area for the same hazard but different exposures. For example, you may carry out an assessment of the impact of a flood on population and on buildings and combine the results into a single report. The Impact Layer Merge tool allows you to do this. In order to use this tool, please bear in mind the following requirements: To use this tool, follow this procedure: The tool will generate a PDF per aggregation area. The PDFs will be placed in the designated output directory after completion of the merge process. The output will consist of a map page and a table page. These are illustrated below: In the case of impact assessments where no aggregation has been used, only a single PDF report is generated. In the case of impact assessments where aggregation has been used, one PDF is generated per aggregation area. Note After report generation completes, the output directory will be opened automatically. The default template report is located in /resources/qgis-composer-templates/merged-report.qpt. If that template does not satisfy your needs, you can use your own report template. Before using your own report template, make sure that your template contains all of these elements with id: If any of those elements does not exist on the report template, the tools will tell you what element is missing on the template. Note You can arrange those elements in any position you want. In terms of value replacement, there are three groups of elements on the template: 1. Elements that can be changed with the InaSAFE Options tool. To change the value of these elements, please go to InaSAFE Options and change the value of the related field. Those elements are: 2. Elements containing tokens. The id of these elements is not significant, only the token it contains. At render time, these tokens will be replaced. If you want to have a label containing the value of these elements, enclose these elements with [] on a label i.e [impact-title] or [hazard-title]. The elements are listed below: 3. Elements that are direcly updated by the renderer. All of these elements below are generated automatically by the tools.
http://docs.inasafe.org/en/user-docs/application-help/impact_layer_merge_tool.html
2019-11-12T03:48:46
CC-MAIN-2019-47
1573496664567.4
[]
docs.inasafe.org
ReadOnlySpan<T>.Enumerator Struct

Definition: Provides an enumerator for the elements of a ReadOnlySpan<T>.

public: value class ReadOnlySpan<T>::Enumerator
public struct ReadOnlySpan<T>.Enumerator
type ReadOnlySpan<'T>.Enumerator = struct
Public Structure ReadOnlySpan(Of T).Enumerator

Type Parameters: T — the type of items in the ReadOnlySpan<T>.

Remarks

The foreach statement of the C# language and the For Each...Next construct in Visual Basic hide the complexity of enumerators. Instead of directly manipulating the enumerator, using foreach or For Each...Next is recommended.

Initially, the enumerator is positioned before the first element in the ReadOnlySpan<T>. At this position, Current is undefined. You must call MoveNext to advance the enumerator to the first item in the ReadOnlySpan<T> before reading the value of Current.

Current returns the same value until MoveNext is called. MoveNext sets Current to the next item in the ReadOnlySpan<T>. If MoveNext passes the end of the ReadOnlySpan<T>, MoveNext returns false. When the enumerator is at this state, subsequent calls to MoveNext also return false and Current is undefined. You cannot set Current to the first item in the ReadOnlySpan<T> again; you must create a new enumerator instance instead.

Though the ReadOnlySpan<T> itself is allocated on the stack, the underlying data that it points to may not be. Therefore, enumerating through a ReadOnlySpan<T> is intrinsically not a thread-safe procedure. To guarantee thread safety during enumeration, you must implement your own synchronization.

Unlike some other enumerator structures in .NET, the ReadOnlySpan<T>.Enumerator:

- Does not implement the IEnumerator or IEnumerator<T> interface. This is because ReadOnlySpan<T>.Enumerator is a ref struct and cannot be boxed.
- Does not include a Reset method, which could set the enumerator to its initial position before the first element in the span. (The IEnumerator.Reset() method must be implemented as part of the interface, but most implementors either throw an exception or provide no implementation.)
https://docs.microsoft.com/en-au/dotnet/api/system.readonlyspan-1.enumerator?view=netstandard-2.1
2019-11-12T03:20:39
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
This page provides information on the V-Ray IES Light.

The V-Ray IES Light is a V-Ray specific light source plugin that can be used to create physically accurate area lights.

Enabled – Turns the VRayLight on and off.

Color – Specifies the color of the light.

Intensity (lm) – When enabled, specifies the strength of the light to override the intensity specified in the .ies file.

Affect Diffuse – When enabled, the light affects the diffuse properties of the materials.

Affect Specular – When enabled, the light affects the specular of the materials.

Shadows – When enabled (the default), the light casts shadows. When disabled, the light does not cast shadows.

Caustic Subdivs – Used by V-Ray when calculating Caustics. Lower values produce noisy results but will render faster. Higher values produce smoother results but take more time.
https://docs.chaosgroup.com/plugins/viewsource/viewpagesrc.action?pageId=26525434
2019-11-12T03:07:35
CC-MAIN-2019-47
1573496664567.4
[]
docs.chaosgroup.com
OneRoster® 1.0 format CSV files for SDS

You can use CSV (comma separated value) files in the OneRoster® format to synchronize your School Information System (SIS) with Office 365. When using OneRoster 1.0 format CSV files for School Data Sync, the files must be appropriately formatted. This section describes the formatting requirements for use with SDS.

You must have the first 6 of the 7 OneRoster 1.0 CSV files, named exactly as detailed below. The demographics.csv file is strictly optional and not required for SDS.

- orgs.csv
- users.csv
- courses.csv
- classes.csv
- enrollments.csv
- academicSessions.csv
- demographics.csv

Each CSV file must contain all required fields highlighted in blue. Each CSV may also contain any of the optional fields listed. The tables below list the required and optional attributes on a file-by-file basis.

Note: When creating new users, a column for password will need to be added after the OneRoster files have been converted to the SDS CSV file format. The demographics.csv file is completely optional for upload into SDS; it's not required.

For more information on the OneRoster 1.0 CSV format, please go to IMS OneRoster™: CSV Tables.
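As a quick sanity check before conversion and upload, you could verify that the six required files are present and contain a header row. The sketch below is only illustrative: it assumes the CSVs sit together in one folder, and it deliberately does not validate the per-file required columns, since those are defined by the attribute tables referenced above.

import csv
import os
import sys

# The six OneRoster 1.0 files that SDS requires; demographics.csv is optional.
REQUIRED_FILES = [
    "orgs.csv",
    "users.csv",
    "courses.csv",
    "classes.csv",
    "enrollments.csv",
    "academicSessions.csv",
]

def check_oneroster_folder(folder):
    """Return a list of problems found in a folder of OneRoster CSV files."""
    problems = []
    for name in REQUIRED_FILES:
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            problems.append("missing required file: " + name)
            continue
        with open(path, newline="", encoding="utf-8-sig") as handle:
            header = next(csv.reader(handle), None)
            if not header:
                problems.append(name + " has no header row")
    return problems

if __name__ == "__main__":
    folder = sys.argv[1] if len(sys.argv) > 1 else "."
    for problem in check_oneroster_folder(folder):
        print(problem)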
https://docs.microsoft.com/en-us/schooldatasync/oneroster-format-csv-files-for-sds
2019-11-12T03:18:30
CC-MAIN-2019-47
1573496664567.4
[array(['images/oneroster-format-csv-files-for-sds-1.png', 'OneRoster-format-CSV-files-for-SDS-1.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-2.png', 'OneRoster-format-CSV-files-for-SDS-2.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-3.png', 'OneRoster-format-CSV-files-for-SDS-3.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-4.png', 'OneRoster-format-CSV-files-for-SDS-4.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-5.png', 'OneRoster-format-CSV-files-for-SDS-5.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-6.png', 'OneRoster-format-CSV-files-for-SDS-6.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-7.png', 'OneRoster-format-CSV-files-for-SDS-7.png'], dtype=object) array(['images/oneroster-format-csv-files-for-sds-8.png', 'OneRoster-format-CSV-files-for-SDS-8.png'], dtype=object)]
docs.microsoft.com
Searching for invoke documents

While intended for API monetization, data in the invoke monitor search index and database can still be used for applications that need statistical accounts of service invokes. This data can be used for the auditing and analysis of service invocations. On this page, we will discuss the different methods you can use in order to preview or obtain service invocation data for your integrations.

Using the user interface

Of all interfaces compatible with Martini, only the Martini Runtime Admin UI demonstrates querying capabilities against the invoke monitor search index. In particular, searches can be done through interface elements located at the static search bar, seen at the top of the Martini Runtime Admin UI.

Viewing all documents

Go to the static search bar located at the top of the Martini Runtime Admin web interface. Select Monitor from the dropdown, and click the search button to go to the Monitor page. The Monitor page will display all documents residing in the instance's Monitor search index. To update displayed results, refresh the page.

Viewing document details

To view the details of a specific Monitor document, click the document's ID from the table displayed in the Monitor page. The details for the chosen document will appear on the right side of the page.

Simple filter

Use the search bar to look for invoke documents containing field values that match the provided input string. Type the text to match in the search bar and then press Enter.

Facet filter

You can also filter invoke documents using facets. Click the facet field you want to filter by, and then select a value from the appearing dropdown to filter documents using the selected facet. You'll notice a number next to each facet. This number is the number of documents in the invoke monitor search index that match the facet.

Advanced filter

If you need finer-grained searches, you can use the Advanced Filter form. This type of search allows you to specify a value to match per field. Click the inverted triangle beside the search button to make the form appear.

Using RESTful web services

As with the other search indices, the invoke monitor search index can be queried and managed via the Solr Search API. Simply ensure the appropriate path parameters are substituted when sending requests.

Using one-liners

To query the Invoke monitor search index in Gloop, create an invoke step (preferred) or Gloovy step calling any of the search methods belonging to MonitorMethods.

MonitorMethods has other utility methods for your Invoke monitor-related needs. MonitorMethods contains convenience methods for:

- Managing and retrieving monitor rules
- Fetching the billing details of your users
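As a rough illustration of the RESTful approach described above, the sketch below sends a standard Solr-style select query using Python's requests library. The base URL, index path, and credentials here are placeholders only - substitute the actual gateway address and the path parameters documented for your Martini instance, which may differ from this guess.

import requests

# Placeholder values - replace with your actual Martini gateway address,
# the invoke monitor index path from your instance's Solr Search API
# documentation, and valid credentials.
BASE_URL = "http://localhost:8080"            # assumed gateway address
SELECT_PATH = "/solr/invoke_monitor/select"   # assumed index path; check your docs
AUTH = ("username", "password")               # assumed basic-auth credentials

def search_invoke_documents(query="*:*", rows=10):
    """Send a Solr-style select query and return the matching documents."""
    params = {"q": query, "rows": rows, "wt": "json"}
    response = requests.get(BASE_URL + SELECT_PATH, params=params, auth=AUTH)
    response.raise_for_status()
    return response.json().get("response", {}).get("docs", [])

if __name__ == "__main__":
    for doc in search_invoke_documents():
        print(doc.get("id"))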
https://docs.torocloud.com/martini/latest/monitor/searching/
2019-11-12T04:25:20
CC-MAIN-2019-47
1573496664567.4
[array(['../../placeholders/img/server-admin/compressed/monitor-ui.png', 'Accessing the Monitor page'], dtype=object) array(['../../placeholders/img/server-admin/compressed/monitor-document.png', 'Viewing invoke document details'], dtype=object) array(['../../placeholders/img/server-admin/compressed/monitor-filter.png', 'Monitor search bar'], dtype=object) array(['../../placeholders/img/server-admin/compressed/monitor-facet-filter.png', 'Monitor facet filter'], dtype=object) array(['../../placeholders/img/server-admin/compressed/monitor-advanced-filter.png', 'Monitor advanced filter'], dtype=object) ]
docs.torocloud.com
Determining what to track

What you track depends on your business processes and rules that deal with data and events. Gather requirements for an application from users, managers, and other administrators who have a stake in the business process and how the application supports it.

When analyzing a business process and business rules, identify transition points in the process, where data moves from one state to another. Consider how groups of people in your organization handle the data during state transitions. Because the BMC Remedy AR System application that you develop can control transitions and enforce business rules, you need a clear and correct understanding of them.

When analyzing your data tracking needs, gather the following information:

- What is the life cycle of the data: data capture, data storage, data retrieval, data update, data archival, and data retirement?
- What types of information can be tracked together?
- Where does the data come from? Other systems? User data entry?
- Where could redundant data entry occur?
- Where can data be just referenced or displayed instead of entered or modified? Where can data be reused?
- What kinds of reports and information do users need from your application?
- Following normal business practices, when will the application's data become irrelevant?

You can address these questions when designing your application and deciding how many forms define the processes that you identified. The number of forms that you create depends on the smallest unit of data that you want to track and how you want that type of data to relate to other types of data. For example, to keep all data about assets in a single form, your asset form needs fields to accommodate information about manufacturers. Alternatively, to avoid duplicating information about manufacturers for each asset, your application could have a form for assets, and link it to a separate form for manufacturers through workflow and logical joins.
https://docs.bmc.com/docs/ars1808/determining-what-to-track-820498108.html
2019-11-12T04:39:00
CC-MAIN-2019-47
1573496664567.4
[]
docs.bmc.com
This page provides information on the Grid rollout for a LiquidSim object.

Overview

The Phoenix FD Simulator works best when the scale of the container matches the real-world size of the simulated effect. For example, if you are simulating a campfire, your container should be at most a couple of meters wide. Note that it doesn't matter if this is two meters or two thousand millimeters - the way you view the units is irrelevant. Phoenix always converts the units to a common world-size length, so the only important thing is the size of the container. If you are simulating a volcano, for example, the container should be several hundred meters wide, or several hundred thousand millimeters.

If your scene is structured in a way that makes it hard for you to scale the objects to their real-world size, you can use the Scene Scale parameter to tell Phoenix FD to treat the container as larger / smaller than it actually is when measured in Scene Units. This will influence the dynamics of the simulation, allowing you to achieve the correct behavior for your simulation without the need to tweak the size of the objects in your scene.

Using the parameters on this roll-out you can:

- Specify the Size and Resolution of the Simulator
- Enable / Disable Adaptive Grid, which is a performance optimization allowing you to keep the size of the Simulator as small as possible, thus reducing RAM usage
- Specify which Walls of the Simulator will be considered Open (infinite) or Jammed (i.e. as solid obstacles, closed)
- Link multiple Simulators in a Cascade setup - for more information, please check the following article: Transferring fluid between Simulators using a Cascade Connection.
- Specify a Confine Geometry to limit the fluid calculations only to the volume of the specified object

UI Path: ||Select Liquid Simulator | LiquidSim object|| > Modify panel > Grid rollout

Parameters

General

Scene Scale | scenescale – Specifies a multiplier for the original scene units of the scene. Phoenix FD works best when the container size is close to the real-world size of the desired effect. You can use this parameter to make the simulator see the container as bigger or smaller than it actually is in the scene, in case you cannot change the general scene units of 3ds Max. Check the labels to the right of X, Y, Z for the container sizes affected by this parameter. A bigger scale would make the fluid move more slowly because it needs to travel a greater distance, while a smaller scale makes the fluid move faster and more chaotically. For more information on how changing the Scene Scale affects the simulation, see the Scene Scale example below.

Cell size | cellsz – The size of a single voxel, in scene units. For more information, see the Grid Resolution example below.

X, Y, Z | xc, yc, zc – The grid size in cells. The dimensions shown next to X, Y, Z are the grid sizes in the scene, multiplied by the Scene Scale parameter - these sizes show how the solver will see the grid box, and you can use the Scene Scale to cheat the solver into simulating as if the grid box was larger or smaller. In case you want to see how big the container for the loaded cache is in the scene without accounting for the Scene Scale, see the Container Dimensions in the Simulation rollout.

Increase/Decrease resolution – Changes the resolution of the grid while maintaining its size. For more information, see the Grid Resolution example below.

Example: Scene Scale

The following video provides examples to show the differences between Scene Scale values of 0.1, 5.0, and 15.0.

Example: Grid Resolution

The following video provides examples to show the differences when the Total cells from the Grid's Resolution is set to 570,000, 4,000,000, and 16,000,000.

Container Walls

X, Y, Z | x_bound, y_bound, z_bound – Select between different container wall conditions for the simulation grid.

Open – The fluid is allowed to leave the bounding box of the Simulator through this wall. If Fill Up for Ocean is enabled, the Wall is treated as if there is infinite liquid below the Initial Fill Up level.

Jammed(-) – The simulation behaves as if there is a solid boundary in the negative direction. When Adaptive is enabled, the grid will not expand in this direction.

Jammed(+) – The simulation behaves as if there is a solid boundary in the positive direction. When Adaptive is enabled, the grid will not expand in this direction.

Jammed Both – The simulation behaves as if there is a solid boundary in both directions. When Adaptive is enabled, the grid will not expand in this direction.

Wrap – The left and right boundaries are connected (toroidal topology), e.g. fluid leaving the Simulator from the +X wall will enter it again from the -X wall.

Confine Geom | usegridgizmo, gridgizmo – You can specify a closed geometry object with normals pointing outwards, and the simulation will run only inside this object. The rest of the cells will be frozen as if a solid body was covering them. This way you can fill irregular shapes with liquid, or generally speed up your simulation by chopping off empty cells when you have an irregular fluid shape, e.g. a rocket launch. While using a Confine Geometry can speed up a simulation, it will not reduce RAM usage.

Cascade Source | usecascade, cascade – Specifies the source LiquidSim to connect this simulator to, forming a cascading simulation. This allows you to join several simulators into a structure with a complex shape. This can help you reduce memory usage by using many smaller simulators in place of a single large simulator. For more information, see the Connecting Two Simulators in a Cascade Setup section on the Tips and Tricks page.

- The simulators must be run sequentially, and each one should be started only after the previous one has finished simulating. The Cascade Source parameter points to the previous simulator in the sequence.
- For the simulation to function correctly, you need to have the Velocity Grid Channel and all Particle Groups that are simulated in the Source Simulator exported to its cache files - otherwise the connection will not work properly.
- If you intend to use any additional channels such as RGB, particle IDs or Ages, etc., they also need to be exported from the Source simulator's Output rollout before running the current simulator.

Adaptive Grid

Either keep Adaptive Grid disabled or set the Container Walls: Z to Jammed Both when simulating Oceans. The Ocean Level parameter in the Rendering rollout depends on the vertical size of your simulator.

Adaptive Grid | adaptive – The grid will resize automatically during the simulation in order to prevent the liquid from leaving the bounds of the Simulator box. Note that only the Open Container Walls will expand and contract when Adaptive Grid is enabled.

Extra Margin | adapt_margin – Specifies the number of cells between the end of the grid and the active zone. You can use this to give the fluid a bit more room if the adaptive grid can't keep up with the simulation.

No Smaller Than Initial Grid | nbigrid – When enabled, the Adaptive Grid can't contract to a smaller size than what is given as the initial X, Y, Z size for the Simulator. Note that this way the initial grid box is always included, even if the fluid has moved farther from it. If this option is disabled, the grid will always encompass only the active fluid and will move together with it if needed.

Expand and Don't Shrink | onlyexpand – When enabled, the Adaptive Grid will expand without shrinking.

Maximum expansion | maxexp, expx/y/z/neg/pos – Specifies maximum growth sizes for each side of the grid, in cells. Using this, you can stop the expansion in certain directions.

Shrink to view | usegridfitcamera, grid_fit_camera – Specifies a camera whose frustum will be used to determine the maximum expansion. The Adaptive Grid will not resize beyond the frustum. When a Shrink to view Camera is provided, the Adaptive Grid will expand no further than the already specified Maximum Expansion Limits.
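As a back-of-the-envelope illustration of how Cell size, the X, Y, Z cell counts, and the total cell count relate to each other (plain arithmetic, not Phoenix FD code), halving the cell size multiplies the total cell count by roughly eight, which is why raising the resolution is much more expensive than it may first appear:

def grid_stats(size_x, size_y, size_z, cell_size):
    """Given container dimensions and a cell size in the same scene units,
    return the per-axis cell counts and the total cell count."""
    cells = [max(1, round(s / cell_size)) for s in (size_x, size_y, size_z)]
    total = cells[0] * cells[1] * cells[2]
    return cells, total

if __name__ == "__main__":
    # Example: a 200 x 200 x 300 unit container at two different cell sizes.
    for cell_size in (2.0, 1.0):
        cells, total = grid_stats(200, 200, 300, cell_size)
        print("cell size %.1f: %d x %d x %d = %s cells"
              % (cell_size, cells[0], cells[1], cells[2], format(total, ",")))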
https://docs.chaosgroup.com/display/PHX3MAX/Liquid+Grid
2019-11-12T04:38:55
CC-MAIN-2019-47
1573496664567.4
[]
docs.chaosgroup.com
RDM has an index algorithm, called an AVL tree, intended specifically for use by the in-memory storage engine. An AVL is a self-balancing binary tree that RDM implements internally to a row rather than externally. There is no data duplication in the AVL because, unlike in a B-tree, external nodes containing copies of indexed columns are not maintained. An AVL is a binary tree, meaning the depth of the tree will be much larger than that of a B-tree. For this reason, the AVL index is better suited for the in-memory storage engine using the expanded row format than for the on-disk-based engine using the packed row format. The packed row format does contain an implementation for the AVL index, but this is included primarily for persisting an in-memory image to disk and is not intended for general use in disk-based tables. An AVL can be used for any operations that a B-tree index would be used for. The AVL supports lookups, ranges, scanning, and both duplicate and unique constraints. The SQL optimizer will utilize an AVL in the same manner as a B-tree but will use a slightly different weight based on the implementation differences between a B-tree and an AVL.
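To make the depth comparison concrete, here is a small illustrative calculation (generic, not RDM code): a balanced binary tree such as an AVL over n keys is roughly log2(n) levels deep, while a B-tree with a fan-out of, say, 100 keys per node stays around log100(n) levels. The extra node visits of the deeper binary tree are cheap pointer dereferences in memory, but each one would be a page read for an on-disk index, which is why the AVL fits the in-memory engine and the B-tree fits the on-disk engine.

import math

def binary_tree_depth(n):
    """Approximate depth of a balanced binary tree (e.g. an AVL) over n keys."""
    return math.ceil(math.log2(n + 1))

def btree_depth(n, fanout=100):
    """Approximate depth of a B-tree over n keys; the fan-out of 100 keys
    per node is an arbitrary illustrative value."""
    return math.ceil(math.log(n + 1, fanout))

if __name__ == "__main__":
    for n in (10_000, 1_000_000, 100_000_000):
        print("%11d keys: balanced binary depth ~%2d, B-tree depth ~%d"
              % (n, binary_tree_depth(n), btree_depth(n)))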
https://docs.raima.com/rdm/14_1/avl-tree.html
2019-11-12T04:15:17
CC-MAIN-2019-47
1573496664567.4
[]
docs.raima.com
Though Unicode handling in large projects can often be complex, Salt adheres to several basic rules to help developers handle Unicode correctly. (For a basic introduction to this problem, see Ned Batchelder's excellent introduction to the topic.)

Salt's basic workflow for Unicode handling is as follows:

- Salt should convert whatever data is passed on CLI/API to Unicode. Internally, everything that Salt does should be Unicode unless it is printing to the screen or writing to storage.
- Modules and various Salt pluggable systems use incoming data assuming Unicode.
  - Modules that query an API should convert the data received from the API into Unicode.
  - Modules that shell out for output should convert the data received into Unicode. (This does not apply if using the cmd execution module, which should handle this for you.)
  - Modules which print directly to the console (not via an outputter) or which write directly to disk should encode the string when appropriate. To handle this conversion, the global variable __salt_system_encoding__ is available, which declares the locale of the system that Salt is running on.
- When a function in a Salt module returns a string, it should return a unicode type in Python 2.
- When Salt delivers the data to an outputter or a returner, it is the job of the outputter or returner to encode the Unicode before displaying it on the console or writing it to storage.
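As a rough sketch of the rule about encoding before writing to disk (illustrative only, not code from the Salt codebase), a module that writes a file directly might use __salt_system_encoding__ like this; the fallback to locale.getpreferredencoding() is only there so the snippet runs outside of Salt, where the loader has not injected the dunder:

import locale

try:
    SYSTEM_ENCODING = __salt_system_encoding__  # injected inside Salt modules
except NameError:
    SYSTEM_ENCODING = locale.getpreferredencoding()  # stand-in for standalone runs

def write_report(path, text):
    """Encode Unicode text with the system encoding before writing it to disk.
    Strings returned to Salt itself should stay as Unicode; only the bytes
    that hit the disk are encoded here."""
    with open(path, "wb") as handle:
        handle.write(text.encode(SYSTEM_ENCODING))

if __name__ == "__main__":
    write_report("report.txt", u"r\u00e9sum\u00e9 of job results")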
https://docs.saltstack.com/en/latest/ref/internals/unicode.html
2019-11-12T03:29:07
CC-MAIN-2019-47
1573496664567.4
[]
docs.saltstack.com
Content manager resources and principles

- Data schemas: description of the XML content structure. For more on this, refer to Data schemas.
- Data entry forms: construction of data entry screens. For more on this, refer to Input forms.
- Images: images used in data entry forms. For more on this, refer to Image management.
- Stylesheets: formatting of output documents using XSLT language. For more on this, refer to Formatting.
- JavaScript templates: formatting of output documents using JavaScript language. For more on this, refer to Publication templates.
- JavaScript codes: JavaScript codes for data aggregation. For more on this, refer to Aggregator.
- Publication templates: definition of publication templates. For more on this, refer to Publication templates.
- Content: content instances to be created and published. For more on this, refer to Using a content template.
https://docs.adobe.com/content/help/en/campaign-classic/using/sending-messages/content-management/content-manager-resources-and-principles.html
2019-11-12T02:57:38
CC-MAIN-2019-47
1573496664567.4
[array(['/content/dam/help/campaign-classic.en/help/delivery/using/assets/d_ncs_content_process.png', None], dtype=object) ]
docs.adobe.com
Configure User Access Control and Permissions

Applies to: Windows Admin Center, Windows Admin Center Preview

If you haven't already, familiarize yourself with the user access control options in Windows Admin Center.

Note: Group-based access in Windows Admin Center is not supported in workgroup environments or across non-trusted domains.

Gateway access role definitions

There are two roles for access to the Windows Admin Center gateway service:

Gateway users can connect to the Windows Admin Center gateway service to manage servers through that gateway, but they can't change access permissions nor the authentication mechanism used to authenticate to the gateway.

Gateway administrators can configure who gets access as well as how users authenticate to the gateway. Only gateway administrators can view and configure the Access settings in Windows Admin Center. Local administrators on the gateway machine are always administrators of the Windows Admin Center gateway service.

Note: Access to the gateway doesn't imply access to managed servers visible by the gateway. To manage a target server, the connecting user must use credentials (either through their passed-through Windows credential or through credentials provided in the Windows Admin Center session using the Manage as action) that have administrative access to that target server.

Active Directory or local machine groups

By default, Active Directory or local machine groups are used to control gateway access. If you have an Active Directory domain, you can manage gateway user and administrator access from within the Windows Admin Center interface.

On the Users tab you can control who can access Windows Admin Center as a gateway user. By default, and if you don't specify a security group, any user that accesses the gateway URL has access. Once you add one or more security groups to the users list, access is restricted to the members of those groups. If you don't use an Active Directory domain in your environment, access is controlled by the Users and Administrators local groups on the Windows Admin Center gateway machine.

Smartcard authentication

You can enforce smartcard authentication by specifying an additional required group for smartcard-based security groups. Once you have added a smartcard-based security group, a user can only access the Windows Admin Center service if they are a member of any security group AND a smartcard group included in the users list.

On the Administrators tab you can control who can access Windows Admin Center as a gateway administrator. The local administrators group on the computer will always have full administrator access and cannot be removed from the list. By adding security groups, you give members of those groups privileges to change Windows Admin Center gateway settings. The administrators list supports smartcard authentication in the same way as the users list: with the AND condition for a security group and a smartcard group.

Azure Active Directory

If your organization uses Azure Active Directory (Azure AD), you can choose to add an additional layer of security to Windows Admin Center by requiring Azure AD authentication to access the gateway. In order to access Windows Admin Center, the user's Windows account must also have access to the gateway server (even if Azure AD authentication is used). When you use Azure AD, you'll manage Windows Admin Center user and administrator access permissions from the Azure portal, rather than from within the Windows Admin Center UI.
Accessing Windows Admin Center when Azure AD authentication is enabled

Depending on the browser used, some users accessing Windows Admin Center with Azure AD authentication configured will receive an additional prompt from the browser where they need to provide their Windows account credentials for the machine on which Windows Admin Center is installed. After entering that information, the users will get the additional Azure Active Directory authentication prompt, which requires the credentials of an Azure account that has been granted access in the Azure AD application in Azure.

Note: Users whose Windows account has Administrator rights on the gateway machine will not be prompted for the Azure AD authentication.

Configuring Azure Active Directory authentication for Windows Admin Center Preview

Go to Windows Admin Center Settings > Access and use the toggle switch to turn on "Use Azure Active Directory to add a layer of security to the gateway". If you have not registered the gateway to Azure, you will be guided to do that at this time.

By default, all members of the Azure AD tenant have user access to the Windows Admin Center gateway service. Only local administrators on the gateway machine have administrator access to the Windows Admin Center gateway. Note that the rights of local administrators on the gateway machine cannot be restricted - local admins can do anything regardless of whether Azure AD is used for authentication.

If you want to give specific Azure AD users or groups gateway user or gateway administrator access to the Windows Admin Center service, you must do the following:

- Go to your Windows Admin Center Azure AD application in the Azure portal by using the hyperlink provided in Access Settings. Note this hyperlink is only available when Azure Active Directory authentication is enabled.
- You can also find your application in the Azure portal by going to Azure Active Directory > Enterprise applications > All applications and searching WindowsAdminCenter (the Azure AD app will be named WindowsAdminCenter- ).

When you turn on Azure AD authentication, the gateway service restarts and you must refresh your browser. You can update user access for the SME. Users and administrators can view their currently logged-in account, as well as sign out of this Azure AD account, from the Account tab of Windows Admin Center Settings.

Configuring Azure Active Directory authentication for Windows Admin Center

To set up Azure AD authentication, you must first register your gateway with Azure (you only need to do this once for your Windows Admin Center gateway). This step creates an Azure AD application from which you can manage gateway user and gateway administrator access.

If you want to give specific Azure AD users or groups gateway user or gateway administrator access to the Windows Admin Center service, you must do the following:

- Go to your SME Azure AD application in the Azure portal.
- When you click Change access control and then select Azure Active Directory from the Windows Admin Center Access settings, you can use the hyperlink provided in the UI to access your Azure AD application in the Azure portal. This hyperlink is also available in the Access settings after you click save and have selected Azure AD as your access control identity provider.
- You can also find your application in the Azure portal by going to Azure Active Directory > Enterprise applications > All applications and searching SME (the Azure AD app will be named SME- ).
When you save the Azure AD access control in the Change access control pane, the gateway service restarts and you must refresh your browser. You can update user access for the Windows Admin Center. Using the Azure tab of Windows Admin Center general settings, users and administrators can view their currently logged-in account, as well as sign out of this Azure AD account.

Conditional access and multi-factor authentication

One of the benefits of using Azure AD as an additional layer of security to control access to the Windows Admin Center gateway is that you can leverage Azure AD's powerful security features like conditional access and multi-factor authentication. Learn more about configuring conditional access with Azure Active Directory.

Configure single sign-on

Single sign-on when deployed as a Service on Windows Server

When you install Windows Admin Center on Windows 10, it's ready to use single sign-on. If you're going to use Windows Admin Center on Windows Server, however, you need to set up some form of Kerberos delegation in your environment before you can use single sign-on. The delegation configures the gateway computer as trusted to delegate to the target node.

To configure Resource-based constrained delegation in your environment, run the following PowerShell cmdlets. (Be aware that this requires a domain controller running Windows Server 2012 or later.)

$gateway = "WindowsAdminCenterGW" # Machine where Windows Admin Center is installed
$node = "ManagedNode" # Machine that you want to manage
$gatewayObject = Get-ADComputer -Identity $gateway
$nodeObject = Get-ADComputer -Identity $node
Set-ADComputer -Identity $nodeObject -PrincipalsAllowedToDelegateToAccount $gatewayObject

In this example, the Windows Admin Center gateway is installed on server WindowsAdminCenterGW, and the target node name is ManagedNode. To remove this relationship, run the following cmdlet:

Set-ADComputer -Identity $nodeObject -PrincipalsAllowedToDelegateToAccount $null

Role-based access control

Role-based access control enables you to provide users with limited access to the machine instead of making them full local administrators. Read more about role-based access control and the available roles.

Setting up RBAC consists of 2 steps: enabling support on the target computer(s) and assigning users to the relevant roles.

Tip: Make sure you have local administrator privileges on the machines where you are configuring support for role-based access control.

Apply role-based access control to a single machine

The single machine deployment model is ideal for simple environments with only a few computers to manage. Configuring a machine with support for role-based access control will result in the following changes:

- PowerShell modules with functions required by Windows Admin Center will be installed on your system drive, under C:\Program Files\WindowsPowerShell\Modules. All modules will start with Microsoft.Sme
- Desired State Configuration will run a one-time configuration to configure a Just Enough Administration endpoint on the machine, named Microsoft.Sme.PowerShell. This endpoint defines the 3 roles used by Windows Admin Center and will run as a temporary local administrator when a user connects to it.
- 3 new local groups will be created to control which users are assigned access to which roles: - Windows Admin Center Administrators - Windows Admin Center Hyper-V Administrators - Windows Admin Center Readers To enable support for role-based access control on a single machine, follow these steps: - Open Windows Admin Center and connect to the machine you wish to configure with role-based access control using an account with local administrator privileges on the target machine. - On the Overview tool, click Settings > Role-based access control. - Click Apply at the bottom of the page to enable support for role-based access control on the target computer. The application process involves copying PowerShell scripts and invoking a configuration (using PowerShell Desired State Configuration) on the target machine. It may take up to 10 minutes to complete, and will result in WinRM restarting. This will temporarily disconnect Windows Admin Center, PowerShell, and WMI users. - Refresh the page to check the status of role-based access control. When it is ready for use, the status will change to Applied. Once the configuration is applied, you can assign users to the roles: - Open the Local Users and Groups tool and navigate to the Groups tab. - Select the Windows Admin Center Readers group. - In the Details pane at the bottom, click Add User and enter the name of a user or security group which should have read-only access to the server through Windows Admin Center. The users and groups can come from the local machine or your Active Directory domain. - Repeat steps 2-3 for the Windows Admin Center Hyper-V Administrators and Windows Admin Center Administrators groups. You can also fill these groups consistently across your domain by configuring a Group Policy Object with the Restricted Groups Policy Setting. Apply role-based access control to multiple machines In a large enterprise deployment, you can use your existing automation tools to push out the role-based access control feature to your computers by downloading the configuration package from the Windows Admin Center gateway. The configuration package is designed to be used with PowerShell Desired State Configuration, but you can adapt it to work with your preferred automation solution. Download the role-based access control configuration To download the role-based access control configuration package, you'll need to have access to Windows Admin Center and a PowerShell prompt. If you're running the Windows Admin Center gateway in service mode on Windows Server, use the following command to download the configuration package. Be sure to update the gateway address with the correct one for your environment. 
$WindowsAdminCenterGateway = ''
Invoke-RestMethod -Uri "$WindowsAdminCenterGateway/api/nodes/all/features/jea/endpoint/export" -Method POST -UseDefaultCredentials -OutFile "~\Desktop\WindowsAdminCenter_RBAC.zip"

If you're running the Windows Admin Center gateway on your Windows 10 machine, run the following command instead:

$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object Subject -eq 'CN=Windows Admin Center Client' | Select-Object -First 1
Invoke-RestMethod -Uri "" -Method POST -Certificate $cert -OutFile "~\Desktop\WindowsAdminCenter_RBAC.zip"

When you expand the zip archive, you'll see the following folder structure:

- InstallJeaFeatures.ps1
- JustEnoughAdministration (directory)
- Modules (directory)
- Microsoft.SME.* (directories)
- WindowsAdminCenter.Jea (directory)

To configure support for role-based access control on a node, you need to perform the following actions:

- Copy the JustEnoughAdministration, Microsoft.SME.*, and WindowsAdminCenter.Jea modules to the PowerShell module directory on the target machine. Typically, this is located at C:\Program Files\WindowsPowerShell\Modules.
- Update the InstallJeaFeature.ps1 file to match your desired configuration for the RBAC endpoint.
- Run InstallJeaFeature.ps1 to compile the DSC resource.
- Deploy your DSC configuration to all of your machines to apply the configuration. The following section explains how to do this using PowerShell Remoting.

Deploy on multiple machines

To deploy the configuration you downloaded onto multiple machines, you'll need to update the InstallJeaFeatures.ps1 script to include the appropriate security groups for your environment, copy the files to each of your computers, and invoke the configuration scripts. You can use your preferred automation tooling to accomplish this; however, this article will focus on a pure PowerShell-based approach.

By default, the configuration script will create local security groups on the machine to control access to each of the roles. This is suitable for workgroup and domain-joined machines, but if you're deploying in a domain-only environment you may wish to directly associate a domain security group with each role. To update the configuration to use domain security groups, open InstallJeaFeatures.ps1 and make the following changes:

- Remove the 3 Group resources from the file:
  - "Group MS-Readers-Group"
  - "Group MS-Hyper-V-Administrators-Group"
  - "Group MS-Administrators-Group"
- Remove the 3 Group resources from the JeaEndpoint DependsOn property:
  - "[Group]MS-Readers-Group"
  - "[Group]MS-Hyper-V-Administrators-Group"
  - "[Group]MS-Administrators-Group"
- Change the group names in the JeaEndpoint RoleDefinitions property to your desired security groups. For example, if you have a security group CONTOSO\MyTrustedAdmins that should be assigned access to the Windows Admin Center Administrators role, change '$env:COMPUTERNAME\Windows Admin Center Administrators' to 'CONTOSO\MyTrustedAdmins'. The three strings you need to update are:
  - '$env:COMPUTERNAME\Windows Admin Center Administrators'
  - '$env:COMPUTERNAME\Windows Admin Center Hyper-V Administrators'
  - '$env:COMPUTERNAME\Windows Admin Center Readers'

Note: Be sure to use unique security groups for each role. Configuration will fail if the same security group is assigned to multiple roles.
Next, at the end of the InstallJeaFeatures.ps1 file, add the following lines of PowerShell to the bottom of the script:

Copy-Item "$PSScriptRoot\JustEnoughAdministration" "$env:ProgramFiles\WindowsPowerShell\Modules" -Recurse -Force

$ConfigData = @{
    AllNodes = @()
    ModuleBasePath = @{
        Source = "$PSScriptRoot\Modules"
        Destination = "$env:ProgramFiles\WindowsPowerShell\Modules"
    }
}

InstallJeaFeature -ConfigurationData $ConfigData | Out-Null
Start-DscConfiguration -Path "$PSScriptRoot\InstallJeaFeature" -JobName "Installing JEA for Windows Admin Center" -Force

Finally, you can copy the folder containing the modules, DSC resource, and configuration to each target node and run the InstallJeaFeature.ps1 script. To do this remotely from your admin workstation, you can run the following commands:

$ComputersToConfigure = 'MyServer01', 'MyServer02'

$ComputersToConfigure | ForEach-Object {
    $session = New-PSSession -ComputerName $_ -ErrorAction Stop
    Copy-Item -Path "~\Desktop\WindowsAdminCenter_RBAC\JustEnoughAdministration\" -Destination "$env:ProgramFiles\WindowsPowerShell\Modules\" -ToSession $session -Recurse -Force
    Copy-Item -Path "~\Desktop\WindowsAdminCenter_RBAC" -Destination "$env:TEMP\WindowsAdminCenter_RBAC" -ToSession $session -Recurse -Force
    Invoke-Command -Session $session -ScriptBlock { Import-Module JustEnoughAdministration; & "$env:TEMP\WindowsAdminCenter_RBAC\InstallJeaFeature.ps1" } -AsJob
    Disconnect-PSSession $session
}
https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/configure/user-access-control
2019-11-12T04:31:11
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com