content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Installation¶ Using a Package Manager¶ A package manager (conda, apt, yum, MacPorts, etc) should generally be your first stop for installing Toyplot - it will make it easy to install Toyplot and its dependencies, keep them up-to-date, and even (gasp!) uninstall them cleanly. If your package manager doesn’t support Toyplot yet, drop them a line and let them know you’d like them to add it! If you’re new to Python or unsure where to start, we strongly recommend taking a look at Anaconda, which the Toyplot developers use during their day-to-day work. Using Pip / Easy Install¶ If your package manager doesn’t support Toyplot, or doesn’t have the latest version, your next option should be Python setup tools like pip. You can always install the latest stable version of toyplot and its required dependencies using: $ pip install toyplot … following that, you’ll be able to use all of Toyplot’s features, and export figures to all of Toyplot’s preferred file formats, including HTML, SVG, and PDF. For export to other formats like PNG or MP4, you’ll have to install additional resources listed in the Dependencies section of the manual. From Source¶ Finally, if you want to work with the latest, bleeding-edge Toyplot goodness, you can install it using the source code: $ git clone $ cd toyplot $ sudo python setup.py install The setup script installs Toyplot’s required dependencies and copies Toyplot into your Python site-packages directory, ready to go. Once again, export to other formats like PNG or MP4, wil require additional resources listed in Dependencies.
https://toyplot.readthedocs.io/en/latest/installation.html
2020-07-02T18:23:41
CC-MAIN-2020-29
1593655879738.16
[array(['_images/toyplot.png', '_images/toyplot.png'], dtype=object)]
toyplot.readthedocs.io
The supported method of installing specific versions of some formulae is to see if there is a versioned formula (e.g. gcc@7) available. If the version you’re looking for isn’t available, consider using brew extract. /usr/local brew unlink <formula> This can be useful if a package can’t build against the version of something you have linked into /usr/local. And of course, you can simply brew link <formula> again afterwards! brew info <formula> brew switch <formula> <version> Use brew info <formula> to check what versions are installed but not currently activated, then brew switch <formula> <version> to activate the desired version. This can be useful if you would like to switch between versions of a formula. ./configure --prefix=/usr/local/Cellar/foo/1.2 && make && make install && brew link foo Sometimes it’s faster to download a file via means other than those strategies that are available as part of Homebrew. For example, Erlang provides a torrent that’ll let you download at 4–5× the normal HTTP method. Download the file and drop it in ~/Library/Caches/Homebrew, but watch the file name. Homebrew downloads files as <formula>-<version>. In the case of Erlang, this requires renaming the file from otp_src_R13B03 to erlang-R13B03. brew --cache -s erlang will print the correct name of the cached download. This means instead of manually renaming a formula, you can run mv the_tarball $(brew --cache -s <formula>). You can also pre-cache the download by using the command brew fetch <formula> which also displays the SHA-256 hash. This can be useful for updating formulae to new versions. brew sh # or: eval $(brew --env) gem install ronn # or c-programs This imports the brew environment into your existing shell; gem will pick up the environment variables and be able to build. As a bonus brew’s automatically determined optimization flags are set. brew install --only-dependencies <formula> $ brew irb 1.8.7 :001 > Formula.factory("ace").methods - Object.methods => [:install, :path, :homepage, :downloader, :stable, :bottle, :devel, :head, :active_spec, :buildpath, :ensure_specs_set, :url, :version, :specs, :mirrors, :installed?, :explicitly_requested?, :linked_keg, :installed_prefix, :prefix, :rack, :bin, :doc, :include, :info, :lib, :libexec, :man, :man1, :man2, :man3, :man4, :man5, :man6, :man7, :man8, :sbin, :share, :etc, :var, :plist_name, :plist_path, :download_strategy, :cached_download, :caveats, :options, :patches, :keg_only?, :fails_with?, :skip_clean?, :brew, :std_cmake_args, :deps, :external_deps, :recursive_deps, :system, :fetch, :verify_download_integrity, :fails_with_llvm, :fails_with_llvm?, :std_cmake_parameters, :mkdir, :mktemp] 1.8.7 :002 > export HOMEBREW_NO_EMOJI=1 This sets the HOMEBREW_NO_EMOJI environment variable, causing Homebrew to hide all emoji. The beer emoji can also be replaced with other character(s): export HOMEBREW_INSTALL_BADGE="☕️ 🐸" In Sublime Text 2/3, you can use Package Control to install Homebrew-formula-syntax, which adds highlighting for inline patches. brew.vim adds highlighting to inline patches in Vim. homebrew-mode provides syntax highlighting for inline patches as well as a number of helper functions for editing formula files. pcmpl-homebrew provides completion for emacs shell-mode and eshell-mode. language-homebrew-formula adds highlighting and diff support (with the language-diff plugin).
https://docs.brew.sh/Tips-N%27-Tricks.html
2020-07-02T19:32:56
CC-MAIN-2020-29
1593655879738.16
[]
docs.brew.sh
Expressions And Operators: New The new operator allocates memory for an object that is an instance of the specified class. The object is initialized by calling the class's constructor passing it the optional argument list, just like a function call. If the class has no constructor, the constructor that class inherits (if any) is used. For example: } public function __toString(): string { // instance method return '('.$this->x.','.$this->y.')'; } // ... } <<__EntryPoint>> function main(): void { $p1 = new Point(); // create Point(0.0, 0.0) /* HH_FIXME[4067] implicit __toString() is now deprecated */ echo "\$p1 is $p1\n"; $p2 = new Point(12.3); // create Point(12.3, 0.0) /* HH_FIXME[4067] implicit __toString() is now deprecated */ echo "\$p2 is $p2\n"; $p3 = new Point(5, 6.7); // create Point(5.0, 6.7) /* HH_FIXME[4067] implicit __toString() is now deprecated */ echo "\$p3 is $p3\n"; } $p1 is (0,0) $p2 is (12.3,0) $p3 is (5,6.7) The result is an object of the type specified. The new operator may also be used to allocate memory for an instance of a classname type; for example: final class C { ... } function f(classname<C> $clsname): void { $w = new $clsname(); ... } Any one of the keywords parent, self, and static can be used between the new and the constructor call, as follows. From within a method, the use of static corresponds to the class in the inheritance context in which the method is called. The type of the object created by an expression of the form new static is this. See scope resolution for a discussion of parent, self, and static in this context.
https://docs.hhvm.com/hack/expressions-and-operators/new
2020-07-02T19:44:32
CC-MAIN-2020-29
1593655879738.16
[]
docs.hhvm.com
Sharing a Wish List Customers can manage their wish lists from the dashboard of their accounts. Store administrators can also help customers manage their wish lists from the Admin. Customer Dashboard with Wish List Share Your Wish List In the left panel of your customer account dashboard, choose My Wish List. To add a comment to an item, hover over the image and enter your Comment in the box. To share your wish list, do the following: Click Share My Wish List. Enter the email address of each recipient, separated by a comma. Enter a Message for the body of the email. When you are ready to send the message, click Share Wish List. Customer Dashboard with Wish List Transfer an Item to Your Cart To add a single item, do the following: Hover over the item, and enter the Qty that you want to add to the cart. Click Add to Cart. To transfer all wish list items to the cart, click Add All to Cart.
https://docs.magento.com/user-guide/marketing/wishlist-share.html
2020-07-02T17:59:53
CC-MAIN-2020-29
1593655879738.16
[]
docs.magento.com
The "xzip_extract" Function SyntaxSyntax DescriptionDescription Extracts one or more files from an archive created with xzip_create to the destination folder. Also returns true or false to indicate if the operation succeeded.. If a folder is input, all files from the folder will be extracted, preserving the relative path. Be warned that extraction takes time, and extracting many files at once can cause the game to temporarily appear frozen. It is recommended to extract large archives over a series of Steps and display a loading screen. (See example usage.) tip It is recommended to disable the filesystem sandbox for this script. If the sandbox is enabled, archives can only be created and extracted in working_directory.
https://docs.xgasoft.com/xzip/reference-guide/xzip_extract/
2020-07-02T19:58:41
CC-MAIN-2020-29
1593655879738.16
[]
docs.xgasoft.com
Add). - After you type the main part of a phone number in a phone number field, press the key. - Click Add Pause or Add Wait. - Type the additional numbers. - Press the key > Save. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/38326/1579804.jsp
2015-04-18T07:34:55
CC-MAIN-2015-18
1429246633972.52
[]
docs.blackberry.com
Compute the n-dimensional inverse fft of a real array. Notes The transform implemented in ifftn is applied along all axes but the last, then the transform implemented in irfft is performed along the last axis. As with irfft, the length of the result along that axis must be specified if it is to be odd.
http://docs.scipy.org/doc/numpy-1.3.x/reference/generated/numpy.fft.irfftn.html
2015-04-18T07:22:12
CC-MAIN-2015-18
1429246633972.52
[]
docs.scipy.org
Difference between revisions of "Menus Menu Item Contact Category" From Joomla! Documentation Latest revision as of 18:46, 6 June 2011 Contact Category Layout Used to show all of the published Contacts in a given Category. Note that Contact Categories are separate from Article Categories. Contacts and Contact Categories are entered by selecting Components/Contacts. See Contact Manager and Category Manager for more information. Parameters - Basic The Contact Category Layout has the following Basic Parameters, as shown below. - Category. Category selected for this Layout. - # Links. Number of contacts to show. - Contact Image. Image for this Page. Image must be located in the folder "images/stories". - Image Align. Align the image on the left or right side of the page. - Limit Box. Hide or Show the Limit Box, shown below. - Show a Feed Link. Hide or Show an RSS Feed Link. (A Feed Link will show up as a feed icon in the address bar of most modern browsers).
https://docs.joomla.org/index.php?title=Help17:Menus_Menu_Item_Contact_Category&diff=prev&oldid=59351
2015-04-18T08:35:18
CC-MAIN-2015-18
1429246633972.52
[]
docs.joomla.org
Revision history of "JDatabaseMySQLExporter::getColumns::getColumns/11.1 to API17:JDatabaseMySQLExporter::getColumns without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JDatabaseMySQLExporter::getColumns/11.1&action=history
2015-04-18T07:50:50
CC-MAIN-2015-18
1429246633972.52
[]
docs.joomla.org
Revision history of "JSimpleCrypt::decrypt/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 03:18, 7 May 2013 Wilsonge (Talk | contribs) deleted page JSimpleCrypt::decrypt/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JSimpleCrypt::decrypt== ===Description=== {{Description:JSimpleCrypt::decrypt}} <span class="editsection" style="font-size:76%;"> <nowi..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JSimpleCrypt::decrypt/1.6&action=history
2015-04-18T07:40:59
CC-MAIN-2015-18
1429246633972.52
[]
docs.joomla.org
All face recognition models in OpenCV are derived from the abstract base class FaceRecognizer, which provides a unified access to all face recongition algorithms in OpenCV. class FaceRecognizer : public Algorithm { public: //! virtual destructor virtual ~FaceRecognizer() {} // Trains a FaceRecognizer. virtual void train(InputArray src, InputArray labels) = 0; // Updates a FaceRecognizer. virtual void update(InputArrayOfArrays src, InputArray labels); // Gets a prediction from a FaceRecognizer. virtual int predict(InputArray src) const = 0; // Predicts the label and confidence for a given sample. virtual void predict(InputArray src, int &label, double &confidence) const = 0; // Serializes this object to a given filename. virtual void save(const string& filename) const; // Deserializes this object from a given filename. virtual void load(const string& filename); // Serializes this object to a given cv::FileStorage. virtual void save(FileStorage& fs) const = 0; // Deserializes this object from a given cv::FileStorage. virtual void load(const FileStorage& fs) = 0; // Sets additional information as pairs label - info. void setLabelsInfo(const std::map<int, string>& labelsInfo); // Gets string information by label string getLabelInfo(const int &label); // Gets labels by string vector<int> getLabelsByString(const string& str); };: Note When using the FaceRecognizer interface in combination with Python, please stick to Python 2. Some underlying scripts like create_csv will not work in other versions, like Python 3.: // Let's say we want to keep 10 Eigenfaces and have a threshold value of 10.0 int num_components = 10; double threshold = 10.0; // Then if you want to have a cv::FaceRecognizer with a confidence threshold, // create the concrete implementation with the appropiate parameters: Ptr<FaceRecognizer> model = createEigenFaceRecognizer(num_components, threshold); Sometimes it’s impossible to train the model, just to experiment with threshold values. Thanks to Algorithm it’s possible to set internal model thresholds during runtime. Let’s see how we would set/get the prediction for the Eigenface model, we’ve created above: // The following line reads the threshold from the Eigenfaces model: double current_threshold = model->getDouble("threshold"); // And this line sets the threshold to 0.0: model->set("threshold", 0.0); If you’ve set the threshold to 0.0 as we did above, then: // Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE); // Get a prediction from the model. Note: We've set a threshold of 0.0 above, // since the distance is almost always larger than 0.0, you'll get -1 as // label, which indicates, this face is unknown int predicted_label = model->predict(img); // ... is going to yield -1 as predicted label, which states this face is unknown. Since every FaceRecognizer is a Algorithm, you can use Algorithm::name() to get the name of a FaceRecognizer: // Create a FaceRecognizer: Ptr<FaceRecognizer> model = createEigenFaceRecognizer(); // And here's how to get its name: std::string name = model->name();: // holds images and labels vector<Mat> images; vector<int> labels; // images for first person images.push_back(imread("person0/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0); images.push_back(imread("person0/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0); images.push_back(imread("person0/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0); // images for second person images.push_back(imread("person1/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1); images.push_back(imread("person1/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1); images.push_back(imread("person1/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1); Now that you have read some images, we can create a new FaceRecognizer. In this example I’ll create a Fisherfaces model and decide to keep all of the possible Fisherfaces: // Create a new Fisherfaces model and retain all available Fisherfaces, // this is the most common usage of this specific FaceRecognizer: // Ptr<FaceRecognizer> model = createFisherFaceRecognizer(); And finally train it on the given dataset (the face images and labels): // This is the common interface to train all of the available cv::FaceRecognizer // implementations: // model->train(images,. // Create a new LBPH model (it can be updated) and use the default parameters, // this is the most common usage of this specific FaceRecognizer: // Ptr<FaceRecognizer> model = createLBPHFaceRecognizer(); // This is the common interface to train all of the available cv::FaceRecognizer // implementations: // model->train(images, labels); // Some containers to hold new image: vector<Mat> newImages; vector<int> newLabels; // You should add some images to the containers: // // ... // // Now updating the model is as easy as calling: model->update(newImages,newLabels); // This will preserve the old model data and extend the existing model // with the new features extracted from newImages! Calling update on an Eigenfaces model (see createEigenFaceRecognizer()), which doesn’t support updating, will throw an error similar to: OpenCV Error: The function/feature is not implemented (This FaceRecognizer (FaceRecognizer.Eigenfaces) does not support updating, you have to use FaceRecognizer::train to update it.) in update, file /home/philipp/git/opencv/modules/contrib/src/facerec.cpp, line 305 terminate called after throwing an instance of 'cv::Exception' Please note: The FaceRecognizer does not store your training images, because this would be very memory intense and it’s not the responsibility of te FaceRecognizer to do so. The caller is responsible for maintaining the dataset, he want to work with.: using namespace cv; // Do your initialization here (create the cv::FaceRecognizer model) ... // ... // Read in a sample image: Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE); // And get a prediction from the cv::FaceRecognizer: int predicted = model->predict(img); Or to get a prediction and the associated confidence (e.g. distance): using namespace cv; // Do your initialization here (create the cv::FaceRecognizer model) ... // ... Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE); // Some variables for the predicted label and associated confidence (e.g. distance): int predicted_label = -1; double predicted_confidence = 0.0; // Get the prediction and associated confidence from the model model->predict(img, predicted_label, predicted_confidence); Saves a FaceRecognizer and its model state. Saves this model to a given filename, either as XML or YAML. Saves this model to a given FileStorage.. Sets string information about labels into the model. .. ocv:function:: void FaceRecognizer::setLabelsInfo(const std::map<int, string>& labelsInfo) Information about the label loads as a pair “label id - string info”. Gets string information by label. .. ocv:function:: string FaceRecognizer::getLabelInfo(const int &label) If an unknown label id is provided or there is no label information assosiated with the specified label id the method returns an empty string. Gets vector of labels by string. The function searches for the labels containing the specified substring in the associated string info.
http://docs.opencv.org/modules/contrib/doc/facerec/facerec_api.html?highlight=face%2520recognition
2015-04-18T07:14:13
CC-MAIN-2015-18
1429246633972.52
[]
docs.opencv.org
Installation in, Postgresql and MS SqlServer 2005. It may work on other versions. To use one of the external database,. Mode 2 - Deploy on Tomcat application server Sonar can be packaged as a WAR then deployed into an existing JEE application server. To use this method of installation, you must already know how to deploy a web application on the application server of choice. The only supported JEE application servers are Tomcat 5.x, 6.x and To increase memory heap size, set the CATALINA_OPTS variable before starting Tomcat :More details on this blog. Mode 3 - Run as a service on MS Windows Install/uninstall NT service : Start/stop the service : Mode 4 - Run as a service on Linux The following has been tested on Ubuntu 8.10. you are using a Virtual Host for and that Sonar is running and available on. At this point, edit the HTTPd configuration file for the virtual host. Include the following to expose Sonar via mod_proxy at : By default, mod_proxy uses HTTP protocol to communicate with the Sonar instance. For performance concerns, you might prefer using the AJP13 protocol. This protocol is packet-oriented. A binary format is chosen over the more readable plain text for reasons of performance. To cut down on the expensive process of socket creation, the web server will attempt to maintain persistent TCP connections to the servlet container, and to reuse a connection for multiple request/response cycles. If you want to use this AJP13 protocol you must to activate the mod_proxy_ajp module and then edit the sonar.properties configuration file and uncomment the sonar.ajp13.port property : Once this done, edit the HTTPd configuration file for the virtual host and make the following changes : Apache configuration is going to vary based on your own application's requirements and the way you intend to expose Sonar to the outside world. If you need more details about Apache HTTPd, mod_proxy and mod_proxy_ajp,.
http://docs.codehaus.org/pages/viewpage.action?pageId=198049818
2015-04-18T07:36:09
CC-MAIN-2015-18
1429246633972.52
[]
docs.codehaus.org
Moves the rigidbody to position. Use Rigidbody.MovePosition to move a Rigidbody, complying with the Rigidbody's interpolation setting. If Rigidbody interpolation is enabled on the Rigidbody, calling Rigidbody.MovePosition results in a smooth transition between the two positions in any intermediate frames rendered. This should be used if you want to continuously move a rigidbody in each FixedUpdate. Set Rigidbody.position instead, if you want to teleport a rigidbody from one position to another, with no intermediate positions being rendered. var teleportPoint: Vector3; var rb: Rigidbody; function Start() { rb = GetComponent.<Rigidbody>(); } function FixedUpdate () { rb.MovePosition(transform.position + transform.forward * Time.deltaTime); } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Vector3 teleportPoint; public Rigidbody rb; void Start() { rb = GetComponent<Rigidbody>(); } void FixedUpdate() { rb.MovePosition(transform.position + transform.forward * Time.deltaTime); } }
http://docs.unity3d.com/ScriptReference/Rigidbody.MovePosition.html
2015-04-18T07:13:04
CC-MAIN-2015-18
1429246633972.52
[]
docs.unity3d.com
Hibernate.orgCommunity Documentation Hibernate uses a powerful query language (HQL) that is similar in appearance to SQL. Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance, polymorphism and association. With the exception of names of Java classes and properties, queries are case-insensitive. So SeLeCT is the same as sELEct is the same as SELECT, but org.hibernate.eg.FOO is not org.hibernate.eg.Foo, and foo.barSet is not foo.BARSET. This manual uses lowercase HQL keywords. Some users find queries with uppercase keywords more readable, but this convention is unsuitable for queries embedded in Java code. The simplest possible Hibernate query is of the form: from eg.Cat This returns all instances of the class eg.Cat. You do not usually need to qualify the class name, since auto-import is the default. For example: from Cat). You can also assign aliases to associated entities or to elements of a collection of values using a join. For example: from Cat as cat inner join cat.mate as mate left outer join cat.kittens as kitten from Cat as cat join cat.mate as mate left join cat.kittens as kitten You may supply extra join conditions using the HQL with keyword. from Cat as cat left join cat.kittens as kitten with kitten.bodyWeight > 10.0 19.1, “Fetching strategies” for more information. from Cat as cat inner join fetch cat.mate left join fetch cat.kittens The fetch construct cannot be used in queries called using iterate() (though scroll() can be used). Fetch should be used together with setMaxResults() or setFirstResult(), as these operations are based on the result rows which usually contain duplicates for eager collection fetching, hence, the number of rows is not what you would expect. Fetch should also not be used together with impromptu with condition. It is possible to create a cartesian product by join fetching more than one collection in a query, so take care in this case. Join fetching multiple collection roles can produce unexpected results for bag mappings, so user discretion is advised when formulating queries in this case. Finally, note that full join fetch and right join fetch are not meaningful. If you are using property-level lazy fetching (with bytecode instrumentation), it is possible to force Hibernate to fetch the lazy properties in the first query immediately using fetch all properties. from Document fetch all properties order by name from Document doc fetch all properties where lower(doc.name) like '%cats%'. from Cat as cat where cat.mate.name like '%s%' There are 2 ways to refer to an entity's identifier property: The special property (lowercase) id may be used to reference the identifier property of an entity provided that the entity does not define a non-identifier property named id. If the entity defines a named identifier property, you can use that property name. References to composite identifier properties follow the same naming rules. If the entity has a non-identifier property named id, the composite identifier property can only be referenced by its defined named. Otherwise, the special id property can be used to reference the identifier property. Please note that, starting in version 3.2.2, this has changed significantly. In previous versions, id always referred to the identifier property regardless of its actual name. A ramification of that decision was that non-identifier properties named id could never be referenced in Hibernate queries. The select clause picks which objects and properties to return in the query result set. Consider the following: select mate from Cat as cat inner join cat.mate as mate The query will select mates of other Cats. You can express this query more compactly as: select cat.mate from Cat cat Queries can return properties of any value type including properties of component type: select cat.name from DomesticCat cat where cat.name like 'fri%' select cust.name.firstName from Customer as cust Queries can return multiple objects and/or properties as an array of type Object[]: select mother, offspr, mate.name from DomesticCat as mother inner join mother.mate as mate left outer join mother.kittens as offspr Or as a List: select new list(mother, offspr, mate.name) from DomesticCat as mother inner join mother.mate as mate left outer join mother.kittens as offspr Or - assuming that the class Family has an appropriate constructor - as an actual typesafe Java object: select new Family(mother, mate, offspr) from DomesticCat as mother join mother.mate as mate left join mother.kittens as offspr You can assign aliases to selected expressions using as: select max(bodyWeight) as max, min(bodyWeight) as min, count(*) as n from Cat cat This is most useful when used together with select new map: select new map( max(bodyWeight) as max, min(bodyWeight) as min, count(*) as n ) from Cat cat This query returns a Map from aliases to selected values. HQL queries can even return the results of aggregate functions on properties: select avg(cat.weight), sum(cat.weight), max(cat.weight), count(cat) from Cat cat The supported aggregate functions are: avg(...), sum(...), min(...), max(...) count(*) count(...), count(distinct ...), count(all...) You can use arithmetic operators, concatenation, and recognized SQL functions in the select clause: select cat.weight + sum(kitten.weight) from Cat cat join cat.kittens kitten group by cat.id, cat.weight select firstName||' '||initial||' '||upper(lastName) from Person The distinct and all keywords can be used and have the same semantics as in SQL. select distinct cat.name from Cat cat select count(distinct cat.name), count(cat) from Cat cat A query like: from Cat as cat returns instances not only of Cat, but also of subclasses like DomesticCat. Hibernate queries can name any Java class or interface in the from clause. The query will return instances of all persistent classes that extend that class or implement the interface. The following query would return all persistent objects: from java.lang.Object o The interface Named might be implemented by various persistent classes: from Named n, Named m where n.name = m.name These last two queries will require more than one SQL SELECT. This means that the order by clause does not correctly order the whole result set. It also means you cannot call these queries using Query.scroll(). The where clause allows you to refine the list of instances returned. If no alias exists, you can refer to properties by name: from Cat where name='Fritz' If there is an alias, use a qualified property name: from Cat as cat where cat.name='Fritz' This returns instances of Cat named 'Fritz'. The following query: select foo from Foo foo, Bar bar where foo.startDate = bar.date returns all instances of Foo with an instance of bar with a date property equal to the startDate property of the Foo. Compound path expressions make the where clause extremely powerful. Consider the following: from Cat cat where cat.mate.name is not null This query translates to an SQL query with a table (inner) join. For example: from Foo foo where foo.bar.baz.customer.address.city is not null would result in a query that would require four table joins in SQL. The = operator can be used to compare not only properties, but also instances: from Cat cat, Cat rival where cat.mate = rival.mate select cat, mate from Cat cat, Cat mate where cat.mate = mate The special property (lowercase) id can be used to reference the unique identifier of an object. See Section 14.5, “Referring to identifier property” for more information. from Cat as cat where cat.id = 123 from Cat as cat where cat.mate.id = 69 The second query is efficient and does not require a table join. Properties of composite identifiers can also be used. Consider the following example where Person has composite identifiers consisting of country and medicareNumber: from bank.Person person where person.id.country = 'AU' and person.id.medicareNumber = 123456 from bank.Account account where account.owner.id.country = 'AU' and account.owner.id.medicareNumber = 123456 Once again, the second query does not require a table join. See Section 14.5, “Referring to identifier property” for more information regarding referencing identifier properties) The special property class accesses the discriminator value of an instance in the case of polymorphic persistence. A Java class name embedded in the where clause will be translated to its discriminator value. from Cat cat where cat.class = DomesticCat You can also use components or composite user types, or properties of said component types. See Section 14.17, “Components” for more information. An "any" type has the special properties id and class that allows you to express a join in the following way (where AuditLog.item is a property mapped with <any>): from AuditLog log, Payment payment where log.item.class = 'Payment' and log.item.id = payment.id The log.item.class and payment.class would refer to the values of completely different database columns in the above query. Expressions used in the where clause include the following: mathematical operators: +, -, *, / binary comparison operators: =, >=, <=, <>, !=, like HQL functions that take collection-valued path expressions: size(), minelement(), maxelement(), minindex(), maxindex(), along with the special elements() and indices functions that can be quantified using some, all, exists, any, in. Any database-supported SQL scalar function like sign(), trunc(), rtrim(), and sin() JDBC-style positional parameters ? named parameters :name, :start_date, and :x1 SQL literals 'foo', 69, 6.66E+2, '1970-01-01 10:00:01.0' Java public static final constants eg.Color.TABBY in and between can be used as follows: from DomesticCat cat where cat.name between 'A' and 'B' from DomesticCat cat where cat.name in ( 'Foo', 'Bar', 'Baz' ) The negated forms can be written as follows: from DomesticCat cat where cat.name not between 'A' and 'B' from DomesticCat cat where cat.name not in ( 'Foo', 'Bar', 'Baz' ) Similarly, is null and is not null can be used to test for null values. Booleans can be easily used in expressions by declaring HQL query substitutions in Hibernate configuration: <property name="hibernate.query.substitutions">true 1, false 0</property> This will replace the keywords true and false with the literals 1 and 0 in the translated SQL from this HQL: from Cat cat where cat.alive = true You can test the size of a collection with the special property size or the special size() function. from Cat cat where cat.kittens.size > 0 from Cat cat where size(cat.kittens) > 0 For indexed collections, you can refer to the minimum and maximum indices using minindex and maxindex functions. Similarly, you can refer to the minimum and maximum elements of a collection of basic type using the minelement and maxelement functions. For example: from Calendar cal where maxelement(cal.holidays) > current_date Cat as mother, Cat as kit where kit in elements(foo.kittens) select p from NameList list, Person p where p.name = some elements(list.names) from Cat cat where exists elements(cat.kittens) from Player p where 3 > all elements(p.scores) from Show show where 'fizard' in indices(show.acts) Note that these constructs - size, elements, indices, minindex, maxindex, minelement, maxelement - can only be used in the where clause in Hibernate3. Elements of indexed collections (arrays, lists, and maps) can [] can can be used: from DomesticCat cat where upper(cat.name) like 'FRI%' Consider can be ordered by any property of a returned class or components: from DomesticCat cat order by cat.name asc, cat.weight desc, cat.birthdate The optional asc or desc indicate ascending or descending order respectively. A query that returns aggregate values can they are supported by the underlying database (i.e., not in MySQL). select cat from Cat cat join cat.kittens kitten group by cat.id, cat.name, cat.other, cat.properties having avg(kitten.weight) > 100 order by count(kitten) asc, sum(kitten.weight) desc Neither the group by clause nor the order by clause can contain arithmetic expressions. Hibernate also does not currently expand a grouped entity, so you cannot write group by cat if all properties of cat are non-aggregated. You have to list all non-aggregated properties explicitly.. from Cat as fatcat where fatcat.weight > ( select avg(cat.weight) from DomesticCat cat ) from DomesticCat as cat where cat.name = some ( select name.nickName from Name as name ) from Cat as cat where not exists ( from Cat as mate where mate.mate = cat ) from DomesticCat as cat where cat.name not in ( select name.nickName from Name as name ) select cat.id, (select max(kit.weight) from cat.kitten kit) from Cat as cat Note that HQL subqueries can occur only in the select or where clauses. Note that subqueries can also utilize row value constructor syntax. See Section 14.18, “Row value constructor syntax” for more information. Hibernate queries can be quite powerful and complex. In fact, the power of the query language is one of Hibernate's main strengths. The following example queries are similar to queries that have been used on recent projects. Please note that most queries you will write will be much simpler than the following examples. The following query returns the order id, number of items, the given minimum total value and the total value of the order for all unpaid orders for a particular customer. The results are ordered the statusChanges collection was mapped HQL now supports update, delete and insert ... select ... statements. See Section.name join usr.messages msg group by usr.id, usr.name having count(msg) >= 1 As this solution cannot: Query q = s.createFilter( collection, "" ); // the trivial filter q.setMaxResults(PAGE_SIZE); q.setFirstResult(PAGE_SIZE * pageNumber); List page = q.list(); Collection elements can be ordered or grouped using a query filter: Collection orderedCollection = s.filter( collection, "order by this.amount" ); Collection counts = s.filter( collection, "select this.type, count(this) group by this.type" ); You can find the size of a collection without initializing it: ( (Integer) session.createQuery("select count(*) from ....").iterate().next() ).intValue(); Components can be used similarly to the simple value types that are used in HQL queries. They can appear in the select clause as follows: select p.name from Person p select p.name.first from Person p where the Person's name property is a component. Components can also be used in the where clause: from Person p where p.name = :name from Person p where p.name.first = :firstName Components can also be used in the order by clause: from Person p order by p.name from Person p order by p.name.first Another common use of components is in row value constructors. HQL supports the use of ANSI SQL row value constructor syntax, sometimes referred to AS tuple syntax, even though the underlying database may not support that notion. Here, we are generally referring to multi-valued comparisons, typically associated with components. Consider an entity Person which defines a name component: from Person p where p.name.first='John' and p.name.last='Jingleheimer-Schmidt' That is valid syntax although it is a little verbose. You can make this more concise by using row value constructor syntax: from Person p where p.name=('John', 'Jingleheimer-Schmidt') It can also be useful to specify this in the select clause: select p.name from Person p Using row value constructor syntax can also be beneficial when using subqueries that need to compare against multiple values: from Cat as cat where not ( cat.name, cat.color ) in ( select cat.name, cat.color from DomesticCat cat ) One thing to consider when deciding if you want to use this syntax, is that the query will be dependent upon the ordering of the component sub-properties in the metadata.
https://docs.jboss.org/hibernate/core/3.3/reference/en/html/queryhql.html
2015-04-18T07:16:39
CC-MAIN-2015-18
1429246633972.52
[]
docs.jboss.org
Revision history of "Subpackage Registry" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 08:12, 20 June 2013 Wilsonge (Talk | contribs) deleted page Subpackage Registry (content was: "{{Description:Subpackage Registry}} This subpackage is available in the following Joomla versions:- <splist showpath=notparent /> <noinclude>Category:Subpackag..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=Subpackage_Registry&action=history
2015-04-18T08:35:56
CC-MAIN-2015-18
1429246633972.52
[]
docs.joomla.org
Nomenclature and concepts Deb Deb files have a "control" file with a set of fields that contain the dependency data: - Depends - States a required dependency on another plugin with optional version specification - Recommends - State recommended depenendecy - Suggests - States a suggested dependency - Pre-Depends - States a dependency that has to be configure before installation - Build-Depends - Not relevant - Build-Depends-Indep - Not relevant Each field list a set of packages separated with comma (","). Where a dependency is either one of more packages, each option is separated with a bar ("|"). Virtual packages". Replacement Deb also has a "Replaces" flag with two purposes: - "this package replace files in package foo" - "this package replaces this package entirely" References Rpm Reference: - Requires, which tracks the capabilities a package requires - Provides, which tracks the capabilities a package provides for other packages - Conflicts, which describes the capabilities that if installed, conflict with capabilities in a package - Obsoletes, which describes the capabilities that this package will make obsolete Possible mappings provides defines a virtual package which is provided by the package being built requires identifies a package that is required to be installed for the package being built to operate correctly conflicts
http://docs.codehaus.org/display/MOJO/Notes+on+package+dependencies
2015-04-18T07:24:49
CC-MAIN-2015-18
1429246633972.52
[]
docs.codehaus.org
- Reference > - mongo Shell Methods > - Database Methods > - db.fsyncUnlock() db.fsyncUnlock()¶ Definition¶ - db.fsyncUnlock()¶ Unlocks a mongod instance to allow writes and reverses the operation of a db.fsyncLock() operation. Typically you will use db.fsyncUnlock() following a database backup operation. db.fsyncUnlock() is an administrative operation. Wired Tiger Compatibility¶ With WiredTiger the db.fsyncLock() and db.fsyncUnlock() operations cannot guarantee that the data files do not change. As a result, do not use these methods to ensure consistency for the purposes of creating backups.
http://docs.mongodb.org/manual/reference/method/db.fsyncUnlock/
2015-04-18T07:12:02
CC-MAIN-2015-18
1429246633972.52
[]
docs.mongodb.org
. Contents - Aims the overall aims of the language - Use Cases a collection of use cases to shape the syntax - Specification contains the current version of the specification (things we've agreed on) - Discussions things we're still considering describing issues, options to decide or things to design Other resources - Useful Links - Other languages - People add your own page here to describe ideas or thoughts you have
http://docs.codehaus.org/pages/viewpage.action?pageId=7065
2015-04-18T07:25:01
CC-MAIN-2015-18
1429246633972.52
[]
docs.codehaus.org
Description¶ A git alias, also known as a shortcut, creates short commands mapping the longer ones. It demands fewer keystrokes to run a command, which simplifies the developers’ work. Direct git alias command doesn’t exist. This type of command is created through the git config command and the git configuration files. Aliases can be generated in a local or a global scope with other configuration values. Creating Aliases with Git¶ There are two ways of creating Git aliases: using the git config and directly editing the .gitconfig file. Creating Git aliases with git config command¶ In order to create Git aliases with the git config command, follow the steps below: - To create a git alias you have to edit your .gitconfig file in the user directory, in order to make these aliases accessible for all the projects. - Run the git-config command and define the alias. git config --global alias.c commit - After this the line below will be added to the /.gitconfig file. Make sure it was saved. git config --list - Then the alias will be visible. c = commit - Now the alias is accessible. It will work just as you typed the whole command. git c -m "example" - In the end, open the config file and you will see something like this. [alias] c = commit Creating git aliases by directly editing .gitconfig file¶ The second way of creating git aliases is directly editing git config files, like this: [alias] co = checkout Aliases for Git Commands Here are some useful git aliases that just replace the original git command and are designed to make you type less:
https://www.w3docs.com/learn-git/git-alias.html
2021-05-06T09:01:35
CC-MAIN-2021-21
1620243988753.91
[]
www.w3docs.com
Oracle¶ This datasource reads metadata, vendor-data and user-data from Oracle Compute Infrastructure (OCI). Oracle Platform¶ OCI provides bare metal and virtual machines. In both cases, the platform identifies itself via DMI data in the chassis asset tag with the string ‘OracleCloud.com’. Oracle’s platform provides a metadata service that mimics the 2013-10-17 version of OpenStack metadata service. Initially support for Oracle was done via the OpenStack datasource. - Cloud-init has a specific datasource for Oracle in order to: - allow and support future growth of the OCI platform. - address small differences between OpenStack and Oracle metadata implementation. Configuration¶ The following configuration can be set for the datasource in system configuration (in /etc/cloud/cloud.cfg or /etc/cloud/cloud.cfg.d/). The settings that may be configured are: - configure_secondary_nics: A boolean, defaulting to False. If set to True on an OCI Virtual Machine, cloud-init will fetch networking metadata from Oracle’s IMDS and use it to configure the non-primary network interface controllers in the system. If set to True on an OCI Bare Metal Machine, it will have no effect (though this may change in the future). An example configuration with the default values is provided below: datasource: Oracle: configure_secondary_nics: false
https://cloudinit.readthedocs.io/en/latest/topics/datasources/oracle.html
2021-05-06T10:36:06
CC-MAIN-2021-21
1620243988753.91
[]
cloudinit.readthedocs.io
Filter Node¶ The Filter node implements various common image enhancement filters. Inputs¶ - Factor Controls the amount of influence the node exerts on the output image. - Image Standard image input. Eigenschaften¶ - Type The Soften, Laplace, Sobel, Prewitt and Kirsch all perform edge detection (in slightly different ways) based on vector calculus and set theory equations. - Soften Slightly blurs the image. - Sharpen Increases the contrast, especially at edges. - Laplace Softens around edges. - Sobel Creates a negative image that highlights edges. - Prewitt Tries to do Sobel one better. - Kirsch Giving a better blending than Sobel or Prewitt, when approaching an edge. - Shadow Performs a relief, emboss effect, darkening outside edges.
https://docs.blender.org/manual/de/dev/compositing/types/filter/filter_node.html
2021-05-06T09:26:10
CC-MAIN-2021-21
1620243988753.91
[]
docs.blender.org
Onboarding and offboarding Logical Data Centers <[^>]+?>","")" class="contextID"> A Logical Data Center is a logical construct that can be any external data center entity. For example, in the context of the VMware vCloud Director, a Logical Data Center can be an Organizational Virtual Datacenter (Org vDC). A Logical Data Center is similar to and at the same hierarchical level as a network container. Both a Logical Data Center and a network container belong to a logical hosting environment. (A logical hosting environment acts as a separate, abstract entity that contains the data for the Logical Data Center or network container.) You can onboard or offboard Logical Data Centers in the Network Container tab (part of the logical hosting environment) of the Resource Management workspace. Before you begin Ensure that you have registered and configured a VMware vCloud provider. See Registering VMware vCloud providers. To onboard a Logical Data Center - On the BMC Cloud Lifecycle Management Administration console, access the Resource Management > Network > Network Containers workspace. - Select a Network Container (logical hosting environment). - Click the Onboard Logical Data Center icon to display the Onboard Logical Data Centers dialog box. - Select the provider from the Provider drop-down menu to display the list of available Logical Data Centers. - Select a Logical Data Center entry from the list of available entries, and click Add to add the Logical Data Center to the Resource Management > Network Containers workspace. After you have onboarded a Logical Data Center, you can proceed to map tenants to the Logical Data Center. To offboard a Logical Data Center To offboard a current Logical Data Center from a Resource Management > Network Container workspace, highlight the Logical Data Center entry from the selected list, and click Remove.
https://docs.bmc.com/docs/cloudlifecyclemanagement/31/administering/configuring-out-of-the-box-third-party-platform-integrations/managing-a-vmware-vcloud-instance/onboarding-and-offboarding-logical-data-centers
2021-05-06T11:02:07
CC-MAIN-2021-21
1620243988753.91
[]
docs.bmc.com
Mustache Templating¶ The Mustache template format can be understood here. Once templating is done, the returned string is passed to the Markdown renderer. To learn about the Markdown syntax please refer to the markdown Formatting page. Variables¶ The most basic tag type is a simple variable. A {{{name}}} tag renders the value of the name key in the current context. Raw Values¶ By default ermrestJS returns formatted values for a column. If you need to access the raw values returned by Ermrest you should prepend the column name with an underscore **”_”** in the template. # {{{_COUMN_NAME}}} {{{_user}}} Foreign Key Values¶ You can access table’s outbound foreign key data using $fkeys variable. To do so, you can use the constraint name of the foreign key. For instance having ["schema", "constraint"] as schema-constraint pair for the foreign key, you can use $fkey_schema_constraint ( $fkeys.schema.constraint syntax is still supported but it’s deprecated) to access its attributes. The following are available attributes for foreign keys: values: An object containing values of the table that the foreign key refers to. Both formatted and unformatted column values will be available here. For instance $fkey_schema_const.values.col1will give you the formatted value for the col1and $fkey_schema_const.values._col1the unformatted. rowName: Row-name of the foreign key. uri.detailed: a uri to the foreign key in detailedcontext (record app). # Create a link to Foreign key: {{#$fkey_schema_constraint}} [{{rowName}}]({{{uri.detailed}}}) {{/$fkey_schema_constraint}} # Access column values of a foreign key: {{{$fkey_schema_constraint.values.col1}}} - {{{$fkey_schema_constraint.values.col2}}} The current implementation of $fkeys has the following limitations: Using $fkeys you can only access data from tables that are one level away from the current table. This can cause problem when you are using $fkeys in your row_markdown_patternannotation. Let’s say the following is the ERD of your database. And you have defined the row_markdown_patternof table A as {{{$fkey_schema_fk1.values.term}}}. If you navigate to record app for any records of A, the rowname will be displayed as you expect it. But if you go to the table C, the rowname of A won’t be as you expected since we don’t have access to the table B’s data. Therefore it’s advised to use $fkeyonly for the column-displayannotation (or any other annotation that is controlling data for the same table). JSON¶ To access inner properties of a JSON column just use Mustache block scope. {{#_user}} {{{FirstName}}} {{{LastName}}} {{/_user}} NOTE: Internal properties of a JSON column don’t require underscore to be prepended. Encoding variables for URL manipulation¶ To specifically encode values; for example query strings of a url, you can use the encode block in this way. {{#encode}}{{{COLUMN_NAME}}}{{/encode}} Whatever that is present in the opening and closing of the encode block will be URL encoded. Escaping Content¶ To specifically escape values; for example slashes “/” or hyphens “-” etc., you can use the escape block in this way. {{#escape}}{{{COLUMN_NAME}}}{{/escape}} Whatever that is present in the opening and closing of the escape block will be escaped. Escaping is necessary whenever you feel that your content might contain some special characters that might interfere with the markdown compiler. These special characters are as follows: { } [ ] ( ) # * ! . + - ` / > < Examples¶ NOTE: we will be using following object for values { date: "08/25/2016", url: "", name: "BiomassProdBatch for Virus=7782 Target=5HT1B site=USC" } 1. Normal replacement - “{{{name}}}”¶ This is some value in COLUMN **{{{name}}}** # MUSTACHE OUTPUT: "This is some value in COLUMN **BiomassProdBatch for Virus=7782 Target=5HT1B site=USC**"" # MARKDOWN OUTPUT: "<p>This is some value in COLUMN <strong>BiomassProdBatch for Virus=7782 Target=5HT1B site=USC</strong></p>" ‘This is some value in COLUMN BiomassProdBatch for Virus=7782 Target=5HT1B site=USC’ 2. Replacement with URL encoding - “{{#encode}}{{{name}}}{{/encode}}”¶ [{{name}}]({{#encode}}{{{name}}}{{/encode}}) # MUSTACHE OUTPUT: "[BiomassProdBatch for Virus=7782 Target=5HT1B site=USC]()" # MARKDOWN OUTPUT: "<p><a href="">BiomassProdBatch for Virus=7782 Target=5HT1B site=USC</a></p>" 3. Replacement with HTML escaping - “{{name}}”¶ Research **{{name}}** was conducted on {{{date}}} # MUSTACHE OUTPUT: "Research **BiomassProdBatch for Virus=7782 Target=5HT1B site=USC** was conducted on 08/25/2016" # MARKDOWN OUTPUT: "<p>Research <strong>BiomassProdBatch for Virus=7782 Target=5HT1B site=USC</strong> was conducted on 08/25/2016</p>" Research BiomassProdBatch for Virus =7782 Target =5HT1B site =USC was conducted on 08/25/2016 4. Replacement with null check, disabled escaping and url encoding - “{{#name}}…{{/name}}”¶ With null value for title # title = null Research on date {{{date}}} : {{#title}}[{{{title}}}]({{#encode}}{{{name}}}{{/encode}}){{/title}} # MUSTACHE OUTPUT: "Research on date 08/25/2016 : " # MARKDOWN OUTPUT: "<p>Research on date 08/25/2016 :</p>\n" Research on date 08/25/2016 : With non-null value for title and null value for name # title = "BiomassProdBatch for Virus=7782 Target=5HT1B site=USC" Research on date {{{date}}} : {{#title}}[{{{title}}}]({{#encode}}{{{name}}}{{/encode}}){{/title}} # MUSTACHE OUTPUT: "Research on date 08/25/2016 : [BiomassProdBatch for Virus=7782 Target=5HT1B site=USC]()" # MARKDOWN OUTPUT: "<p>Research on date 08/25/2016 : <a href=" for Virus=7782 Target=5HT1B site=USC</a></p>" Research on date 08/25/2016 : BiomassProdBatch for Virus=7782 Target=5HT1B site=USC 5. Replacement with negated-null check - “{{^name}}…{{/name}}”¶ In cases where you need to check whether a value is null, then use this string, you can use this syntax. #title = null; Research on date {{{date}}} : {{^title}}[This is some title]({{#encode}}{{{name}}}{{/encode}}){{/title}} # MUSTACHE OUTPUT: "Research on date 08/25/2016 : [This is some title]()" # MARKDOWN OUTPUT: "<p>Research on date 08/25/2016 : <a href="">This is some title</a></p>" Research on date 08/25/2016 : This is some title 6. Null Handling¶ If the value of any of the columns which are being used in the markdown_pattern are either null or empty, then the pattern will fall back on the show_null display annotation to return respective value. For example, if title property in the json object is not defined or null then following template [{{{title}}}]({{{url}}}) will resolve as null and use the show_null annotation to determine what should be done. To make sure that you handle above case, wrap properties which can be null inside null handling blocks as mentioned in last 2 samples. [{{#title}}{{{title}}}{{/title}}{{^title}}No title defined{{/title}}]({{{url}}}) Limitations¶ - If you’re using Raw values and have logic to check for boolean false values then please don’t try using it. Mustache block for null handling evaluates to false if the value is false, empty, null or zero. This will also not work for raw json values where you plan to check whether an array is null or not. If the array is not null the block converts to an iterator and will fire internal code n times. - If in any part of the mustache template you are using the block syntax ( {{# COL }}), we will not apply this null handling logic. Using Pre-defined Attributes¶ Ermrestjs now allows users to access some pre-defined variables in the template environment for ease. You need to make sure that you don’t use these variables as column-names in your tables to avoid them being overridden in the environment. One of those variable is $moment. $moment Usage¶ $moment is a datetime object which will give you access to date and time when the app was loaded. For instance if the app was loaded at Thu Oct 19 2017 16:04:46 GMT-0700 (PDT), it will contain following properties - date: 19 - day: 4 - month: 10 - year: 2017 - dateString: Thu Oct 19 2017 - hours: 16 - minutes: 4 - seconds: 14 - milliseconds: 873 - timeString: 16:04:46 GMT-0700 (PDT) - ISOString: 2017-10-19T23:04:46.873Z - GTMString: Thu, 19 Oct 2017 23:04:46 GMT - UTCString: Thu, 19 Oct 2017 23:04:46 GMT - LocaleDateString: 10/19/2017 - LocaleTimeString: 4:04:46 PM - LocalString: 10/19/2017, 4:04:46 PM The $moment object can be referred directly in the Mustache environment Examples Todays date is {{{$moment.month}}}/{{{$moment.date}}}/{{{$moment.year}}} Current time is {{{$moment.hours}}}:{{{$moment.minutes}}}:{{{$moment.seconds}}}:{{{$moment.milliseconds}}} UTC datetime is {{{$moment.UTCString}}} Locale datetime is {{{$moment.LocaleString}}} ISO datetime is {{{$moment.ISOString}}} $catalog Usage¶ $catalog is an object that gives you access to the catalog information including version if it is present. The following properties are currently included: { snapshot: <id>@<version>, id: id, version: version }
http://docs.derivacloud.org/ermrestjs/user-docs/mustache-templating.html
2021-05-06T09:32:13
CC-MAIN-2021-21
1620243988753.91
[]
docs.derivacloud.org
Apache Tomcat and display statistics - Monitor Oracle OC4J and display information - Monitor BEA WebLogic (in fact you might have to - see the Troubleshooting section below).:: Troubleshooting groovy.lang.MissingMethodException. java.lang.SecurityException.
http://docs.codehaus.org/pages/viewpage.action?pageId=79495
2014-08-20T07:01:19
CC-MAIN-2014-35
1408500800767.23
[array(['/download/attachments/79147/catalina.gif?version=2&modificationDate=1180430525304&api=v2', None], dtype=object) array(['/download/attachments/79147/oc4jpie.gif?version=1&modificationDate=1180517704400&api=v2', None], dtype=object) array(['/download/attachments/79147/jconsole.gif?version=1&modificationDate=1180267142906&api=v2', None], dtype=object) ]
docs.codehaus.org
The Research In Motion for your device model. To replace the battery, contact your service provider. The battery isn't connected. For assistance, contact your service provider.
http://docs.blackberry.com/en/smartphone_users/deliverables/55418/als1342451143962.html
2014-08-20T07:06:56
CC-MAIN-2014-35
1408500800767.23
[]
docs.blackberry.com
View a list of web pages that you visited recently Next topic: Return to the home page Previous topic: Search for text in a message, in a file, or on a web page Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/22178/View_a_list_of_web_pages_you_visited_recently_60_1065590_11.jsp
2014-08-20T07:01:35
CC-MAIN-2014-35
1408500800767.23
[]
docs.blackberry.com
Administration Guide Local Navigation Remove a BlackBerry Collaboration Service instance from a pool You can remove a BlackBerry® Collaboration Service instance from a pool if your organization no longer requires it or to troubleshoot an issue. - In the BlackBerry Administration Service, on the Servers and components menu, expand BlackBerry Solution topology > BlackBerry Domain > Component view > BlackBerry Enterprise Server. - If you configured BlackBerry® Enterprise Server pairs, expand the pair name. - Click the name of the BlackBerry Enterprise Server instance that uses the BlackBerry Collaboration Service pool. - Click Edit instance. - Click one of the following tabs, depending on the instant messaging server that you installed in your organization's environment: - Remove the BlackBerry Collaboration Service instance from the list of current instances. - Click Save All. Previous topic: Remove a BlackBerry MDS Connection Service instance from a pool Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/admin/deliverables/20839/Remove_a_CollabService_instance_from_pool_444801_11.jsp
2014-08-20T06:56:42
CC-MAIN-2014-35
1408500800767.23
[]
docs.blackberry.com
How to direct test output to the console By default, test output goes to target/surefire-reports/. If you'd like to see it on the console, add to the command line. To make this permanent, configure the Surefire plugin as follows: Or to set the properties without configuring the plugin use global property names: if you want to set the reportFormat it's global property name is surefire.reportFormat How to skip all tests by default Put this in your settings.xml:.
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=47251&selectedPageVersions=9&selectedPageVersions=8
2014-08-20T07:03:53
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
A COMPLETELY PERSONAL AND QUITE POSSIBLY PIXELATED SUMMARY OF KEY TAKE-AWAY POINTS FROM THE 2012 I-DOCS SYMPOSIUM, by Kerric Harvey 1. Match the tool with the job and start with “the job.” In other words, let technology enable, assist, or augment creativity, not define it. By the same token, avoid gratuitous interactivity—the act of making something “interactive” because you can, not because it advances the narrative, suits the story structure, and/or supports the purpose of the piece. 2. If you find that the technological tool you need doesn’t exist, then invent it—but check what’s “out there” first. 3. Realize that “abdicating” authorial choice is an authorial choice in its own right. For example, if you decide to crowd-source decision-making, this action, in its own right, is a decision for which you are still responsible. You can’t walk away from ultimate responsibility for the authorial role—you can only fracture it, fragment it, or farm it out. Ultimately, however, the moral, legal, ethical, and to overall creative responsibility rests with you, because you kicked it off in the first place. 4. This means that, as creators, we still have to take final responsibility for the choices we make, even if that’s to let other people make a lot of those choices. Bottom Line: To make something “interactive” is not the same thing as making it morally neutral. 5. These last two items (#3 & 4) suggest that adding interactivity, per se, also adds an entirely new layer of ethical complexity to several long-standing and central debates of documentary filmmaking, especially those pertaining to: a) the privacy rights of participants/subjects of the documentary (who are now likely to be its viewers and co-creators as well), and; b) the political security of these same people. This speaks to a fundamental shift in the ethical paradigm surrounding documentary, non-fiction, and/or data-based story-telling when it shifts to an online landscape, because the very means by which material is both created and consumed also supplies tracking, monitoring, and system-based surveillance at every step of the process. Amplification: Social media is not private despite whatever types of devices a user might employ to “make” it so and nothing on it or about it belongs to the “user.” Twitter just sold off two years of every body’s back tweets, for example. 6. A technical note from the writing perspective: We must remember that every moment of viewer/participant “interaction” is also a moment of “interruption” in the story’s narrative flow. This, in turn, means that we will have to develop new ways of exciting and engaging the “viewer” which do not necessarily rely on the classic story arc, with it’s need for sustained and escalating dramatic tension, since there’s no guarantee that there will be “space” in the plot structure to support this. This is especially true of material delivered via mobile platforms. 7. We must not be afraid of accepting an authorial voice for its own sake any more than we automatically assume a strong authorial role is the best option in every instance. In other words, we must avoid categorical assumptions in either direction – pro-authoring or anti-authoring. There are times and places in the story-telling universe in which this is not just appropriate to the intention of the story but also the best way to structure the narrative experience. In short, it’s important to approach each story-telling event separately and on its own terms, choosing the degree of author control which best suits each specific instance. This is especially true when working with local material from a global perspective. Although in this day and age of virtual relationships, instantaneous communication, and increasingly interlocking economic and political trans-nationalism, there is some cachet to the idea of the “global citizen,” it’s crucial to avoid falling prey to a false equivalency between local and global realities. The key differentiator here has to do with the political stakes and personal implications of those stakes for locals as contrasted with “globals.” The consequences of actions and events in a given hotspot, such as Egypt, for instance, are going to affect the people who have to live in an area in different ways and with different levels of intensity than they will those who live elsewhere. This is a simple truth, but one which is easy to overlook in the excitement and momentum of place-specific events which also have an effect in the larger world. 8. Do not confuse “curating” with “authoring.” The first function relates to framing content; the second, to interpreting it. 9. Never confuse a “data set” with a “story”, or “documenting” with “documentary.” 10. Never forget that the choice to exclude narrative material exerts just as much story-telling responsibility and just as much shaping influence on a piece of media as does the decision to include something.
http://i-docs.org/2012/04/04/kerric-harvey-on-i-docs-2012/
2014-08-20T06:48:37
CC-MAIN-2014-35
1408500800767.23
[]
i-docs.org
This section enumerates the changes that have been made to Scheme since the ``Revised4 report'' [6] was published. The report is now a superset of the IEEE standard for Scheme [13]:, and - and / with more than two arguments.. Syntax-rules now allows vector patterns. Multiple-value returns, eval, and dynamic-wind have been added. The calls that are required to be implemented in a properly tail-recursive fashion are defined explicitly. `@' can be used within identifiers. ` |' is reserved for possible future extensions.
https://docs.racket-lang.org/r5rs/r5rs-std/r5rs-Z-H-11.html
2018-03-17T16:05:00
CC-MAIN-2018-13
1521257645248.22
[]
docs.racket-lang.org
Difference between revisions of "Extension Installer/Triggers/onBeforeExtensionInstall" From Joomla! Documentation < Extension Installer | Triggers Latest revision as of 23:49, 5 October 2008 The onBeforeExtensionInstall trigger occurs before the installation of an extension. It has the following parameters: - method The method of installation that is occuring, either 'install' or 'discover_install'. - type The type of extension that is being installed, for example 'component' or 'plugin' - manifest A copy of the manifest of the extension about to be installed. This is only populated when the method is 'install'. - extension A copy of the extension table entry of the extension that is about to be installed. This is only populated when the method is 'discover_install' Note: The extension adapter may shift internally to an update procedure from an install procedure. the same information is passed however the functional actions are different.
https://docs.joomla.org/index.php?title=Extension_Installer/Triggers/onBeforeExtensionInstall&diff=prev&oldid=10987
2015-08-27T22:30:53
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Difference between revisions of "JPackageManifest:: construct" From Joomla! Documentation Revision as of 21PackageManifest::__construct Description Constructor. Description:JPackageManifest:: construct [Edit Descripton] public function __construct ($xmlpath='') - Returns - Defined on line 84 of libraries/joomla/installer/packagemanifest.php See also JPackageManifest::__construct source code on BitBucket Class JPackageManifest Subpackage Installer - Other versions of JPackageManifest::__construct SeeAlso:JPackageManifest:: construct [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=JPackageManifest::_construct/11.1&diff=57416&oldid=50335
2015-08-27T22:29:32
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Revision history of "Framework" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 08:03, 2 May 2013 Tom Hutchison (Talk | contribs) deleted page Description:Framework (redirect not needed) - 23:56, 12 September 2011 Wgviana (Talk | contribs) moved page Description:Framework to Chunk:Framework
https://docs.joomla.org/index.php?title=Description:Framework&action=history
2015-08-27T22:30:12
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 09:26, 21 May 2009 Chris Davenport (Talk | contribs) automatically marked revision 14218 of page Category:JComponentHelper patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=Category%3AJComponentHelper
2015-08-27T21:53:29
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Highlight, cut, copy, or paste text - Do one of the following: - On the slide-out keyboard, press and hold the key. To highlight text character by character, on the trackpad, slide your finger left or right. To highlight text line by line, on the trackpad, slide your finger up or down. - On the touch screen keyboard, to highlight the text, touch the beginning and the end of the text. To adjust the highlighted text, slide the cursor frame. - Press the key. - Click Cut or Copy. - Place the cursor where you want to insert the cut or copied text. - Press the key > Paste. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/38326/1123541.jsp
2015-08-27T21:25:40
CC-MAIN-2015-35
1440644059993.5
[]
docs.blackberry.com
Difference between revisions of "Projects" From Joomla! Documentation Revision as of 14:46, 26 February 2013 - Joomla! Developer Tutorials Project - This project is aimed at creating consistent, easy-to-follow Tutorials for Joomla! Extension Developers, written by Joomla! Developers. - Joomla! Help Screens Project - This project focuses on the administrator Joomla! 2.5 Help screens and Joomla! 3.0 Help screens. Unless a Joomla! Installation is modified, all core component Help screens are being served from this wiki. The summary status of each Joomla! Help screen can be found in the previous links. -.]]
https://docs.joomla.org/index.php?title=JDOC:Projects&diff=82077&oldid=60545
2015-08-27T21:40:28
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Cascading for the Impatient Prerequisites $ git clone Each part has its own sub-directory in the repository. In order to follow the tutorial, you will also have to have gradle and Apache Hadoop installed on your computer. You do not need a hadoop cluster, local mode is sufficient. In the tutorials, we will also use Driven as an alternative to viewing the application composition using the .dot file graphics. Driven is the application performance management product designed to help developers accelerate Cascading application development and management. Driven will give us additional visibility about our Cascading tutorial as we run it on the cluster. If there’s an issue, we can immediately identify where it happened and narrow it down to the specific line of code. You do not need to make any changes to your existing Cascading applications to integrate with the Driven application. To use Driven, the Driven Plugin must be on the runtime classpath for your Cascading application. The cloud version of Driven is free for developer use. To get started using Driven, visit Getting Started with Driven to access all the developer features and gain visibility into your Cascading applications (including apps built with Scalding, Cascalog, Lingual, Pattern or any other Cascading dynamic programming language). Gradle Everything has been tested with gradle 1.12. You can check your version of gradle $ gradle -v ------------------------------------------------------------ Gradle 1.12 ------------------------------------------------------------ ... Hadoop For hadoop please install the latest stable version from the 2.x series. At the time of this writing this means Apache hadoop 2.4.1. $ hadoop version Hadoop 2.4.1 ... Cascading is compatible with a number of hadoop distributions and versions. You can see on the compatibility page, if your distribution is supported. IDE support While an IDE is not strictly required to follow the tutorials, it is certainly useful. You can easily create an IntelliJ IDEA compatible project in each part of the tutorial like this: $ gradle ideaModule gradle eclipse Part 1 Implements simplest Cascading app possible Copies each TSV line from source tap to sink tap Roughly, in about a dozen lines of code Physical plan: 1 Mapper Part 2 Implements a simple example of WordCount Uses a regex to split the input text lines into a token stream Generates a DOT file, to show the Cascading flow graphically Physical plan: 1 Mapper, 1 Reducer Part 3 Uses a custom Function to scrub the token stream Discusses when to use standard Operations vs. creating custom ones Physical plan: 1 Mapper, 1 Reducer Part 4 Shows how to use a HashJoin on two pipes Filters a list of stop words out of the token stream Physical plan: 1 Mapper, 1 Reducer Part 5 Calculates the importance of a word in the document (TF-IDF) using an ExpressionFunction Shows how to use a CountBy, SumBy, and a CoGroup Physical plan: 10 Mappers, 8 Reducers Part 6 Includes unit tests in the build Shows how to use other TDD features: checkpoints, assertions, traps, debug Physical plan: 11 Mappers, 8 Reducers Other versions Also, compare these other excellent implementations of the example apps here: Scalding for the Impatient by Sujit Pal in Scalding and Cascalog for the Impatient by Paul Lam in Cascalog.
http://docs.cascading.org/impatient/
2015-08-27T21:20:09
CC-MAIN-2015-35
1440644059993.5
[]
docs.cascading.org
Joomla Installation Resources From Joomla! Documentation Recommended Reading - Technical Requirements - Installation Manual - Upgrade Instructions - Security - View installation page from Joomla 1.5 installer to help choosing hosting company Tutorials List of all articles belonging to the categories "Tutorials" AND "Installation" - Converting an existing website to a Joomla! website - Copying a Joomla website - Copying a website from localhost to a remote host - Delete Installation folder - Delete Installation folder/en - FreeBSD Installation - Installing Joomla locally - Installing Joomla locally/en List of all articles belonging to the categories "FAQ" AND "Installation" - Adding www to a url - J2.5:Global configuration - J3.x:Global configuration - J3.x:Global configuration/en - J3.x:Global configuration/uk - How can you avoid using chmod 0777 to enable installs? - How can you change PHP settings using htaccess? - How much disk space do you need to install Joomla!? - How to check if mod rewrite is enabled on your server - Category:Installation FAQ - Category:Installation FAQ/en - Installing Joomla using an Auto Installer
https://docs.joomla.org/index.php?title=Joomla_Installation_Resources&oldid=36291
2015-08-27T23:06:48
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Platform independence The BlackBerry Web Services are platform-independent, so you can integrate new applications that use the BlackBerry Web Services with existing applications and experience fewer compatibility issues. The BlackBerry Web Services are fully scalable, so it is easier to integrate your applications with different versions of the BlackBerry Enterprise Server for Microsoft Office 365 and troubleshoot compatibility issues, regardless of the messaging server. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/admin/deliverables/49540/dme1351108797922.jsp
2015-08-27T21:29:55
CC-MAIN-2015-35
1440644059993.5
[]
docs.blackberry.com
Income Tax Myths "There's a big reward available to anyone who can prove the income tax law exists." Why Don't I Claim The Reward? Many Internet sites appear to offer a big reward (anywhere from $10,000 to $1,000,000) for anyone who can show people the law requiring Americans to pay income taxes. If I'm so right, why don't I claim these rewards and get rich? I'd love to, but these reward offers are bogus. On close examination, they always contain some catch that makes them unclaimable. The Catch in the Bogus Reward Offers For example, the blogspot page of Ed Brown (the New Hampshire protestor who holed up in his house for months after his criminal conviction) used to have a $1,000,000 reward offer. I can't give you the exact quote anymore, but as I recall it used to offer $1,000,000 in property to anyone who could show the tax law. But it contained the statement "we are fully aware of Title 26." So effectively, the Brown offer was, "I'll pay you $1,000,000 if you can show me the law requiring people to pay income taxes, provided it isn't Title 26." That leaves you where you would be if someone said, "I'll pay you $1,000,000 if you can name the capital of New York state, provided it isn't Albany." Kind of pointless. Another common trick is to require those claiming the reward to disprove some true statements. For example one page offered $10,000 to anyone who could prove a whole list of statements to be false. Well, some of the statements were typical tax protestor absurdities, but others were just true statements, such as "The Constitution gives the federal/national government limited powers." There was no way to win the reward by proving that all the statements are false, since some of them were true. A different catch was present in the Freedom Law School offer. That offer came close to being a reward for showing the income tax law, but the offer was to the first person who could show the law. If you showed them the law, they could just say that someone else had shown them the law already, and how would you disprove that? And of course, all these catches are before we get to the practical problems, such as: (1) obviously no tax protestor really intends to pay these rewards, (2) they probably don't have the money anyway, (3) it would require a long and costly lawsuit to collect, (4) many of these offerors don't reveal a real name and address so that you could sue them if necessary, and (5) some of them (e.g., Ed Brown) are in jail and the IRS is selling off such property as they actually have. [I used to have links to all these reward offers, but some have vanished over the years. Similar analysis likely applies to any current offers.] If anyone had a real, no-catch offer of big money for anyone who can show them the income tax law, I would happily apply. If the offer were big enough (say, $100,000 or more), and they really had the money, I would even be ready to sue them to collect. Are you a rich protestor reading this right now? Are you ready to reveal your real name and address? Do you actually have big money in the bank that you'd offer to pay me if I can show you the law that requires average Americans to pay income taxes? I await your But there's no point going after bogus offers.
http://docs.law.gwu.edu/facweb/jsiegel/Personal/taxes/reward.htm
2015-08-27T21:21:34
CC-MAIN-2015-35
1440644059993.5
[]
docs.law.gwu.edu
Difference between revisions of "Copying a Joomla website" From Joomla! Documentation Revision as of 05:07, 7 September 2008 -))
https://docs.joomla.org/index.php?title=Talk:Copying_a_Joomla_website&diff=10573&oldid=10572
2015-08-27T22:27:06
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Difference between revisions of "Components Banners Categories" From Joomla! Documentation Revision as of 17:23, 22 June 2013 Contents Components Help Screens - Components Banners Banners - Components Banners Banners Edit - Components Banners Banners Options - Categories - Select the Categories menu link from the Banner Manager, Banner Clients Manager or the Banner Tracks Manager in the top left Max Levels, State, Access or Language. - Batch. Batch processes the selected banner category. Works with one or multiple banner category selected. - Rebuild. Reconstructs and refreshes the banner category table. Normally, you do not need to rebuild this table. This function is provided in case the data in the table becomes corrupted. - Options. Opens the Options window where settings such as default parameters can be edited. <translate> - Help. Opens this help screen.<
https://docs.joomla.org/index.php?title=Help32:Components_Banners_Categories&diff=100822&oldid=100821
2015-08-27T21:41:34
CC-MAIN-2015-35
1440644059993.5
[array(['/images/a/a7/Help30-colheader-Order-Ascending-DisplayNum.png', 'Help30-colheader-Order-Ascending-DisplayNum.png'], dtype=object)]
docs.joomla.org
Frequently Asked Questions Here are some of the most frequently asked questions we get from publishers. Publishers - Do I have to be approved to become a publisher? - Does it cost anything to be a publisher? - Where do I find my publisher code? Places API - How much does it cost to get places data? - How many calls can I make against the API? - Do you have places and businesses outside the United States? Reviews API Ads or Places that Pay - How do I become approved to receive revenue from "Places that Pay" and advertising? - How much money can I expect to make as publisher? - Do I have to show ads on my site or application? Publishers Do I have to be approved to become a publisher? To become a publisher, simply fill out our registration form. We just need your name, a valid phone number and email account, as well as a little information about your application, then you are automatically registered. No approval is necessary, you will immediately receive a publisher code via email. Does it cost anything to be a publisher? No. It does not cost anything to be a publisher. Where do I find my publisher code? After registering as a publisher you should receive an email that contains a publisher code. If you have not received your code, check your SPAM inbox, then contact [email protected]. Places API How much does it cost to get places data? Nothing. Places data is provided for FREE to developers. How many calls can I make against the API? You are allowed to send as many as 10 million queries per month to the APIs. Do you have places and businesses outside the United States? No. Currently we only provide places and businesses for United States. Reviews API Which sites do you aggregate reviews from? We have reviews today for Demandforce, RatePoint, Service Magic, Judy's Book, Insider Pages and Citysearch. Adding more reviews to our data set is a priority for CityGrid and something we are actively working on. Why do I sometimes see only partial reviews? CityGrid is under contract by some providers to only show part of the reviews, and send users to their sites for full reviews. Ads or Places That Pay How do I become approved to receive revenue from "Places that Pay" and advertising? Email a link to your completed application to [email protected] for review by our partner account management team. How much money can I expect to make? The amount you make is variable and dependent on the quantity and quality of traffic at your site. Higher quality sites will make more money per connection. Do I have to show ads on my site or application? You are not required to show ads on your site. You can remain an "open publisher" and only publish places, reviews, and offers on your site or application, as long as you adhere to our usage requirements and terms and conditions.
http://docs.citygridmedia.com/display/citygridv2/FAQ
2015-08-27T21:22:40
CC-MAIN-2015-35
1440644059993.5
[]
docs.citygridmedia.com
Counts the number of valid days between begindates and enddates, not including the day of enddates. If enddates specifies a date value that is earlier than the corresponding begindates date value, the count will be negative. New in version 1.7.0. See also Examples >>> # Number of weekdays in January 2011 ... np.busday_count('2011-01', '2011-02') 21 >>> # Number of weekdays in 2011 ... np.busday_count('2011', '2012') 260 >>> # Number of Saturdays in 2011 ... np.busday_count('2011', '2012', weekmask='Sat') 53
http://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.busday_count.html
2015-08-27T21:30:27
CC-MAIN-2015-35
1440644059993.5
[]
docs.scipy.org
An attribute on a model; a given field usually maps directly to a single database column. A higher-order view function that provides an abstract/generic implementation of a common idiom or pattern found in view development. See Class-based views. Models store your application’s data. “Model-template-view”; a software pattern, similar in style to MVC, but a better description of the way Django does things. See the FAQ entry. Also known as “managed attributes”, and a feature of Python since version 2.2. This is a neat way to implement attributes whose usage resembles attribute access, but whose implementation uses method calls. An object representing some set of rows to be fetched from the database. See Making queries. A short label for something, containing only letters, numbers, underscores or hyphens. They’re generally used in URLs. For example, in a typical blog entry URL: the last bit (spring) is the slug. A chunk of text that acts as formatting for representing data. A template helps to abstract the presentation of data from the data itself.
https://andrewstestingfiles.readthedocs.io/en/latest/glossary.html
2022-05-16T08:46:00
CC-MAIN-2022-21
1652662510097.3
[]
andrewstestingfiles.readthedocs.io
You're viewing Apigee Edge documentation. View Apigee X documentation. Apigee Edge has several entry points that you might want to secure with TLS. In addition, Edge add-ons, such as the Developer Services portal, have entry points that can be configured to use TLS. The Edge TLS configuration procedure depends on how you deployed Edge: Apigee Edge Cloud or Apigee Edge for Private Cloud. Cloud-based deployment In a Cloud-based deployment of Edge you are only responsible for configuring TLS access to API proxies and your target endpoints. For the Cloud version of the Developer Services portal, you configure TLS on on the Pantheon hosting server. For more, see Using TLS in a Cloud-based Edge installation. Private Cloud deployment For an Apigee Edge for Private Cloud installation of the Developer Services portal, you are completely responsible for configuring TLS. That means you not only have to obtain the TLS certificate and private key, but you also have to configure Edge to use TLS. For more, see Using TLS in a Private Cloud installation. Supported versions of TLS The supported versions of TLS depend on whether you are using Edge in the Cloud or Edge for the Private Cloud: - Edge in the Cloud: Supports TLS version 1.2 only. Support for TLS versions 1.0 and 1.1 for the Cloud have been retired. For more information, see TLS 1.0 and 1.1 retirement. - Edge for the Private Cloud: Supports TLS versions 1.0, 1.1, and 1.2. Where Edge uses TLS The following images shows the places in an Edge installation where you can configure TLS: Apigee Edge for Private Cloud customers typically configure all connections to use TLS. However, for Cloud customers, Apigee handles most of the TLS configuration for you and only have to configure TLS for connections 3 and 4 shown in the figure. The following table describes these TLS connections: The Cloud-based version of Edge is typically configured so that all request from the API client are handled by the Router. Private Cloud customers can use a load balancer before the Router to handle requests. The following image shows a scenario where the API client accesses Edge through a load balancer, rather than accessing the Router directly: In a Private Cloud installation, the presence of a load balancer is dependent on your network configuration of Edge. When using a load balancer, you can configure TLS between the API client and the load balancer and, if necessary, between the load balancer and the Router, as the following table describes: Where the Developer Services portal uses TLS The following image show the two places where the portal uses TLS: Apigee Edge for Private Cloud and Edge Cloud customers configure TLS on both connections. The following table describes these connections in more detail: For more information on configuring TLS for the Cloud-based version and the Apigee Edge for Private Cloud version of the portal, see Using TLS on the portal.
https://docs.apigee.com/api-platform/system-administration/using-ssl-edge?hl=zh-cn
2022-05-16T08:29:56
CC-MAIN-2022-21
1652662510097.3
[]
docs.apigee.com
A typical Hortonworks Data Platform (HDP) install requires access to the Internet in order to fetch software packages from a remote repository. Since corporate networks typically have various levels of firewalls, these firewalls may limit or restrict Internet access, making it impossible for your cluster nodes to access the HDP repository during the install process. The solution for this is to either: Create a local mirror repository inside your firewall hosted on a local mirror server inside your firewall; or Provide a trusted proxy server inside your firewall that can access the hosted repositories. This document will cover these two options in detail, discuss the trade-offs, provide configuration guidelines, and will also provide recommendations for your deployment strategy. In general, before installing Hortonworks Data Platform in a production data center, it is best to ensure that both the Data Center Security team and the Data Center Networking team are informed and engaged to assist with these aspects of the deployment.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.0.0.2/bk_reference/content/deployinghdp_appendix_chap4_1.html
2022-05-16T09:16:09
CC-MAIN-2022-21
1652662510097.3
[]
docs.cloudera.com
If you already have a HPE Consumption Analytics platform account, you can sign in to it using the credentials for your Google account. This allows you to manage just one password (the one for the external account), a capability known as single sign on (SSO). To sign in to HPE Consumption Analytics platform with your Google account - Click the sign in with Google account option on the right hand side of the screen: - If the Google account ID corresponding to your HPE Consumption Analytics platform account is shown, click it and skip steps 3 to 8. If the Google account ID corresponding to your HPE Consumption Analytics platform account is not shown, proceed to step 3. For the sake of privacy, the Google account information has been redacted from the following screen captures. - Click Use another account - In the form that opens in your browser, enter your Google user ID (Gmail address or phone): - Click Next - Enter your Gmail password: - Click Next If you don't have two-step authentication set on your Google account, skip steps 9 and 10.
https://docs.consumption.support.hpe.com/CCS/HPE_Consumption_Analytics_Portal_User_Guides/Getting_started_with_the_HPE_Consumption_Analytics_Portal/0060_Signing_in_with_Google
2022-05-16T09:32:10
CC-MAIN-2022-21
1652662510097.3
[]
docs.consumption.support.hpe.com
Catalog API The Catalog API allows managing and interacting with sources, spaces and datasets. - Space - Source - Folder - Dataset - File - Access Control - Wikis and Tags - List Catalogs - View a Catalog - View the Graph Information of a Catalog - View a Catalog using its Path - View a Catalog Wikis and Tags - Create a Catalog - Promote a File/Folder to a Physical Dataset - Refresh a Catalog - Create and Update Tags and Wiki for a Catalog - Edit a Catalog - Delete a Catalog Was this page helpful? Glad to hear it! Thank you for your feedback. Sorry to hear that. Thank you for your feedback.
https://docs.dremio.com/software/rest-api/catalog/
2022-05-16T07:42:08
CC-MAIN-2022-21
1652662510097.3
[]
docs.dremio.com
This use case scenario provides the Identity REST API command and payload for changing the name of a team. Use this command to change the name of a team: http --auth-type=veracode_hmac PUT " < input.json The team ID is the numeric ID, separated with hyphens, for the target object. For example: 7336556f-9ef2-4a1c-b536-be8608822db6. The API passes the JSON file that you populate with the necessary values as shown in this example payload: { "team_name": "Physical Penetration Testing" }
https://docs.veracode.com/r/c_identity_update_team
2022-05-16T07:45:19
CC-MAIN-2022-21
1652662510097.3
[]
docs.veracode.com
Disabling CSPF path calculations By default, CSPF is enabled for signaled LSP calculations, but you can disable CSPF. For example, to allow a TE tunnel LSP to traverse OSPF areas, disable CSPF. Disabling CSPF means that the full CSPF path is not computed up front. Instead, the vRouter queries the RIB for the next-hop information. This information is needed to reach the egress vRouter. Then, the vRouter sends the RSVP PATH message to the next hop without an Explicit Route Object (ERO). A similar process occurs at each hop along the path. If you configure a full or partial explicit path, the vRouter queries the RIB for the next-hop information to reach the first hop in the explicit path. The vRouter sends the RSVP PATH message with an ERO whose first hop is the actual first next hop, and whose subsequent hops are from the configured path. Each hop along the path removes the first hop from the ERO and performs a RIB lookup on the next hop in the list. To disable constraint-based path selection for a TE tunnel, use the following command: set protocols mpls-rsvp tunnels tunnel name { primary | secondary } no-cspf vyatta@vyatta# set protocols mpls-rsvp tunnels tunnel tunnel1 primary no-cspf vyatta@vyatta# set protocols mpls-rsvp tunnels tunnel tunnel1 secondary no-cspf vyatta@vyatta# commit vyatta@vyatta# show mpls rsvp session detail Ingress (Primary) 6.6.6.6 From: 1.1.1.1, LSPstate: Up, LSPname: tunnel1 Ingress FSM state: Operational Setup priority: 7, Hold priority: 0 CSPF usage: Disabled Reoptimization: Disabled IGP-Shortcut: Disabled, LSP metric: 65 LSP Protection: None Label in: -, Label out: - Tspec rate: 0, Fspec rate: 0 ...
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/mpls/configuring-rsvp-te/disabling-cspf-path-calculations
2022-05-16T09:07:06
CC-MAIN-2022-21
1652662510097.3
[]
docs.vyatta.com
March 20th, Enhancements - truewill now be considered transparent. To preserve the previous look you can make your shader opaque or use a ray switch to make it opaque only to shadows. You can also disable smart opaque with a user variable declared on the options node: options declare disable_smart_opaque constant BOOL disable_smart_opaque ON This is intended for old assets that can't be modified. If possible, we recommend instead to specify opacity values in the shader itself. () - Automatic reinitialization of shapes: Modifying a node parameter in a geometry (aka shape) node will now trigger automatic re-initialization of that node, thus no longer requiring re-exporting that node from the DCC or client code. (#7689) - Default AtRGB/AtRGBA assignment: We now use the default compiler supplied assignment operator for the AtRGBand AtRGBAobjects. This means these objects are now trivially copyable. This should not break preexisting code. (#7969) - Updated to OIIO 2.1.0: Updated to the latest OpenImageIO library in order to bring in several texture related bug fixes. (#7929) API additions - Render hint for controlling interactive outputs: Using the newer render API, there is an additional bool render hint progressive_show_all_outputswhich will force all render outputs to be written to even during early blocky AA passes. By default it is off, and all but the main interactive output are skipped so that interactivity is improved. (#7986) - MaterialX API: A new function AiMaterialxWrite()bakes the shader and other look properties for one or more shapes to a .mtlx file along with the description of the shaders and shading graphs (#7883), and AiMaterialxGetLookNames()gets the list of looks contained in a .mtlx document. (#8087) - Procedural scope from operator context: Operator instances are now given a cook context ( AtCookContext) which they can use to determine if they are being cooked using a procedural scope, i.e. the graph they are in is connected to a procedural, using AiOpCookContextGetCookScope(). Note that the same operator instance can be connected both to one or more procedurals as well as the scene options. An example use case is the MaterialX operator which uses this to make assignments relative to the scope. The operator also omits a node name prefix if a scope is found as the shaders it creates are automatically put in the procedural scope and thus do not require a unique name. (#7967) - Operator post cook callback: A new operator function callback operator_post_cookhas been added which is called once for each operator instance when all the operators have finished cooking. (#7973) - Match nodes against a selection expression: AiOpMatchNodeSelection()is a function that can be used to check if a node's name, type, and parameter match a selection expression. (#7909) - GPU cache API: This new API manages the OptiX™ disk cache, which is automatically generated during JIT compilation prior to GPU rendering. It contains a store of previously compiled OptiX™ programs, so that subsequent renders start faster. The API provides a means to asynchronously pre-populate the cache and customize its location. This pre-population is a somewhat heavy process and it is recommended that the pre-population is triggered after installing a new Arnold version, updating to a new NVIDIA® driver, or changing the hardware configuration of GPUs on the system. (#7926) Incompatible changes - Removed legacy point clouds: An internal point cloud data structure (previously used for SSS) and its associated API have been removed. (#6358) -) - Custom operators: Custom operators need to be recompiled for this version of Arnold to include the new operator_post_cook()callback. (#7926) - Smart opaque: Setups with transparency but with the opaqueflag set to falsewill now be considered transparent. To preserve the previous look you can make your shader opaque or use a ray switch to make it opaque only to shadows. To preserve the previous look you can make your shader opaque or use a ray switch to make it opaque only to shadows. You can also disable smart opaque with a user variable declared on the options node: options declare disable_smart_opaque constant STRING disable_smart_opaque "any-value-will-do" This is intended for old assets that can't be modified. If possible, we recommend instead to specify opacity values in the shader itself. (#5966) - Improved skydome sampling: Skydome samples need to be lowered to 70% or less from their original value (3 instead of 4) to roughly keep the number of shadow rays and associated cost the same. (#6669) Bug fixes - #8198: Crash on operator modifying object nodes - #7401: Triplanar: object coord space not working during displacement - #7951: Arnold Crashes with certain multichannel TX files on Windows - #8139: Triplanar: pref not working during displacement - #8187: Alembic doesn't cleanup children cameras - #6267: Dxy screen differentials in displacement context are zero - #6376: Parallel AiMakeTx can consume too much memory - #6509: Setting a node parameter with the same current value should be ignored - #6671: Add back --monochrome-detect to AiMakeTx when OIIO bug is fixed - #7408: OpenEXR: standard chromaticities metadata was missing in output files - #7694: Artefact when rendering inside a volume mesh - #7741: Arnold crashes when you try to write out render stats - #7750: Connected normal_map and bump2d breaks transmission - #7760: Space transform screen space issue - #7763: Subdivs: per face iterations should be ignored if type is not BYTE - #7781: Crash when using polygon holes - #7784: Operator nodes are not allowed within procedurals - #7790: No diagnostic showing license server used for successful checkout (RLM parity) - #7794: Running multiple AiMakeTx hangs and crashes in 5.2.2.0 - #7803: Contour filter memory leak - #7805: Change error to warning when OSL output keyword is missing - #7808: maketx was not automatically doing --opaque-detect - #7812: Transmission toon artifacts around triangle edges - #7850: Forward references always take precedence over parameter overrides - #7853: Transforming a procedural child node doesn't update bounds properly - #7905: Color shifts in randomwalk sss when far from origin - #7939: Add missing camera projection features - #7947: Variance buffers are not cleared in progressive mode when updating sampling settings - #7952: [alembic] Add triggers for filename, objectpath and fps changes - #7955: Windowed adaptive disabled with 1-pixel filters - #7982: Log reason why custom procedural fails to load - #7987: Add missing attributes in ramp_rgb - #8001: MaterialX: Add namespace prefix for materials when used in the global scope to avoid name clashing - #8006: Curves: opaque override not working for non camera rays - #8012: Ramp_rgb wrong indexing with shuffled positions - #8016: Caustic avoidance in standard_surface messes up closure weights - #8019: Deep driver: append does not work with half data channels - #8026: Luminance conversions should take into account working color space - #8032: Mismatch in python bindings for render API - #8042: Address assignment expression issues#8097Add wrap_mode "none" for uv_transform - #8142: Ramp in interpolation "constant" not including key positions#8170allow smaller SSS radius - #8175: Uniform user attributes not selectable by operators - #7788: skydome_light crashes when ignore_textures is enabled on GTC robot scene - #8009: Reset shader assignment using set_parameter - #8047: AiNodeReset should not reset the node's name - #8089: AtArray interpolation of matrices with one key ignored index - #8191: AiNodeGetParent missing from python bindings
https://docs.arnoldrenderer.com:443/display/A5ARP/5.3.0.0
2022-05-16T07:43:17
CC-MAIN-2022-21
1652662510097.3
[]
docs.arnoldrenderer.com:443
# Object-Capability Model # Intro When thinking about security, it is good to start with a specific threat model. Our threat model is the following: We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules. The Cosmos SDK is designed to address this threat by being the foundation of an object capability system. The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation. These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these — in particular, information flow properties — can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code. These structural properties stem from the two rules governing access to existing objects: - An object A can send a message to B only if object A holds a reference to B. - An object A can obtain a reference to C only if object A receives a message containing a reference to C. As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity." For an introduction to object-capabilities, see this Wikipedia article (opens new window). # Ocaps in practice The idea is to only reveal what is necessary to get the work done. For example, the following code snippet violates the object capabilities principle: The method ComputeSumValue implies a pure function, yet the implied capability of accepting a pointer value is the capability to modify that value. The preferred method signature should take a copy instead. In the Cosmos SDK, you can see the application of this principle in the gaia app. The following diagram shows the current dependencies between keepers. runTx middleware
https://docs.cosmos.network/main/core/ocap.html
2022-05-16T08:33:29
CC-MAIN-2022-21
1652662510097.3
[]
docs.cosmos.network
Making a layout work well on a computer screen and a mobile device is a difficult problem without a general solution. A sensible automatic conversion scheme can provide acceptable results in many cases, but there will always be layouts that require some custom design to look good on a mobile screen. It is assumed that readers are familiar with the j5 Mobile.
https://docs.hexagonppm.com/r/en-US/j5-IndustraForm-Designer-Help/Version-28.0/1047072
2022-05-16T08:53:59
CC-MAIN-2022-21
1652662510097.3
[]
docs.hexagonppm.com
Manage the Datadog resource This article shows how to manage the settings for your Azure integration with Datadog. Resource overview To see details of your Datadog resource, select Overview in the left pane. The details include: - Resource group name - Location/Region - Subscription - Single sign-on link to Datadog organization - Datadog offer/plan - Billing term It also provides links to Datadog dashboards, logs, and host maps. The overview screen provides a summary of the resources sending logs and metrics to Datadog. - Resource type – Azure resource type. - Total resources – Count of all resources for the resource type. - Resources sending logs – Count of resources sending logs to Datadog through the integration. - Resources sending metrics – Count of resources sending metrics to Datadog through the integration. Reconfigure rules for metrics and logs To change the configuration rules for metrics and logs, select Metrics and Logs in the left pane. For more information, see Configure metrics and logs. View monitored resources To see the list of resources emitting logs to Datadog, select Monitored Resources in the left pane. You can filter the list of resources by resource type, resource group name, location, and whether the resource is sending logs and metrics. The column Logs to Datadog indicates whether the resource is sending logs to Datadog. If the resource isn't sending logs, this field indicates why logs aren't being sent to Datadog. The reasons could be: - Resource doesn't support sending logs. Only resources types with monitoring log categories can be configured to send logs to Datadog. - Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see diagnostic settings. - Error. The resource is configured to send logs to Datadog, but is blocked by an error. - Logs not configured. Only Azure resources that have the appropriate resource tags are configured to send logs to Datadog. - Region not supported. The Azure resource is in a region that doesn't currently support sending logs to Datadog. - Datadog agent not configured. Virtual machines without the Datadog agent installed don't emit logs to Datadog. API Keys To view the list of API keys for your Datadog resource, select the Keys in the left pane. You see information about the keys. The Azure portal provides a read-only view of the API keys. To manage the keys, select the Datadog portal link. After making changes in the Datadog portal, refresh the Azure portal view. The Azure Datadog integration provides you the ability to install Datadog agent on a virtual machine or app service. If a default key isn't selected, the Datadog agent installation fails. Monitor virtual machines using the Datadog agent You can install Datadog agents on virtual machines as an extension. Go to Virtual machine agent under the Datadog org configurations in the left pane. This screen shows the list of all virtual machines in the subscription. For each virtual machine, the following data is displayed: - Resource Name – Virtual machine name - Resource Status – Whether the virtual machine is stopped or running. The Datadog agent can only be installed on virtual machines that are running. If the virtual machine is stopped, installing the Datadog agent will be disabled. - Agent version – The Datadog agent version number. - Agent status – Whether the Datadog agent is running on the virtual machine. - Integrations enabled – The key metrics that are being collected by the Datadog agent. - Install Method – The specific tool used to install the Datadog agent. For example, Chef or Script. - Sending logs – Whether the Datadog agent is sending logs to Datadog. Select the virtual machine to install the Datadog agent on. Select Install Agent. The portal asks for confirmation that you want to install the agent with the default key. Select OK to begin installation. Azure shows the status as Installing until the agent is installed and provisioned. After the Datadog agent is installed, the status changes to Installed. To see that the Datadog agent has been installed, select the virtual machine and navigate to the Extensions window. You can uninstall Datadog agents on a virtual machine by going to Virtual machine agent. Select the virtual machine and Uninstall agent. Monitor App Services using the Datadog agent as an extension You can install Datadog agents on app services as an extension. Go to App Service extension in left pane. This screen shows the list of all app services in the subscription. For each app service, the following data elements are displayed: - Resource Name – Virtual machine name. - Resource Status – Whether the app service is stopped or running. The Datadog agent can only be installed on app services that are running. If the app service is stopped, installing the Datadog agent is disabled. - App service plan – The specific plan configured for the app service. - Agent version – The Datadog agent version number. To install the Datadog agent, select the app service and Install Extension. The latest Datadog agent is installed on the app service as an extension. The portal confirms that you want to install the Datadog agent. Also, the application settings for the specific app service are updated with the default key. The app service is restarted after the install of the Datadog agent completes. Select OK to begin the installation process for the Datadog agent. The portal shows the status as Installing until the agent is installed. After the Datadog agent is installed, the status changes to Installed. To uninstall Datadog agents on the app service, go to App Service Extension. Select the app service and Uninstall Extension Reconfigure single sign-on If you would like to reconfigure single sign-on, select Single sign-on in the left pane. To establish single sign-on through Azure Active directory, select Enable single sign-on through Azure Active Directory. The portal retrieves the appropriate Datadog application from Azure Active Directory. The app comes from the enterprise app name you selected when setting up integration. Select the Datadog app name as shown below: Change Plan To change the Datadog billing plan, go to Overview and select Change Plan. The portal retrieves all the available Datadog plans for your tenant. Select the appropriate plan and click on Change Plan. Disable or enable integration You can stop sending logs and metrics from Azure to Datadog. You'll continue to be billed for other Datadog services that aren't related to monitoring metrics and logs. To disable the Azure integration with Datadog, go to Overview. Select Disable and OK. To enable the Azure integration with Datadog, go to Overview. Select Enable and OK. Selecting Enable retrieves any previous configuration for metrics and logs. The configuration determines which Azure resources emit metrics and logs to Datadog. After completing the step, metrics and logs are sent to Datadog. Delete Datadog resource Go to Overview in left pane and select Delete. Confirm that you want to delete Datadog resource. Select Delete. If only one Datadog resource is mapped to a Datadog organization, logs and metrics are no longer sent to Datadog. All billing stops for Datadog through Azure Marketplace. If more than one Datadog resource is mapped to the Datadog organization, deleting the Datadog resource only stops sending logs and metrics for that Datadog resource. Because the Datadog organization is linked to other Azure resources, billing continues through the Azure Marketplace. Next steps For help with troubleshooting, see Troubleshooting Datadog solutions. Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/azure/partner-solutions/datadog/manage
2022-05-16T08:13:45
CC-MAIN-2022-21
1652662510097.3
[]
docs.microsoft.com
getCTM method [This documentation is preliminary and is subject to change.] Gets the transformation matrix that transforms from the current user units to the viewport coordinate system for the nearestViewportElement object. Syntax ISVGMatrix retVal = object.getCTM(); Standards information - Scalable Vector Graphics: Basic Data Types and Interfaces, Section 4.5.23 Parameters This method has no parameters. Build date: 1/26/2012
https://docs.microsoft.com/en-us/previous-versions/hh778051(v=vs.85)
2022-05-16T10:17:38
CC-MAIN-2022-21
1652662510097.3
[]
docs.microsoft.com
Transmit binary data Here's how to transmit binary data from your RockBLOCK Line endings Note that RockBLOCK expects commands to be terminated with a carriage return (\r) character. This is hex 0x0D. Using a line-feed (\n) hex 0x0A character will not work! With your RockBLOCK connected to a suitable power supply, check that your serial communications are established (default 19200, 8N1) and issue the commands: /* Issue AT command */ AT\r /* Receive response */ OK\r /* Turn off Flow Control */ AT&K0\r /* Receive response */ OK\r /* Define binary message length in MO buffer not including the mandatory 2 Byte checksum*/ AT+SBDWB=[<SBD Message Length>]\r /* Receive response */ READY\r /* Stream binary message and 2 byte checksum.*/ <Binary Message>+<Checksum>\r /* Receive Response */ <Status>\r OK\r /* Initiate an Extended SBD Session */ AT+SBDIX\r /* Receive response */ +SBDIX: <MO status>, <MOMSN>, <MT status>, <MTMSN>, <MT length>, <MT queued>\r /* See SBDIX Key for information on each parameter */ Check SBDWB status Before initiating an SBD session, make sure your SBDWB status is 0.. SBDIX Key The SBDIX command denotes the status of your sent, received and queued messages: <MO status> - Any returned number with a value of 0 to 2 indicates your message has been successfully transmitted. Any number above 2 indicates that the message has not been successfully transmitted. <MOMSN> - This number denotes the MO message number and cycles between 0 and 65535. <MT status> 0 - No messages waiting to be received. 1 - New message successfully received. 2 - Error during mailbox check / message reception. <MTMSN> - This number denotes the MT message number and cycles between 0 and 65535. <MT length> - The size (in bytes) of the MT message. <MT queued> -. After sending a message using SBDWB followed by a successful SBDIX, we recommend clearing the MO buffer with the command SBDD=0. If you don't clear the MO buffer, and also don't overwrite it with a new MO message, the modem will try to re-send this same message in the next transmission. Clearing the MO buffer is necessary if you're alternating between sending/not sending MO messages. AT commands in this chapter Updated about 2 years ago
https://docs.rockblock.rock7.com/docs/transmit-binary-data
2022-05-16T08:20:50
CC-MAIN-2022-21
1652662510097.3
[]
docs.rockblock.rock7.com
New in version 3.0. This file stores metadata for SSTable. There are 4 types of metadata: Validation metadata - used to validate SSTable correctness. Compaction metadata - used for compaction. Statistics - some information about SSTable which is loaded into memory and used for faster reads/compactions. Serialization header - keeps information about SSTable schema. The file is composed of two parts. First part is a table of content which allows quick access to a selected metadata. Second part is a sequence of metadata stored one after the other. Let’s define array template that will be used in this document. struct array<LengthType, ElementType> { LengthType number_of_elements; ElementType elements[number_of_elements]; } Table of content using toc = array<be32<int32_t>, toc_entry>; struct toc_entry { // Type of metadata // | Type | Integer representation | // |----------------------|------------------------| // | Validation metadata | 0 | // | Compaction metadata | 1 | // | Statistics | 2 | // | Serialization header | 3 | be32<int32_t> type; // Offset, in the file, at which this metadata entry starts be32<int32_t> offset; } The toc array is sorted by the type field of its members. struct validation_metadata { // Name of partitioner used to create this SSTable. // Represented by UTF8 string encoded using modified UTF-8 encoding. // You can read more about this encoding in: // // Modified_UTF-8_String partitioner_name; // The probability of false positive matches in the bloom filter for this SSTable be64<double> bloom_filter_fp_chance; } // Serialized HyperLogLogPlus which can be used to estimate the number of partition keys in the SSTable. // If this is not present then the same estimation can be computed using Summary file. // Encoding is described in: // using compaction_metadata = array<be32<int32_t>, be8>; This entry contains some parts of EstimatedHistogram, StreamingHistogram and CommitLogPosition types. Let’s have a look at them first. // Each bucket represents values from (previous bucket offset, current offset]. // Offset for last bucket is +inf. using estimated_histogram = array<be32<int32_t>, bucket>; struct bucket { // Offset of the previous bucket // In the first bucket this is offset of the first bucket itself because there's no previous bucket. // The offset of the first bucket is repeated in second bucket as well. be64<int64_t> prev_bucket_offset; // This bucket value be64<int64_t> value; } struct streaming_histogram { // Maximum number of buckets this historgam can have be32<int32_t> bucket_number_limit; array<be32<int32_t>, bucket> buckets; } struct bucket { // Offset of this bucket be64<double> offset; // Bucket value be64<int64_t> value; } struct commit_log_position { be64<int64_t> segment_id; be32<int32_t> position_in_segment; } struct statistics { // In bytes, uncompressed sizes of partitions estimated_histogram partition_sizes; // Number of cells per partition estimated_histogram column_counts; commit_log_position commit_log_upper_bound; // Typically in microseconds since the unix epoch, although this is not enforced be64<int64_t> min_timestamp; // Typically in microseconds since the unix epoch, although this is not enforced be64<int64_t> max_timestamp; // In seconds since the unix epoch be32<int32_t> min_local_deletion_time; // In seconds since the unix epoch be32<int32_t> max_local_deletion_time; be32<int32_t> min_ttl; be32<int32_t> max_ttl; // compressed_size / uncompressed_size be64<double> compression_rate; // Histogram of cell tombstones. // Keys are local deletion times of tombstones streaming_histogram tombstones; be32<int32_t> level; // The difference, measured in milliseconds, between repair time and midnight, January 1, 1970 UTC be64<int64_t> repaired_at; // Minimum and Maximum clustering key prefixes present in the SSTable (valid since the "md" SSTable format). // Note that: // - Clustering rows always have the full clustering key. // - Range tombstones may have a partial clustering key prefix. // - Partition tombstones implicitly apply to the full, unbound clustering range. // Therefore, an empty (min|max)_clustering_key denotes a respective unbound range, // derived either from an open-ended range tombstone, or from a partition tombstone. clustering_bound min_clustering_key; clustering_bound max_clustering_key; be8<bool> has_legacy_counters; be64<int64_t> number_of_columns; be64<int64_t> number_of_rows; // Version MA of SSTable 3.x format ends here. // It contains only one commit log position interval - [NONE = new CommitLogPosition(-1, 0), upper bound of commit log] commit_log_position commit_log_lower_bound; // Version MB of SSTable 3.x format ends here. // It contains only one commit log position interval - [lower bound of commit log, upper bound of commit log]. array<be32<int32_t>, commit_log_interval> commit_log_intervals; } using clustering_bound = array<be32<int32_t>, clustering_column>; using clustering_column = array<be16<uint16_t>, be8>; struct commit_log_interval { commit_log_position start; commit_log_position end; } struct serialization_header { vint<uint64_t> min_timestamp; vint<uint32_t> min_local_deletion_time; vint<uint32_t> min_ttl; // If partition key has one column then this is the type of this column. // Otherwise, this is a CompositeType that contains types of all partition key columns. type partition_key_type; array<vint<uint32_t>, type> clustering_key_types; columns static_columns; columns regular_columns; } using columns = array<vint<uint32_t>, column>; struct column { array<vint<uint32_t>, be8> name; type column_type; } // UTF-8 string using type = array<vint<uint_32_t>, be8>; Type is just a byte buffer with an unsigned variant integer (32-bit) length. It is a UTF-8 string. All leading spaces, tabs and newlines are skipped. Null or empty string is a bytes type. First segment of non-blank characters should contain only alphanumerical characters and special chars like '_', '-', '+', '.', '&'. This is the name of the type. If type name does not contain any ‘.’ then it gets “org.apache.cassandra.db.marshal.” prepended to itself. Then an “instance” static field is taken from this class. If the first non-blank character that follows type name is ‘(’ then “getInstance” static method is invoked instead. Remaining string is passed to this method as a parameter. There are following types:.
https://docs.scylladb.com/architecture/sstable/sstable3/sstables-3-statistics/
2022-05-16T08:50:58
CC-MAIN-2022-21
1652662510097.3
[]
docs.scylladb.com
Saving configuration The running configuration can be saved by using the save command in configuration mode. By default, configuration is saved to the config.boot file in the /config configuration directory. The save command writes only committed changes. If you try to save uncommitted changes, the system warns you that it is saving only the committed changes. Save configuration to the default configuration file Save the configuration to the config.boot file in the default directory by entering save in configuration mode. vyatta@vyatta# save Done [edit] vyatta@vyatta# Saving configuration to another file Save the configuration to a different file in the default directory by specifying a different file name. vyatta@vyatta# save testconfig Saving configuration to '/config/testconfig'... Done [edit] vyatta@vyatta# Saving the configuration to a file provides the ability to have multiple configuration files for different situations (for example, test and production). You can also save a configuration file to a location path other than the standard /config configuration directory by specifying a different path. You can save to a hard drive, compact Flash, or USB device by including the directory on which the device is mounted in the path.
https://docs.vyatta.com/en/getting-started/quick-start-configuration-scenarios/configuration-basics-in-the-cli/saving-configuration
2022-05-16T08:35:20
CC-MAIN-2022-21
1652662510097.3
[]
docs.vyatta.com
Agora supports the media device test and selection feature, allowing you to check if a camera or an audio device (a headset, microphone, or speaker) works properly. You can use the media device test and selection feature in the following scenarios: Before proceeding, ensure that you have implemented basic real-time functions in your project. See Start a Call or Start Live Interactive Streaming for details. Refer to the following steps to test the microphone and camera: AgoraRTC.getDevicesmethod to get available devices. AgoraRTC.createStreammethod to create an audio or video stream. In this method, fill the microphoneIdor cameraIdparameter with the device ID to specify the device to be tested. Stream.initmethod to initialize the stream. In the onSuccesscallback function, play the stream. Stream.getAudioLevelmethod to get the current audio level. The audio level is greater than 0 if the microphone. AgoraRTC.getDevices AgoraRTC.createStream Stream.init Stream.play Stream.getAudioLevel stream.init, see the API Reference for details. getDevicesmethod every time before testing the devices.
https://docs-preprod.agora.io/en/live-streaming/test_switch_device_web?platform=Web
2022-05-16T09:17:08
CC-MAIN-2022-21
1652662510097.3
[]
docs-preprod.agora.io
You're viewing Apigee Edge documentation. View Apigee X documentation. This topic lists some basic characteristics of API proxies, along with links to more information. APIs are entry points for one application to use the capabilities of another. You implement API proxies to create APIs In Apigee Edge, you implement API proxies by configuring API proxy logic as a sequence of steps that execute in response to a request from client code. You expose an API proxy to clients by defining endpoints that include a URL with resource paths, an HTTP verb, body requirements, and so on. Though it's called an API proxy, from the client code's perspective, it's the API. For an overview of API proxies, see Understanding APIs and API proxies. You arrange the sequence of API proxy logic using flows In any application, data flows through the application guided by condition logic. In Apigee Edge, the path of processing is made up of flows. A flow is a sequence of stages (or "steps") that make up an API proxy's processing path. Flows are how Apigee Edge provides places for you to apply logic and behavior at specific places from client to backend resource, then back to client. For more on flows, see Controlling how a proxy executes with flows You access state data through flow variables created by API proxies An API proxy has access to variables that represent execution state. You can access these variables from the XML that configures your API proxies and policies. You can also access them when you're extending an API proxy with a procedural language, such as Java, JavaScript, or Python. These variables are held by Apigee Edge. Some exist by default, usually because they're common to what API proxies do (such as because they're part of an HTTP request). You can also create your own variables to satisfy a logic requirement. For more about variables, see Managing proxy state with flow variables. You can have API proxies execute conditionally Just as in most programming languages, in API proxies you can have code execute conditionally. Conditions are often based on API proxy state, which you can access through flow variables. For example, you can have a condition that checks for the user agent, then processes the request accordingly. For more on conditional execution, see Flow variables and conditions. You implement most logic in an API proxy using policies Most of the logic you add to an API proxy is packaged as policies. A policy is an Apigee Edge component that encapsulates logic for a functional area, such as security or traffic management. You configure a policy with XML that sets properties for the underlying logic. You arrange policies in a sequence of "steps" within a flow, so that your API proxy executes the logic in the best order for your proxy's goals. For more about policies, see What's a policy?. You can include reusable sets of functionality When your API proxy includes logic that will be used from multiple places in your code -- such as other API proxies -- you can collect that logic for calls from multiple places. For example, you can group security logic in a shared flow that other API proxies call, reducing duplication across API proxies. For more on shared flows, see Reusable shared flows. For more on API proxy chaining, see Chaining API proxies together. You can debug a proxy with the Trace tool Apigee Edge includes a Trace tool you can use to examine your API proxy's execution flow when debugging and testing. The tool visually presents each API proxy step that executes for a request. As in a debugger, at each step you can view the list of variable values that make up API proxy state. For more about debugging with Trace, see Using the Trace tool. You handle API proxy errors as faults By configuring a fault handler, you can customize the error returned to an API client. Fault handlers give you control over error messages whether the error originates from your own code or from an included component (such as a policy). For more, see Handling faults.
https://docs.apigee.com/api-platform/fundamentals/structure-api-proxies?hl=es-AR
2022-05-16T09:27:16
CC-MAIN-2022-21
1652662510097.3
[]
docs.apigee.com
5.0.2.3 #6511 UDIMs: incorrect character substitutions #6481 crash during accel construction with invalid or invisible geometry #6500 error building BVH over procedurals with no visible objects #6516 Properly warn when there are more than 15 per-light AOVs #6527maketx and AiMakeTx are extremely slow or crash with certain large images 5.0.2.2 - #6257 s/twrap not working with <attr> expressions set in the image node filename - #6411 Excessive memory use with compressed UVs - #6417 Small PolyMesh memory leak at render shutdown - #6422 OSL raytype mismatch - #6440 Handle invalid characters when image tokens are resolved - #6441 Backfacing normals not supported in normal_map - #6442 Chained bumps using bump2d isn't working - #6453 Custom procedurals cannot create nodes before proc_init - #6454 Crash With Linear Subdivision, Vertex Normals, and Deformation Blur 5.0.2.1 - #6409 ASS metadata load is too slow - #6412 Crash when setting an array user parameter with its current value 5.0.2.0 Enhancements - New sub-surface scattering algorithm:for blending two surfaces together, may require redialing materials to achieve a similar look, and is more sensitive to non-closed meshes, "mouth bags", and internal geometry potentially casting shadows. This new algorithm is exposed in the standard_surfaceshader via the new parameters subsurface_type(with enum values diffusionand randomwalk) and subsurface_anisotropy(Henyey-Greenstein's eccentricity gfrom standard_surfaceand flakesshand flake_flip_flop. An arbitrary number of layers of flakes can be used ( flake_layers). The flakes at a deep layer are covered by the ones closer to the surface and more tinted by pigments (specified by the transmission_color parameter). - Subdivisionand can be turned off for specific meshes with polymesh.subdiv_frustum_ignore true. The global options.subdiv_frustum_paddingadds a world space padding to the frustum that can be increased as needed to minimize artifacts from out-of-view objects in cast shadows, reflections, etc. Note that motion blur is not yet taken into account and moving objects might require some additional padding. - Improved accuracy of UV coords: Mesh UV coordinates are now handled with higher numerical precision, fixing jagged artifacts that sometimes appeared when using high-resolution UDIM textures over very wide UV ranges. - Improved volume sampling of low-spread lights: Low-spread quad and disk lights should now produce less noise and show increased performance when participating in atmospheric scattering effects. - Improved procedural namespace memory usage: Reduced memory used by procedural namespacing by about 28KB per procedural primitive, so that procedurals now use very little additional memory. In a scene with 100K procedurals, this gave about a 2.7GB reduction in memory use. - Faster opacity masks: Texture map-based opacity masks will now be faster to render. Testing indicates around 3-13% faster. - Faster IPR: The message logging system has been optimized, resulting in about 10% faster performance in IPR, like while moving the camera. In addition, the IPR mode in kickhas been made substantially more responsive. - Faster AiNodeDestroy: we have optimized removal time for nodes contained in procedurals, greatly reducing the shutdown time at the end of a render in complex scenes. - Faster triplanarshader: Texture filtering in the triplanarshader has been improved, giving better antialiasing and up to a 2x speedup, specially when using high-resolution texture maps. - Celullar option in triplanarshader: The triplanarshader now supports projection through Voronoi cells using the new cellparameter. The rotation angle of the projected texture for each cell can be controlled with the cell_rotateparameter. Cells can be smoothly blended using the cell_blendparameter. - Improved flakesshader: The sizeparameter is replaced by the densityparameter, which makes it easy to control the size and number of flakes. Alpha channel can be used as a mask. The new flakesshader supports non-disc shapes and 3D flakes, which are useful to render gemstone inclusions like goldstone, for example. - Improved shadow_matteshader: We have revamped and simplified the shader to make it easier to use, and fixed a number of long-standing issues: Indirect illumination now fills the global diffuse_indirectand specular_indirectAOVs, so we have removed the shader’s (confusingly named) indirect_diffuseand indirect_specularAOVs. Self-reflections are no longer rendered. A new specular_IORparameter was added that controls Fresnel reflection. Parameters offscreen_colorand background_typewere removed. The new enum parameter backgroundcan be set to either scene_background(default) or background_color, which allows to connect a specific texture in the background_colorparameter slot. Parameter alpha_maskwas added to control whether the alpha must be opaque or if it has to contain the shadow mask. - Support for more OSL attributes: OSL shaders now support getattribute()lookups of standard camera attributes (e.g. camera:fov, camera:resolution, etc) as well as the geometry attributes geom:type, geom:name, geom:bounds, and geom:objboundson objects. - Transmit AOVs and Alpha: The standard_surfaceshader with transmission can now pass through AOVs, by enabling the transmit_aovsparameter.AOV. Other AOVs can also be passed straight through (without any opacity blending), which can be used for creating masks for example. - Improved multi-threaded render time stats: Render times when using more than one thread did not really work. We have improved this so that render times are now much more reliable and useful and can now be confidently used to determine what parts of Arnold are the most expensive. In particular, the subdivision and displacement times will now show how much total time was used as well as what fraction of that time was spent with threads unable to do useful work. This "threads blocked" time can often be lowered by using larger buckets or the random bucket_scanningordering. - Custom procedural namespaces: Procedurals can now declare a custom namespace using the new namespaceparameter. This custom namespace can be used instead of the procedural name, to reference contents through absolute or relative paths. Multiple procedurals can share the same namespace by using the same custom name. Also, they can declare an empty name and they will use the global namespace. (#6085) - Added -turn_smoothoption to kick: When using kick -turn, the -turn_smoothflag can be added to smoothly start and stop the movement as the original position is reached with a cubic ramp. - Added -laovsoption to kick: Using kick -laovs file.asswill display a list of all the built-in AOVs and all the AOVs registered by this .ass file. - Added cputime heatmap view to kick: When using kickyou can now toggle between viewing kicks default output and a cputime heatmap with the Tkey. The mapping of the heat map can be scaled with the - Support for wasd keys in kick -ipr m: Running kickin Maya-style IPR mode with -ipr mnow supports "wasd" style keyboard movement. This was previously only available in Quake-style mode, -ipr q. maketxversion info: The custom maketxbinary that ships with Arnold now reports, both in the command-line and in the embedded EXR headers, that it was built specifically for "OpenImageIO-Arnold", to distinguish it from the official "OpenImageIO" one. maketxcolor spaces: The custom maketxthat ships with Arnold now supports OCIO and SynColor (when available) through the colorengine, colorconfigand colorconvertflags. See maketx --help. AiMakeTxreports info messages: AiMakeTxnow also prints out OIIO informational messages (generated in verbose mode, for instance) in addition to errors. AiMakeTxreleases input file lock sooner: AiMakeTxwill now close the input texture as soon as possible instead of waiting for all the maketx jobs to finish. AiMakeTxand maketxoptimized flags: Both AiMakeTxand maketxnow always run with the flags --monochrome-detect --opaque-detect --constant-color-detect --fixnan box3 --oiio, which can result in smaller .tx files that are faster to load and take less memory. - Report when textures are changed during render: The log files now report when texture modifications during a render cause a texture read error, which can happen in certain pipelines. - OCIO color space family support: The OCIO color manager now implements color space enumeration by family. This is useful for UI drop down organization. - OCIO view/display enumeration: The OCIO color manager can now enumerate view/display combinations using the "View (Display)" family. This lets client programs filter color spaces when only a display transform is appropriate. - .ass metadata from compressed files: We now support loading metadata from .ass.gzfiles through the AiMetadataStoreLoadFromASS()API function. - Upgraded to OIIO 1.7.17: OpenImageIO has been upgraded to OIIO 1.7.17. API additions - Color space family enumeration: Existing color space families for the current config can be enumerated using the new API methods AiColorManagerGetNumFamiliesand AiColorManagerGetFamilyNameByIndex. The addition of these new API methods requires any existing custom color managers (which we know are very rare) to be recompiled. AiAOVSampleIteratorGetPixel(): Custom filters can now determine what pixel is being filtered with the new API method AiAOVSampleIteratorGetPixel(). transparentLPE label: When setting the transparentLPE label on a BSDF, the surface will act as if it is transparent and pass through AOVs. This would typically be used for transmission BSDFs, as it is in the standard_surfaceshader. - Deprecated API warnings: Defining AI_ENABLE_DEPRECATION_WARNINGSwill cause the compiler to emit warnings if deprecated Arnold API functions are used. - Random walk SSS closure: The new random walk SSS algorithm is exposed in the C++ API as AiClosureRandomWalkBSSRDF(). The corresponding OSL closure is randomwalk_bssrdf. Incompatible changes autotiledisabled by default: We found that the autotilecode in OIIO does not scale with high-resolution textures. In order to avoid very slow loading of untiled textures (such as JPEG) when autotile was enabled, we have now changed the autotiledefault setting to 0, which effectively disables it. In very rare cases, when rendering with a large number of high-resolution untiled textures, this change might degrade performance as the texture cache will blow up. The real solution is to never use untiled files, and instead convert all untiled textures to .tx tiled files. maketxdependencies: The custom maketxthat ships with Arnold now depends dynamically on libai.soand optionally on syncolor_shader.so, therefore to work correctly it needs to be run from its installation folder, like kick, or alternatively the new dependencies should be copied to where maketxis running from. - Light groups and volume shading: Just like with surface shapes, Arnold now obeys light group assignment on volume shapes and surfaces with volumetric interiors. While in many cases desirable, this can produce an unexpected change in the final image in scenes with light group assignments. flakesshader: It was not easy to control the number of flakes with the sizeand scaleparameters because they were mutually dependent. Now this can be easily done using the new densityparameter. The shape of each flake has been changed from disc to Voronoi cell, which is more suitable to render inclusions of gem stones. The shader output type has been changed from RGB to RGBA to support a mask. motionvectorAOV: The motion vector scaling factor in the built-in motionvectorAOV has changed. This was required to fix a bug that caused zero motion vectors for certain shutter positions. The output from the motion_vectorshader is unchanged: it can be used as a workaround in your old scenes if you require the previous scaling. shadow_mattechanges: The shader AOVs indirect_diffuseand indirect_specularwere removed, since the shader now fills the corresponding built-in AOVs. Parameters offscreen_colorand background_typewere removed. Specular reflection is now affected by Fresnel. To roll back to the previous specular behaviour, set specular_IORto a high value like 100, which effectively disables the Fresnel effect. See notes in the enhancements section above. Bug fixes 5.0.2.0 - #3319 Alpha not fully opaque in output images - #4559 Non-linkable light colors should have the linkable metadata disabled - #5974 Distant light not motion blurring direction - #5981 Incorrect melanin absorption values for wide gamut rendering color spaces - #6086 procedural memory overhead - #6087 maketx fails when run on Windows with read only (mandatory) user profiles - #6089 Deep EXR output of light path expressions missing volumes - #6094 artifacts in latlong skydome_light - #6095 Fresnel discontinuity in diffuse term when texture is connected to specular_color - #6097 Subdiv: duplicate vertex index in face causes crash - #6104 Quad lights do not work with projected textures - #6116 Warn that images will be watermarked if license authorization fails - #6124 mesh_light crashing when provided non-mesh node - #6128 curvature shading differences when seen through glossy transmission/reflection - #6135 crash during accel construction if there are over 65k overlapping primitives - #6136 Indirect sample clamp not visible in AOVs - #6138 light groups not supported by volume shapes and surface shader interiors - #6141 "A" AOV is black when 8 light AOVs are used - #6143 AiShaderGlobalsGetVertexUVs not working for uvsets in free render mode - #6151 Black AOV output due to conflicting AOV type redefinition - #6162 AiShaderGlobalsGetPositionAtTime, AiTraceBackground, and AiVolumeSample crash in background shading context - #6166 AiM4Scaling does not work in python - #6168 Overriding view direction for metal BSDF not supported - #6170 Crash creating motion blurred min_pixel_width inside a procedural - #6177 wireframe artifacts when not in raster-space - #6182 MotionVector AOV empty for negative motion start / end - #6184 MtoA crash when saving a scene using motion blur - #6186 triplanar uses wrong mipmap level for some parts of the object - #6187 triplanar uses overly high res mipmaps - #6192 Crash when removing and recreating a procedural instance - #6202 counter overflow crash in big scanline EXR images - #6203 Camera corruption after substantial foward zooming with kick -ipr m - #6219 photometric_light filename does not support environment variable expansion - #6222 Metallic BSDF albedo is always white - #6242 Crash when removing a procedural node - #6245 shadow_matte AOVs - #6248 Remove offscreen_color from shadow_matte - #6249 Remove shadow_matte background_type - #6250 Re-introduce background_color in shadow_matte - #6251 shadow_matte self-reflections / self-shadowing - #6254 trace_set: crash when destroying shader - #6255 sss_irradiance_shader fails with closure-based shaders - #6256 Add alpha_mask in shadow_matte - #6286 Volume shader: crash when connected as atmosphere shader - #6287 Crash with invalid options.atmosphere and options.background shaders - #6288 Miscellaneous crashes (empty nurbs, empty implicit, atmosphere shadow_matte) - #6292 Deep Driver: write errors hang the render - #6293 Deep driver: append does not work with overscan renders - #6294 Bucket call back: user bucket coords should be snapped to bucket grid - #6295 Crash when saving empty ginstance with open_procs enabled - #6297 Camera differential evaluation can cause crashes or hangs with OSL shaders linked to camera - #6313 crash when 4-channel opacity texture has no alpha=0 - #6331 Crash on Windows when loading plugins without .dll extension - #6353 barndoor light filter result not order-independent - #6174 "host app" metadata item not readable by AiMetadataStoreLoadFromASS - #6258 Range shader can sometimes ouput infinite or nan - #6303 Support upper and mixed case versions of .ass / .ass.gz extensions
https://docs.arnoldrenderer.com/pages/viewpage.action?pageId=70746361
2022-05-16T09:39:52
CC-MAIN-2022-21
1652662510097.3
[]
docs.arnoldrenderer.com
Released Enhancements Improved sampling of photometric lights: Photometric lights now take advantage of the same techniques used for point lights. This can show significant reductions in noise, especially for large lights illuminating surfaces at grazing angles (rim lighting, for example). (#7646) Separate azimuthal roughness in standard_hair: You can now specify different roughness values for the azimuthal and longitudinal distributions on the standard_hairshader. A new roughness_azimuthalparameter is used when roughness_anisotropicis enabled. The roughnessparameter is then used to control the longitudinal roughness. When roughness_anisotropic= false, roughnesscontrols both the azimuthal and longitudinal distributions as before. (#7400) License manager priorities: Order in which license managers are checked can now be specified by the environment variable ARNOLD_LICENSE_MANAGERwhich contains a comma separated list of the following tokens ( clm, rlmor none). If ARNOLD_LICENSE_MANAGERis not set, Arnold will use the default priorities of rlm,clm. (#7384)For example, to alter the default order, and use first Autodesk Licensing (CLM) , and if it fails use RLM: $ export ARNOLD_LICENSE_MANAGER=clm,rlm To use Autodesk licensing only: $ export ARNOLD_LICENSE_MANAGER=clm To disable all license managers (you'll always get watermarks!): $ export ARNOLD_LICENSE_MANAGER=none Linkable toon uv/angle_threshold: The NPR toon shader's uv_thresholdand angle_thresholdparameters are now linkable. (#7713) Adaptive Subdivs Interruption: It is now faster to interrupt ongoing adaptive subdivision (#7722) Cryptomatte 1.1.0: Upgraded Cryptomatte to 1.1.0 which contains the fix: "Preview AOVs are now always displayed in display drivers, even when preview_in_exr is disabled". (#7631) Upgrade OIIO and OSL: OIIO has been upgraded to 2.0.1 and OSL to 1.10.1 (#6040, #7283) Profiling node stages: Profiling of node_init and node_update have been added so that it's possible to see which nodes are consuming significant time in these stages. (#7313) Incompatible changes - Round corners default radius: The default value of round_corner.radiushas been changed from 1.0 to 0.01 (#7624) - Photometric lights with radius: Just like their point_lightcounterparts, points within the sphere defined by photometric_light.radiuswill no longer be illuminated. (#7646) - Minimum OSX version requirement raised from 10.8 to 10.9 In order to upgrade to a more recent version of OIIO, the minimum required OSX version has been raised from 10.8 (Mountain Lion) to 10.9 (Mavericks). (#7318) - Maketx and unassociated alpha tiffs: OIIO 2.0.0 corrects an issue with tiff textures with unassociated alpha that will produce different results. If you need the old behavior you can use maketx --ignore-unassoc(#7749) Bug fixes - #7359 Multiple concurrent users on the same node each check out a license with CLM - #6925 Uncaught exception during OptiX device selection - #7436 Toon Render: random issues with keylight initialization - #7568 Improve the performance of parameter selection matching used by the operator runtime - #7588 Shadow artifacts in Overlapped Polygons - #7593 Randomly corrupted matrix on procedural lights - #7596 Noice: AOV variance is not found if added using LPE AOVs - #7605 photometric_lights crash when radius is non zero - #7623 Round Corners: not working with transmission - #7624 Round corners: switch default radius to 0.01 - #7629 crash when writing to an invalid profile.json - #7632 uv_camera crashes with tiny triangles without normals - #7643 noice requiring more output paths than inputs - #7663 Alembic Scalar property being translated as Array - #7682 alembic crash with bucket callback in Arnold < 5.2 - #7690 AiTextureInvalidate() crashes when the path contains unicode characters - #7691 Threaded subdivs crash on render abort - #7698 Disable OptiX when bad NVML is found - #7712 degenerate camera_projection produces invalid texture lookups - #7722 Threaded Subdivs: fix interrupt request latency regression - #7723 Crash in materialx due to invalid/unrecognized node types - #7745 shadow_matte overrides indirect AOVs even when disabled
https://docs.arnoldrenderer.com:443/display/A5ARP/5.2.2.0
2022-05-16T09:19:32
CC-MAIN-2022-21
1652662510097.3
[]
docs.arnoldrenderer.com:443
On a mobile device, forms are displayed using a 2-column layout: the left-hand column displays the field label, and the righthand column shows the field input or value. Referring to the image above: Labels are in the left column Values are in the right column By default, all fields in an IndustraForm design are included in the mobile layout. The mobile field order is taken from the desktop layout, row by row, from left to right. Mobile labels default to using the label field to the left of the input field in the desktop layout, when available. If no label is available, the field ID for the input cell is used. The label can be set manually, as is described in the next section. Often this is sufficient, but for instances where it does not produce a good layout, or the IndustraForm creator wants something specific for mobile, the layout can be customized. The following sections describe the tools available for tailoring IndustraForm designs for mobile use.
https://docs.hexagonppm.com/r/en-US/j5-IndustraForm-Designer-Help/Version-28.0/1047073
2022-05-16T09:27:25
CC-MAIN-2022-21
1652662510097.3
[]
docs.hexagonppm.com
For providers that doesn’t support LoadBalancer as a Service functionality we included a working example of how it might look like in your setup. Provided example is not a requirement and you can always use your own solution. For those examples we use gobetween project. It is free, open-source, modern & minimalistic L4 load balancer solutions that’s easy to integrate into terraform. Yes, it is. We provide this only as an example how it might look like, and at the same time trying to stay minimal on resources. As provider you’re using doesn’t support LBaaS, it’s completely up to you how you organize your frontend loadbalancing and HA for your kube-apiservers. Possibilities to achieve truly HA loadbalancing is to bootstrap 2 of those LBs and use one of the following: As our example in terraform is exactly this — just an example, you are free to use whatever else solution. Gobetween is not a requirement. The only requirement would be to provide apiEndpoint.host (and optional apiEndpoint.port) in configuration, or terraform outputs kubeone_api.values.endpoint. No, provided example loadbalancer solution only cares about kubernetes API availability, it’s not universal solution for all your workloads.
https://docs.kubermatic.com/kubeone/v1.4/examples/ha_load_balancing/
2022-05-16T09:03:34
CC-MAIN-2022-21
1652662510097.3
[]
docs.kubermatic.com
End of life notices - This will be the last release in which ZTS builds are supported. In the future, ZTS builds may not be provided, and support may be completely pulled from the codebase. - New Relic no longer supports PHP 5.3 or PHP 5.4. New Relic highly encourages upgrading to a supported version of PHP. If you would like to continue running the New Relic PHP agent with PHP 5.3 or 5.4, we recommend using version 9.16 of the agent. However, please note that we can only offer limited support in this case. - Ubuntu LTS versions earlier than 14.04, Ubuntu non-LTS versions earlier than 19.04, and Debian versions earlier than 7 "wheezy" are no longer supported. - The following frameworks or framework versions are no longer supported and may be removed from future agent builds: - Cake PHP 1.x - Joomla 1.5, 1.6, and 2.x - Kohana - Silex 1.x and 2.x - Symfony 1.x and 2.x New features - The agent now supports 64-bit PHP 8.0! Compatibility note: When PHP 8.0 detects the New Relic agent, it disables JIT. mysqli_commitis now instrumented. Bug fixes - Fixed a memory leak that occurred when short lived segments throw exceptions. - Fixed a build up of duplicate distributed tracing headers that sometimes occurred when using file_get_contents. - Fixed an issue where Laravel Lumen transactions were not being properly named.
https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/php-release-notes/php-agent-9170300
2022-05-16T08:19:56
CC-MAIN-2022-21
1652662510097.3
[]
docs.newrelic.com
A. We will take motor vehicle industry as an example. We will try to forecast aggregated demand for all passenger cars, including station wagons for one year ahead.. TIM requires no setup of its mathematical internals and works well in the business user mode. All that is required from a user is to let TIM know:41:32,425 - tim_client.api_client:save_json:74 - Saving JSONs functionality has been disabled [INFO] 2020-10-29 22:41:32,429 - tim_client.api_client:json_saving_folder_path:89 - JSON destination folder changed to logs. configuration_backtest = { 'usage': { 'predictionTo': { 'baseUnit': 'Sample', # units that are used for specifying the prediction horizon length (one of 'Day', 'Hour', 'QuarterHour', 'Sample') 'offset': 12 # number of units we want to predict into the future (24 hours in this case) }, 'backtestLength': rate and contains data from 1967-01 to 2019-08. Aggregated demand of motor vehicles labeled DAUTONSA. No predictors included. Timestamp is the first column and each value of the timestamp is the period it corresponds to i.e. ‘DAUTONSA’ in the row with timestamp 2011-01 corresponds to the whole demand during period between 2011-01-01 and 2011-01-31. In this example we will simulate a year ahead scenario (12 samples - 12 months). Each month we wish to have forecasts for 12 months ahead starting from the next month. We suppose that the demand of the preceding month is already known. data = tim_client.load_dataset_from_csv_file('data.csv', sep=',') # loading data from data.csv data # quick look at the data 632[:, "DATE"], y=data.loc[:, "DAUTONSA"],()
https://docs.tangent.works/TIM-Tangent-Information-Modeler/TIM-Forecasting/Solution-templates/MotorVehicleSales/MotorVehicleSales.html
2022-05-16T09:40:27
CC-MAIN-2022-21
1652662510097.3
[]
docs.tangent.works
Client-server pattern WorldQL is generic enough to serve multiple purposes: - It can compliment existing game servers and be used to build horizontal scaling solutions. This is how our Minecraft scalability solution works. In this case, the Minecraft servers themselves are WorldQL clients, not the players. - It can replace traditional game servers and serve data directly to players. This standalone technique is used in our example browser apps in which the browser itself is the client. Throughout this documentation, we use some potentially confusing terminology. Here's a quick glossary: - WorldQL client refers to a consumer of the WorldQL server's API. This WorldQL client could be a game server in itself (eg. a Minecraft server running the Mammoth plugin, which is actually called "WorldQLClient". - Player refers to the client of an end user. This is to avoid confusion between a WorldQL client that's another game server and one that's an actual end-user. "Player"s are always a WorldQL client, but not all WorldQL clients are players. WorldQL follows a simple authoritative server model. While the client is free to make predictions to hide latency, the server ultimately holds authority over the game state. This means that WorldQL cannot be used to build peer-to-peer games.
https://docs.worldql.com/architecture/client-server
2022-05-16T08:28:18
CC-MAIN-2022-21
1652662510097.3
[]
docs.worldql.com
Difference between revisions of "Reservation payment" Revision as of 15:34, 4 January 2018 Payments with reservation / escrow A reservation (escrow) payment scenario means that upon charging the funding source of the payer, the money is not immediately handed over to the merchant. Instead, it becomes reserved (or blocked) for a given period of time. During this time window, the amount is unavailable for both the payer and the payee - consider it similar to a pending card transaction in a bank system. During the reservation period, the merchant has the right to decide wether to finalize, partially finalize or cancel the purchase. This comes in handy in situations where goods or services are not immediately available at the time of purchase, or might become unavailable by the time of shipping/fulfillment. Prerequisites In order to implement reservation payments, you need to familiarize yourself with the simple Responsive Web Payment scenario. The preparation of payments, redirection to the Barion Smart Gateway and handling the Callback mechanism are identical to immediate payments. There are only additions to the immediate scenario. Key differences when using reservation The finishing step The most important thing to note when implementing a reservation payment process is that there is an extra step required to complete the payment: finishing the reservation. This must be done by the merchant, before the reservation time runs out. This can be done by calling the v2/Payment/FinishReservation API endpoint. The time window available is also set by the merchant during the preparation of the payment - see the ReservationPeriod parameter in the v2/Payment/Start API endpoint. Pre-deducted fees The Barion system deducts fees from the merchant after each payment. In case of immediate payments, this mechanism is quite straightforward: when the payment is completed, the money is transferred to the merchant and the fee is deducted and transferred to the Barion system. When using reservation, this process becomes a bit more complicated. In order to prepare a reservation payment, the merchant must ensure that they have enough money to cover all the necessary fees for the payment - even if in the end the payment is finished with a smaller amount than prepared! To make sure the merchant does not spend the fees in the meantime, the Barion system reserves the fee amount in the merchant's wallet, and only deducts or releases it when the payment was successfully completed and finished. If the merchant does not have enough money to cover the fees, the /Payment/Start endpoint will return a DoNotHaveEnoughMoneyToPreparePayment error code. The lifecycle of a reservation payment 1. The merchant prepares the payment Preparing the payment is identical to the Responsive Web Payment scenario, the two differences are that the payment type must be set to Reservation, and the caller must provide a reservation period. 2. The customer completes the payment When the payer's funding source gets charged successfully, the amount gets reserved. The location of the money depends on the funding source: - when paying with Barion wallet, the amount stays reserved in the wallet of the payer - when paying with bank card, the amount is transferred to the merchant, and stays reserved in the wallet of the merchant At this moment the Reservation period timer starts and the payment enters into Reserved status. A callback is sent to the merchant. NOTE: the Barion Smart Gateway user interface is identical in all payment scenarios. The customer might not be (as they are not needed to be) aware that there is a reservation taking place in the background! Should this be an issue, the merchant should clarify this in their own respective environment. 3.a. The merchant finishes the reservation Unless the reservation period timer passes, the merchant has to finish the reservation in order to claim the amount. When finishing a reservation, the merchant must provide the amount they want to finish the payment with. In layman's terms the merchant is stating that "of this reserved amount of X, I would like to receive Y in the end", where 0 <= Y <= X. This is done per payment transaction. The merchant can finish all transactions contained in a payment in one single API call, or in separate calls. However, each payment transaction can only be finished once. The finishing amount must not exceed the prepared amount for a payment transaction. The merchant can also finish a payment with the total amount of zero, if they want to cancel or storno the payment. The reservation outcome is different depending on the finishing amount. - if the finishing amount equals to the prepared amount (everything was okay with the payment, all goods have been delivered or services were fulfilled), then the reserved amount is released and becomes available in the wallet of the merchant - if the finishing amount is less than the prepared amount, but is greater than zero (some goods could not be delivered, the customer did not want all products in the end or a service was only partially fulfilled), the finishing amount is released and becomes available in the wallet of the merchant, and the remaining amount is refunded to the customer - if the finishing amount is zero (the customer declined the order or the goods/services are not available), the whole amount is refunded to the customer Refunds during finish - if the customer paid with their Barion balance, the amount is given back to them and becomes available in their Barion wallet - if the customer paid with a bank card, the amount is refunded to that bank card NOTE: bank card refunds can take up to 30 days depending on the bank system! IMPORTANT: if the bank card becomes unavailable (it is either expired or blocked by the owner) by the time of finishing the reservation, then the refund will fail and the merchant receives the full prepared amount - in this case, it is the merchant's full responsibility to arrange the return of the remaining amount to the customer! If the merchant finishes the reservation successfully, the payment enters into Succeeded status. A callback is sent to the merchant. 3.b. Reservation timer elapses If there was no finishing before the reservation timer passes, all unfinished transactions in the payment are automatically finished with an amount of zero. In this case, the unfinished amount is refunded to the customer (in the same way as the merchant would have finished the payment). - if there were any finished transactions in the payment, then the payment enters into PartiallySucceededstatus - if there was no finished transactions at all, the payment enters into Expiredstatus In either case, a callback is sent to the merchant.
https://docs.barion.com/index.php?title=Reservation_payment&diff=prev&oldid=1365
2019-06-16T00:47:25
CC-MAIN-2019-26
1560627997508.21
[]
docs.barion.com
Contents Now Platform Capabilities Previous Topic Next Topic Getting started with workflows Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Getting started with workflows The graphical Workflow Editor provides a drag-and-drop interface for automating multi-step processes across the platform. Each workflow consists of a sequence of activities, such as generating records, notifying users of pending approvals, or running scripts. The workflow starts when a triggering event occurs. Common triggers include a record being inserted into a specific table, or a particular field in a table being set to a specified value. For example, you might create a workflow that runs whenever a user requests approval for an item they want to order from the catalog. When an activity completes, the workflow transitions to next activity. An activity might have several different possible transitions to various activities, depending on the outcome of the activity. Continuing the example above, if the user's request is approved, the activity might transition to an activity that notifies someone to order the item; if the user's request is denied, the activity might transition to notifying the user that their request has been denied. The graphical Workflow Editor represents workflows visually as a type of flowchart. It shows activities as boxes labelled with information about that activity and transitions from one activity to the next as lines connecting the boxes. At each step in a workflow: An activity is processed and an action defined by that activity occurs. At the completion of an action by an activity, the workflow checks the activity's conditions. For each matching condition, the workflow follows the transition to the next activity. When the workflow runs out of activities, the workflow is complete. Workflow. Sample activity For more information on available activities and their behaviors, see Workflow activities. Transitions After the workflow condition is evaluated, the workflow transition determines which activity is performed when the workflow condition is met. This is a transition that always leads from the Change Approved script to the Change Task activity: Figure 2. Sample transition Exit conditions After a workflow activity is performed, the workflow condition is evaluated to determine which transition is activated. The condition determines behavior based on a change being approved or rejected: Figure 3. Sample exit conditions Workflow example During workflow editing or while an unpublished workflow is running, only the person who checked out the workflow can view the changes. After a workflow is published, it is available to other users. The workflow moves through the process as defined in the Workflow Editor. The entire workflow is represented in one screen. For example, this is the Standard Change workflow: Figure 4. Sample change workflow On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/workflow/reference/getting-started-workflows.html
2019-06-16T01:28:42
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
Contents IT Service Management Previous Topic Next Topic Adjust a contract Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Adjust a contract After creating a contract, you can adjust it as necessary. Before you beginRole required: contract_manager or admin About this task For example, you may need to change the start date, end date, or payment amount for a contract. If a contract has a rate card, the rate card start date, end date, and base cost can also be adjusted. To adjust a contract, the State should be Active. If the end date of a contract rate card changes, the end date of any associated assets changes to match. Procedure Navigate to Contract Management > Contract > All. Select a contract in Active state. Click Adjust. Double-click in any field to edit information. Table 1. Adjust contract values Field Description Contract Start Date Date on which the contract takes effect. Contract End Date Date on which the contract expires. Contract Payment Amount Total amount paid for the contract. If the contract has one or more rate cards, this field shows the total of all rate card base costs. Rate Card Name Name of the rate card. Start date Date on which the rate card values take effect. End date Date on which the rate card values expire. Base cost Amount that must be paid before taxes. Click Apply changes to contract and rate cards. Renew a contractAfter creating a contract, you can renew it, if necessary.Extend a contractAfter creating a contract, you can extend it, if necessary. Extending the end date retains contract information and history.Cancel a contractYou can cancel a contract when the State is Active.Related TasksCreate a contractVerify contract administrator assignment for notificationSend the contract for approvalCreate a contract rate cardMonitor a contractRelated ConceptsTerms and conditions On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/madrid-it-service-management/page/product/contract-management/task/t_AdjustAContract.html
2019-06-16T01:04:24
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
Diagnostic Warning - Torn Lifestyle¶ Warning Description¶ When multiple Registration instances with the same lifestyle map to the same component, the component is said to have a torn lifestyle. The component is considered torn because each Registration instance will have its own cache of the given component, which can potentially result in multiple instances of the component within a single scope. When the registrations are torn the application may be wired incorrectly which could lead to unexpected behavior. How to Fix Violations¶ Make sure the creation of Registration instances of your custom lifestyle goes through the Lifestyle.CreateRegistration method instead directly to Lifestyle.CreateRegistrationCore. When to Ignore Warnings¶ This warning most likely signals a bug in a custom Lifestyle implementation, so warnings should typically not be ignored. The warning can be suppressed on a per-registration basis as follows: var fooRegistration = container.GetRegistration(typeof(IFoo)).Registration; var barRegistration = container.GetRegistration(typeof(IBar)).Registration; fooRegistration.SuppressDiagnosticWarning(DiagnosticType.TornLifestyle); barRegistration.SuppressDiagnosticWarning(DiagnosticType.TornLifestyle);
https://simpleinjector.readthedocs.io/en/latest/tornlifestyle.html
2019-06-16T00:54:35
CC-MAIN-2019-26
1560627997508.21
[]
simpleinjector.readthedocs.io
EmR1607 Rule Text Please see the EmR1607 info page for dates, other information, and more related documents. Department of Agriculture, Trade and Consumer Protection (ATCP) Administrative Code Chapter Group Affected: Chs. ATCP 90-139; Trade and Consumer Protection Administrative Code Chapter Affected: Ch. ATCP 101 (Revised) Related to: Vegetable contractors and the agricultural producer security fund assessment Comment on this emergency rule Related documents: EmR1607 Initial Regulatory Flexibility Analysis EmR1607 Fiscal Estimate
https://docs.legis.wisconsin.gov/code/register/2016/722A1/register/emr/emr1607_rule_text/emr1607_rule_text
2019-06-16T00:28:59
CC-MAIN-2019-26
1560627997508.21
[]
docs.legis.wisconsin.gov
HREF Attribute | href Property Note: This documentation is preliminary and is subject to change. Sets or retrieves the destination URL or anchor point. Syntax Possible Values The property is read/write. The property has no default value. Expressions can be used in place of the preceding value(s), as of Microsoft® Internet Explorer 5. For more information, see About Dynamic Properties. Remarks HREF attributes on anchors can be used to jump to bookmarks or any object's identification attribute. If HREF is present but is blank (HREF="" OR HREF=), executing the link could possibly display the directory containing the current page, or it could generate an error, depending on other elements on the web page and the server environment. When an anchor is specified, the link to that address is represented by the text between the opening and closing anchor tags. For more information on standard Internet protocols such as ftp, http, and mailto, see Predefined Protocols. Noteand is defined in World Wide Web Consortium (W3C) Document Object Model (DOM) Level 1 .. Applies To
https://docs.microsoft.com/en-us/previous-versions/ms533863%28v%3Dvs.85%29
2019-06-16T02:24:11
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Error: "Guided Access app unavailable. Please contact your administrator" When a single app mode configuration is applied to a device and the specified app is not already installed, the device will display the following message: Guided Access app unavailable. Please contact your administrator The device will remain non-functional until single app lock is disabled on the device. iOS does not currently allow apps to be installed or updated while single app lock is enabled, so the single app mode configuration must be disabled before installing the necessary app.
https://docs.simplemdm.com/article/88-error-guided-access-app-unavailable-please-contact-your-administrator
2019-06-16T00:45:27
CC-MAIN-2019-26
1560627997508.21
[]
docs.simplemdm.com
You are browsing documentation for a version other than the latest stable release. Switch to the latest stable release, 2.0. This chapter introduces the technical details of raw flash support in Mender. Support for raw flash memory under Linux is in general more complicated than working with block devices. It is advised to have a fully working bootloader, kernel and rootfs before introducing Mender.. The image can be run by calling a mender-qemu helper script provided in meta-mender-qemu layer: QEMU_SYSTEM_ARM=$HOME/qemu-install/bin/qemu-system-arm \ VEXPRESS. Using raw flash devices under Linux is more complicated compared to typical block devices such as hard disks or eMMC flash. Block devices typically come with a partition table, using either the MBR or GPT formats, that enables discovery and identification of existing partitions. In the case of raw flash devices, no partition tables are in use. Instead, the kernel must be informed about existing partitions, their start locations and sizes. This can achieved using a number of different methods: arch/<ARCH>/mach-<MACH>(deprecated) Raw flash boards currently supported by Mender use the kernel command line to pass information about MTD partitions. Using vexpress-a9 QEMU target as an example, the flash area is partitioned like this: u-boot u-boot-env ubi Having enabled CONFIG_CMD_MTDPARTS in U-Boot we can see the following output after issuing the mtdparts command at U-Boot prompt: => mtdparts device nor2 <40000000.flash>, # parts = 3 #: name size offset mask_flags 0: u-boot 0x00100000 0x00000000 1 1: u-boot-env 0x00100000 0x00100000 0 2: ubi 0x07e00000 0x00200000 0 active partition: nor2,0 - (u-boot) 0x00100000 @ 0x00000000 defaults: mtdids : nor2=40000000.flash mtdparts: mtdparts=40000000.flash:1m(u-boot)ro,1m(u-boot-env),-(ubi) Note that the mtdparts command line argument is using the same device name as produced by board devicetree bindings. Booting the kernel, the following log listing MTD partitions will be visible: [ 0.844595] Concatenating MTD devices: [ 0.844712] (0): "40000000.flash" [ 0.844814] (1): "40000000.flash" [ 0.844891] into device "40000000.flash" [ 0.845949] 3 cmdlinepart partitions found on MTD device 40000000.flash [ 0.846161] Creating 3 MTD partitions on "40000000.flash": [ 0.846579] 0x000000000000-0x000000100000 : "u-boot" [ 0.852186] 0x000000100000-0x000000200000 : "u-boot-env" [ 0.855636] 0x000000200000-0x000008000000 : "ubi" MTD partitions can be viewed in the running system by inspecting /proc/mtd: root@vexpress-qemu-flash:~# cat /proc/mtd dev: size erasesize name mtd0: 00100000 00040000 "u-boot" mtd1: 00100000 00040000 "u-boot-env" mtd2: 07e00000 00040000 "ubi" The ubinize and mkfs.ubifs arguments are a little complicated to get right. One can use mtdinfo in a running system to obtain a set a reasonable defaults. Using the vexpress-a9 QEMU target as an example: root@vexpress-qemu-flash:~# mtdinfo -u /dev/mtd2 mtd2 Name: ubi Type: nor Eraseblock size: 262144 bytes, 256.0 KiB Amount of eraseblocks: 504 (132120576 bytes, 126.0 MiB) Minimum input/output unit size: 1 byte Sub-page size: 1 byte Character device major/minor: 90:4 Bad blocks are allowed: false Device is writable: true Default UBI VID header offset: 64 Default UBI data offset: 128 Default UBI LEB size: 262016 bytes, 255.9 KiB Maximum UBI volumes count: 128 Note that these settings will generally be different depending on the type of flash memory. Once determined, the parameters for mkfs.ubifs and ubinize must be set in the Yocto configuration using MKUBIFS_ARGS and UBINIZE_ARGS variables respectively. Since these parameters are a specific for given board, it is possible they may already be set by a corresponding machine configuration. To enable UBI support in U-Boot and integrate it with the kernel you will need to enable at least these configuration options in U-Boot: CONFIG_CMD_UBI CONFIG_CMD_UBIFS CONFIG_MTD_DEVICE CONFIG_MTD_PARTITIONS CONFIG_CMD_MTDPARTS CONFIG_LZO Optionally, to match the kernel configuration, you may need to set CONFIG_MTD_CONCAT to enable automatic concatenation of neighboring flash devices into a single one. Using vexpress-a9 as an example, a minimal boot script is then: "kernel_addr_r=0x60100000\0" \ "fdt_addr_r=0x60000000\0" \ "fdtfile=vexpress-v2p-ca9.dtb\0" \ "mtdparts=40000000.flash:1m(u-boot)ro,1m(u-boot-env)ro,-(ubi)\0" \ "ubiargs=ubi.mtd=2 root=ubi0:rootfs rootfstype=ubifs ubi.fm_autoconvert=1\0" \ "ubiboot=" \ "echo Booting from NOR...; " \ "ubi part ubi && " \ "ubifsmount ubi0:rootfs && " \ "ubifsload ${kernel_addr_r} /boot/zImage && " \ "ubifsload ${fdt_addr_r} /boot/${fdtfile} && " \ "setenv bootargs ${mtdparts} ${ubiargs} ${defargs} && " \ "bootz ${kernel_addr_r} - ${fdt_addr_r}\0" The script: ubidevice defined by mtdparts, creating the ubi0device. rootfsfrom ubi0. zImageat kernel load address 0x60100000 (start of RAM + 1MB offset). bootargsto include: mtdparts- MTD partitions, locations, sizes and naming. ubiargs- MTD device carrying UBI, root file system location contents on ubi0:rootfsvolume and sets rootfs type to UBIFS. defargs- additional default arguments, such as console, panic settings and similar. To enable UBI support, inherit the mender-full-ubi class in your local.conf and take a look at the various UBI related variables in mender-install-ubi.bbclass. Mender support will create a UBI image file ( ubimg in ${DEPLOYDIR}/images) including the following volumes: rootfsa- contents of root filesystem A rootfsb- contents of root filesystem B data- contents of data partition The ubimg image file can be used for populating the UBI partition with the ubiformat tool. By default a *.ubifs root filesystem image will be used when generating a Mender artifact. By inheriting mender-install-ubi (included in mender-full-ubi) the following configuration settings will be set automatically: MENDER_STORAGE_DEVICE- defaults to ubi0 MENDER_ROOTFS_PART_A- defaults to ubi0_0 MENDER_ROOTFS_PART_B- defaults to ubi0_1 MENDER_ROOTFS_PART_A_NAMEand MENDER_ROOTFS_PART_B_NAME- defaulting to ubi0:rootfsaand ubi0:rootfsb Also, you will need to set the following configuration options: MENDER_STORAGE_TOTAL_SIZE_MB- size of your flash MENDER_DATA_PART_SIZE_MB- desired size of data partition MENDER_PARTITION_ALIGNMENT_KB- partition alignment, set to match erase block size When using UBI you may need to set MENDER_STORAGE_RESERVED_RAW_SPACE to account for space lost to UBI metadata. These settings affect the calculated rootfs size. Note that the calculated rootfs size (i.e. volume size) is different from the actual amount of data that can be stored in rootfs. The difference is caused by compression. To set an upper boundary on the amount of rootfs data, you can define IMAGE_ROOTFS_MAXSIZE. For U-Boot, on top of options listed in U-Boot you will need to enable the options required by Mender listed in U-Boot integration. The U-Boot boot process remains very much the same as described in integration points with the addition of a call to mender_setup script (using vexpress-a9 as an example): "set_ubiargs=setenv ubiargs ubi.mtd=${mender_mtd_ubi_dev_name} " \ "root=${mender_kernel_root} rootfstype=ubifs ubi.fm_autoconvert=1\0" \ "ubiboot=" \ "echo Booting from NOR...; " \ "run mender_setup; " \ "run set_ubiargs; " \ "ubi part ${mender_mtd_ubi_dev_name} && " \ "ubifsmount ${mender_uboot_root_name} && " \ "ubifsload ${kernel_addr_r} /boot/zImage && " \ ... Note that U-Boot places some constraints on parameter expansion, for this reason the parameter ubiargs is no longer set by default environment. Instead it is set by the intermediate set_ubiargs script.
https://docs.mender.io/1.5/devices/raw-flash
2019-06-16T01:00:32
CC-MAIN-2019-26
1560627997508.21
[]
docs.mender.io
Deprecation: #67288 - Deprecate DbalDatabaseConnection::MetaType() method¶ See Issue #67288 Description¶ The following public function has been marked as deprecated as the bugfix requires a signature change: Dbal\DatabaseConnection->MetaType() Impact¶ Using this function will throw a deprecation warning. Due to missing information the field type cache will be bypassed and the DBMS will be queried for the necessary information on each call.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.4/Deprecation-67288-DeprecateDbalMetaType.html
2019-06-16T01:50:20
CC-MAIN-2019-26
1560627997508.21
[]
docs.typo3.org
mender. mender. Clone mender-convert from the official repository: git clone -b 1.1.0 Change directory to where you downloaded mender-convert: cd mender-convert Then, run the following command to build the container based on Ubuntu 18.04 with all required dependencies for mender-convert: ./docker-build Move your golden disk image into an input subdirectory: mkdir -p input mv <PATH_TO_MY_GOLDEN_IMAGE> input/golden-image-1.img Then adjust to the correct paths below and run the conversion: DEVICE_TYPE="raspberrypi3" RAW_DISK_IMAGE="input/golden-image-1.img" ARTIFACT_NAME="golden-image-1-mender-integ" MENDER_DISK_IMAGE="golden-image-1-mender-integ.sdimg" TENANT_TOKEN="<INSERT-TOKEN-FROM Hosted Mender>" ./docker-mender-convert from-raw-disk-image \ --raw-disk-image $RAW_DISK_IMAGE \ --mender-disk-image $MENDER_DISK_IMAGE \ --device-type $DEVICE_TYPE \ --artifact-name $ARTIFACT_NAME \ --bootloader-toolchain arm-buildroot-linux-gnueabihf \ --server-url "" \ --tenant-token $TENANT_TOKEN The conversion may take 10 minutes, depending on the resources available on your machine. After a successful conversion, your images can be found in output/. The above invocation will use configuration defaults for use with the Hosted Mender in production environment. If on premise server is used, then adjust --server-url accordingly, and replace --tenant-token option with --server-cert option, specifying a valid path for the server certificate. If instead, you with to use the Mender demo environment, execute the command with these parameters: bash DEVICE_TYPE="raspberrypi3" RAW_DISK_IMAGE="input/golden-image-1.img" ARTIFACT_NAME="golden-image-1-mender-integ" MENDER_DISK_IMAGE="input/golden-image-1-mender-integ.sdimg" DEMO_HOST_IP="192.168.10.2" ./docker-mender-convert from-raw-disk-image \ --raw-disk-image $RAW_DISK_IMAGE \ --mender-disk-image $MENDER_DISK_IMAGE \ --device-type $DEVICE_TYPE \ --artifact-name $ARTIFACT_NAME \ --bootloader-toolchain arm-buildroot-linux-gnueabihf \ --demo \ --demo-host-ip $DEMO_HOST_IP
https://docs.mender.io/2.0/artifacts/debian-family
2019-06-16T01:00:06
CC-MAIN-2019-26
1560627997508.21
[]
docs.mender.io
Lab 16: External Automation in InfoPath 2003 Microsoft Corporation April 2004 Applies to: Microsoft® Office InfoPath™ 2003 Summary: Learn how to write scripts that automatically open, update, and save forms without user interaction. (3 printed pages) Download the odc_INF03_Labs.exe sample file. Contents Prerequisites Scenario Lab Objective Conclusion Prerequisites - A familiarity with Microsoft® JScript® Scenario In order to save money and improve efficiency, some processes at Contoso Corporation require the ability to modify a Microsoft® Office InfoPath™ 2003 form without user interaction. The information technology (IT) department at Contoso must write scripts that automate these changes. Lab Objective - In this lab, learn how to modify a form using external automation. Exercises Exercise 1: Update Data in an Existing Form A customer recently changed its name from "Company A" to "Company B." The IT department must write scripts that automatically update the sales report form to reflect this name change. To update data in an existing form Copy the training files (Form1.xml and Lab16Template.xsn) to c:\Lab16 folder. Create a new file in the folder named update.js, which contains the following code /; Save the file. Open a command prompt, and then browse to c:\lab16\. At the command prompt, type cscript update.js to run the script contained in the update.js file. Form2.xml is created, and contains the new company name, Company B. Conclusion.
https://docs.microsoft.com/en-us/previous-versions/office/ms788215(v=office.11)
2019-06-16T01:15:28
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Reading and Writing Data¶ Contents A ts.flint.FlintContext is similar to a pyspark.sql.SQLContext in that it is the main entry point to reading Two Sigma data sources into a ts.flint.TimeSeriesDataFrame. Converting other data sources to TimeSeriesDataFrame¶ You can also use a ts.flint.FlintContext to convert an existing pandas.DataFrame or pyspark.sql.DataFrame to a ts.flint.TimeSeriesDataFrame in order to take advantage of its time-aware functionality: >>> df1 = flintContext.read.pandas(pd.read_csv(path)) >>> df2 = (flintContext.read ... .option('isSorted', False) ... .dataframe(sqlContext.read.parquet(hdfs_path))) Writing temporary data to HDFS¶ You can materialize a pyspark.sql.DataFrame to HDFS and read it back later on, to save data between sessions, or to cache the result of some preprocessing. >>> import getpass >>> filename = 'hdfs:///user/{}/filename.parquet'.format(getpass.getuser()) >>> df.write.parquet(filename) The Apache Parquet format is a good fit for most tabular data sets that we work with in Flint. To read a sequence of Parquet files, use the flintContext.read.parquet method. This method assumes the Parquet data is sorted by time. You can pass the .option('isSorted', False) option to the reader if the underlying data is not sorted on time: >>> ts_df1 = flintContext.read.parquet(hdfs_path) # assumes sorted by time >>> ts_df2 = (flintContext.read ... .option('isSorted', False) ... .parquet(hdfs_path)) # this will sort by time before load
https://ts-flint.readthedocs.io/en/latest/context.html
2019-06-16T01:04:31
CC-MAIN-2019-26
1560627997508.21
[]
ts-flint.readthedocs.io
ClickOnce Deployment for Windows Forms The following topics describe ClickOnce, a technology used for easily deploying Windows Forms applications to client computers. Related Sections Choosing a ClickOnce Deployment Strategy Presents several options for deploying ClickOnce applications. Choosing a ClickOnce Update Strategy Presents several options for updating ClickOnce applications. Securing ClickOnce Applications Explains the security implications of ClickOnce deployment. Troubleshooting ClickOnce Deployments Describes various problems that can occur when deploying ClickOnce applications, and documents the top-level error messages that ClickOnce might generate. ClickOnce and Application Settings Describes how ClickOnce deployment works with application settings, which stores application and user settings for future retrieval. Trusted Application Deployment Overview Describes a ClickOnce feature that allows trusted applications to run with a higher level of permission on client computers. ClickOnce and Authenticode Describes how Authenticode technology is used in trusted application deployment. Walkthrough: Manually Deploying a ClickOnce Application Demonstrates using command-line and SDK tools to deploy a ClickOnce application without using Visual Studio. How to: Add a Trusted Publisher to a Client Computer for ClickOnce Applications Demonstrates the one-time configuration of client computers required for trusted application deployment. How to: Specify an Alternate Location for Deployment Updates Demonstrates configuring a ClickOnce application, using SDK tools, to check a different location for new versions of an application. Walkthrough: Downloading Assemblies on Demand with the ClickOnce Deployment API Demonstrates using API calls to retrieve an assembly the first time your application attempts to load it. How to: Retrieve Query String Information in an Online ClickOnce Application Demonstrates retrieving parameters from the URL used to run a ClickOnce application. ClickOnce Cache Overview Describes the cache used to store ClickOnce applications on the local computer. Accessing Local and Remote Data in ClickOnce Applications Describes how to access local data files and remote data sources from a ClickOnce application. How to: Include a Data File in a ClickOnce Application Demonstrates how to mark a file so that it is available in the ClickOnce data directory. See also Feedback Send feedback about:
https://docs.microsoft.com/en-us/dotnet/framework/winforms/clickonce-deployment-for-windows-forms
2019-06-16T01:13:07
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Configure kiosks and digital signs on Windows desktop editions Warning Some information relates to prereleased product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Some desktop devices in an enterprise serve a special purpose, such as a PC in the lobby that customers can use to view your product catalog or a PC displaying visual content as a digital sign. Windows 10 offers two different locked-down experiences for public or specialized use: Kiosk configurations are based on Assigned Access, a feature in Windows 10 that allows an administrator to manage the user's experience by limiting the application entry points exposed to the user. There are several kiosk configuration methods that you can choose from, depending on your answers to the following questions. Important Single-app kiosk mode is not supported over a remote desktop connection. Your kiosk users must sign in on the physical device that is set up as a kiosk. Summary of kiosk configuration methods Feedback Send feedback about:
https://docs.microsoft.com/en-us/windows/configuration/kiosk-methods
2019-06-16T01:06:57
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Using Bonsai.io: Creating an account on Bonsai is the first step towards provisioning and scaling an Elasticsearch cluster on demand. The following guide has everything you need to know about: - Creating your first cluster - Managing Your Account - Updating your Organization - Adding, Editing and Removing Team Members - Updating Billing - Adding Coupon Codes - Revoking Sessions - Creating a New Account - Canceling Your Account Navigate to the sign up page, where you’ll provide an email address: We validate emails using RFC 2822 and RFC 3696. If you have a non-conforming email address, let us know at [email protected]. If everything is good, you should receive an email from us shortly with a confirmation link. Confirm your email Navigate to your email client. You should receive the following email from our system: Click on the button, which will take you to a form to complete your sign up. Complete your account details Haven’t We Seen You Before? If you have signed up for Bonsai in the past using this email address, you will receive an email directing you to log in using it. After clicking on the Confirm email button from the confirmation email, you will be presented with a simple form to complete your account. Protip: Use a strong password. Create a Cluster Once your account has been created, you’ll be able to create your first cluster. You can provision a free starter cluster without entering payment details. If you wish to provision or upgrade to a larger cluster with more resources, you’ll need to add a credit card (see below). This form will allow you to specify some details about the cluster you want: The dashboard lets you customize your cluster a bit before it’s provisioned. As you make changes, the summary on the right hand side of the screen will update. There are several attributes that can be configured, which are discussed in detail below. Cluster Name Users are able to manage multiple clusters through their Bonsai dashboard.. Cluster Region We manage our infrastructure on Amazon Web Services, and run clusters in several of their regions. Users are able to provision clusters in the following regions: - us-east-1 (Virginia) - us-west-2 (Oregon) - eu-west-1 (Ireland) - ap-southeast-2 (Sydney) - eu-central-1 (Frankfurt). These regions are supported due to broad demand. We can support other regions as well, but pricing will vary. Shoot us an email if you’d like to learn more about getting Bonsai running in a region not listed above. Also note that you will want to select a region that’s as close to where your application is hosted as possible to minimize latency. Doing so will ensure the fastest search and best user experience for your application. Elasticsearch Version We support a number of Elasticsearch versions in different regions, however not all regions will support all versions. The available options will appear in this dashboard, and if a version is not available in the region you have selected, it will be grayed out. We can accommodate specific versions of Elasticsearch on Enterprise deployments. If you need Bonsai to run a version of Elasticsearch in a particular region not available on the Create a Cluster dashboard, email us to discuss options. Cluster Plan The capabilities of your cluster are determined by the plan level. Bonsai meters on a number of different attributes like disk space and concurrency, and these limits scale with plan level. Generally: - Free and Staging plans have the lowest limits - Production grade plans have limits suitable for production applications - Enterprise grade plans have no metering at all You can upgrade your plan at any time, and your cluster will scale automatically. In some cases, such as upgrading to a single tenant or Enterprise grade plan, we will need to schedule a data migration, which may require a few minutes of read-only mode. If this is a possibility, we will work out the details with you beforehand. Free to Start! No credit card is required to spin up a free cluster on Bonsai. Free clusters are designed for small apps with minimal usage, and their limits reflect this. If you decide that you’d like to scale up to something a little beefier, you’ll need to supply a credit card. Limitations While the first cluster can be free, customers are limited to a single free cluster for each production-grade (paid) cluster. When you’re ready to proceed, click on “Create Cluster.” Congratulations! Your Elasticsearch cluster will instantly be up and running, and you will be automatically routed to your cluster dashboard. Viewing Your Cluster When you visit your dashboard, you’ll see your new cluster listed: At a glance, you will see the cluster, the region in which it’s located, the plan that it has, and some indicators about usage. Initially, these indicators will be empty because you haven’t used the cluster yet. Click on the new cluster to bring up the cluster dashboard. You’ll be greeted with an overview page that gives you a ton of information about your cluster: The cluster dashboard is covered in detail in Exploring Your Cluster Dashboard, but for now, just look at the “Full-Access URL.” This URL is a pointer to your Elasticsearch cluster. It has a form like: Credentials are not your login The username/password credentials for your Elasticsearch cluster are not the same credentials you use to log in to Bonsai. The cluster credentials are randomly generated pairs, like “pthstckm4:jj5i6zh3j” Bonsai clusters are configured to support HTTPS and HTTP Basic Auth out of the box for security. The URL contains a randomly-generated username and password combination that must be included in all requests to the cluster in order for Elasticsearch to process a request. You can test that the cluster is available by using a tool like curl: $ curl "" { "name": "e_dJ9r" } HTTP 401: Authentication required If you’re seeing this message, then you’re not including the correct authentication credentials in your request. Double check that the request includes the credentials shown in your dashboard and try again. strickland-propane-3917077713” If your URL is leaked somehow, you can regenerate the credentials. See Exploring Your Cluster Dashboard for more information. Now that you have created your cluster and verified that it is up and running, you can take a look at your cluster in greater detail. Managing Your Account Need to change your email, password, update a credit card, or provide access to your clusters to a coworker? Use the sections below to manage all things account-related. To access your Account Settings, click on the upper right-hand dropdown and select Account Settings. Account Profile Your profile – both for you and your organization – provides basic details to help us serve you best. Providing some basic contact information also helps to ensure we can reach out in the event that something goes wrong Account Password You can change your login password here. You will need your current password in order to create a new password. Adding Team Members Bonsai accounts support multiple team members, each with specialized roles. Account owners can invite new users to join the organization. At least one Billing and one Admin role are needed for each account, so the first user in the account defaults to having both the Admin and Billing role. To add a new team member, check the boxes for the role(s) you would like that user to have, provide their email address and click on “Send Team Invite.” After sending the invite, you will see the invitee listed along with their roles. You can also remove a team member from your organization by clicking on the Remove button. You will not be able to remove the last Admin or last Billing member on the account, as at least one of those roles is needed. If there was a mistake, or a role needs to change, you can click on the pencil icon to edit the user’s role. This will allow you to update their role(s), which will take effect immediately. Update Billing If you haven’t added a credit card yet, or would like to update your payment profile, you can do so under the Billing tab. Click on the “Add a credit card” button to add a credit card. That will bring up a new form: Fill the details out and click Save to proceed. Your Financial Data, Secured. When you add a credit card in Bonsai, the information is encrypted and passed to our payment processor. We have verified that this service is Level 1 PCI Compliant. This level of compliance is the highest level of security a business can offer. We do not host or store any financial data, and your credit card details can not be accessed by anyone on our team or within the payment processor. You can associate multiple credit cards to your account. If you only have one card, it will automatically be the default. If you have multiple cards, you have the option to update the default and remove non-default cards: To update an existing card, for example, if you have a new expiration date, simply click on “Add a credit card,” and enter the new details. Save it, then set it to be the new default. Then you can delete the older card. Coupon Codes If you have a coupon code, you can add it in the Billing section. Simply enter the code and click on “Add coupon code”: If the code is accepted, it will be listed, along with a description of what it does: If the code is not valid, there will be an error message: If you have a code that isn’t working but should, shoot us an email at [email protected] and we’ll check it out. Sessions This screen shows all the locations where your account is logged in. If you would like to revoke access to a session, simply click on the “Revoke” button next to the desired section. This will force the browser at that location to require a new login. The Current session is the one you’re using to view this screen. It can’t be revoked, but you can log out, which will accomplish the same thing. Create a New Account You can create a new account from the upper right-hand dropdown by selecting Add Account: Give your new account a name and click Create Account: You will then be prompted to create your free sandbox cluster in the new account. You can switch between your accounts from the drop-down menu: Cancel Account If you want to cancel your account, you’ll need to first make sure that any active clusters you have are deprovisioned first. If you have any active clusters on your account, you’ll see a notice like this:.
https://docs.bonsai.io/article/192-using-bonsai-io
2019-06-16T01:34:49
CC-MAIN-2019-26
1560627997508.21
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6cb5042863543ccd30fd/file-V5sedm6W1P.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6cde2c7d3a66e32ea86d/file-ZGR0vDByUq.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6d0a2c7d3a66e32ea872/file-OdzbvGhiSR.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5cb757fd0428631459c0d109/file-2IiyuS0P8x.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6d5b042863543ccd3109/file-pPm3qohq1B.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6d6d042863543ccd310c/file-nDCSWkBuy5.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6d85042863543ccd310d/file-KXOcnAsFjq.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6daf042863543ccd3111/file-boygKSz2e0.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6dcc042863543ccd3113/file-Sb1onFp6Xw.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6ee32c7d3a66e32ea8a2/file-TF4kNWL0Zi.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5cb759110428631459c0d125/file-AqoNWOaHW3.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bd08cb42c7d3a01757a5894/images/5c6c6f11042863543ccd312a/file-VVxRwRyhA6.png', None], dtype=object) ]
docs.bonsai.io
Introduction to a multitenant SaaS app that uses the database-per-tenant pattern with SQL Database The Wingtip SaaS application is a sample multitenant app. The app uses the database-per-tenant SaaS application pattern to service multiple tenants. The app showcases features of Azure SQL Database that enable SaaS scenarios by using several SaaS design and management patterns. To quickly get up and running, the Wingtip SaaS app deploys in less than five minutes. Application source code and management scripts are available in the WingtipTicketsSaaS-DbPerTenant GitHub repo. Before you start, see the general guidance for steps to download and unblock the Wingtip Tickets management scripts. Application architecture The Wingtip SaaS app uses the database-per-tenant model. It uses SQL elastic pools to maximize efficiency. For provisioning and mapping tenants to their data, a catalog database is used. The core Wingtip SaaS application uses a pool with three sample tenants, plus the catalog database. The catalog and tenant servers have been provisioned with DNS aliases. These aliases are used to maintain a reference to the active resources used by the Wingtip application. These aliases are updated to point to recovery resources in the disaster recovery tutorials. Completing many of the Wingtip SaaS tutorials results in add-ons to the initial deployment. Add-ons such as analytic databases and cross-database schema management are introduced. As you go through the tutorials and work with the app, focus on the SaaS patterns as they relate to the data tier. In other words, focus on the data tier, and don't overanalyze the app itself. Understanding the implementation of these SaaS patterns is key to implementing these patterns in your applications. Also consider any necessary modifications for your specific business requirements. SQL Database Wingtip SaaS tutorials After you deploy the app, explore the following tutorials that build on the initial deployment. These tutorials explore common SaaS patterns that take advantage of built-in features of SQL Database, Azure SQL Data Warehouse, and other Azure services. Tutorials include PowerShell scripts with detailed explanations. The explanations simplify understanding and implementation of the same SaaS management patterns in your applications. Next steps Feedback Send feedback about:
https://docs.microsoft.com/en-us/azure/sql-database/saas-dbpertenant-wingtip-app-overview
2019-06-16T00:42:40
CC-MAIN-2019-26
1560627997508.21
[array(['media/saas-dbpertenant-wingtip-app-overview/app-architecture.png', 'Wingtip SaaS architecture'], dtype=object) ]
docs.microsoft.com
How to: Localize an Application This tutorial explains how to create a localized application by using the LocBaml tool. Note The LocBaml tool is not a production-ready application. It is presented as a sample that uses some of the localization APIs and illustrates how you might write a localization tool. Overview Build and Deploy. the LocBaml Tool Sample. Develop your application to the point where you want to start localization. Specify the development language in the project file so that MSBuild generates a main assembly and a satellite assembly (a file with the .resources.dll extension) to contain the neutral language resources. After running updateuid, your files should contain Uids. For example, in the Pane1.xaml file of HelloApp, you should find the following: <StackPanel x: <TextBlock x:Hello World</TextBlock> <TextBlock x:Goodbye World</TextBlock> </StackPanel> Create the Neutral Language Resources Satellite Assembly After the application is configured to generate a neutral language resources satellite assembly, you build the application. This generates the main application assembly, as well as the neutral language resources satellite assembly that is required by LocBaml for localization. To build the application: Compile HelloApp to create a dynamic-link library (DLL): msbuild helloapp.csproj The newly created main application assembly, HelloApp.exe, is created in the following folder: C:\HelloApp\Bin\Debug\ The newly created neutral language resources satellite assembly, HelloApp.resources.dll, is created in the following folder: C:\HelloApp\Bin\Debug\en-US\ Build the LocBaml Tool All the files necessary to build LocBaml are located in the WPF samples. Download the C# files from the. Note If you need a list of the options when you are running the tool, type LocBaml.exe and press ENTER. Use LocBaml to Parse a File Now that you have created the LocBaml tool, you are ready to use it to parse HelloApp.resources.dll to extract the text content that will be localized. Copy LocBaml.exe to your application's bin\debug folder, where the main application assembly was created. To parse the satellite assembly file and store the output as a .csv file, use the following command: LocBaml.exe /parse HelloApp.resources.dll /out:Hello.csv Note If the input file, HelloApp.resources.dll, is not in the same directory as LocBaml.exe move one of the files so that both files are in the same directory. When you run LocBaml to parse files, the output consists of seven fields delimited by commas (.csv files) or tabs (.txt files). The following shows the parsed .csv file for the HelloApp.resources.dll: The seven fields are: BAML Name. The name of the BAML resource with respect to the source language satellite assembly. Resource Key. The localized resource identifier. Category. The value type. See Localization Attributes and Comments. Readability. Whether the value can be read by a localizer. See Localization Attributes and Comments. Modifiability. Whether the value can be modified by a localizer. See Localization Attributes and Comments. Comments. Additional description of the value to help determine how a value is localized. See Localization Attributes and Comments. Value. The text value to translate to the desired culture. The following table shows how these fields map to the delimited values of the .csv file: Notice that all the values for the Comments field contain no values; if a field doesn't have a value, it is empty. Also notice that the item in the first row is neither readable nor modifiable, and has "Ignore" as its Category value, all of which indicates that the value is not localizable. To facilitate discovery of localizable items in parsed files, particularly in large files, you can sort or filter the items by Category, Readability, and Modifiability. For example, you can filter out unreadable and unmodifiable values.). LocBaml.exe /generate HelloApp.resources.dll /trans:Hello.csv /out:c:\ /cul:en-US Note If the input file, Hello.csv, is not in the same directory as the executable, LocBaml.exe, move one of the files so that both files are in the same directory. Replace the old HelloApp.resources.dll file in the C:\HelloApp\Bin\Debug\en-US\HelloApp.resources.dll directory with your newly created HelloApp.resources.dll file. "Hello World" and "Goodbye World" should now be translated in your application. To translate to a different culture, use the culture of the language that you are translating to. The following example shows how to translate to French-Canadian: LocBaml.exe /generate HelloApp.resources.dll /trans:Hellofr-CA.csv /out:c:\ /cul:fr-CA In the same assembly as the main application assembly, create a new culture-specific folder to house the new satellite assembly. For French-Canadian, the folder would be fr-CA. Copy the generated satellite assembly to the new folder. To test the new satellite assembly, you need to change the culture under which your application will run. You can do this in one of two ways: Change your operating system's regional settings (Start | Control Panel | Regional and Language Options). In your application, add the following code to App.xaml.cs: <Application xmlns="" xmlns: </Application> using System.Windows; using System.Globalization; using System.Threading; namespace SDKSample { public partial class App : Application { public App() { // Change culture under which this application runs CultureInfo ci = new CultureInfo("fr-CA"); Thread.CurrentThread.CurrentCulture = ci; Thread.CurrentThread.CurrentUICulture = ci; } } } Imports System.Windows Imports System.Globalization Imports System.Threading Namespace SDKSample Partial Public Class App Inherits Application Public Sub New() ' Change culture under which this application runs Dim ci As New CultureInfo("fr-CA") Thread.CurrentThread.CurrentCulture = ci Thread.CurrentThread.CurrentUICulture = ci End Sub End Class End Namespace. See also Feedback Send feedback about:
https://docs.microsoft.com/en-us/dotnet/framework/wpf/advanced/how-to-localize-an-application
2019-06-16T00:49:35
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
RadDataForm - Provide the Source This article will guide you through the process of adding a RadDataForm instance to a page in a {N} + Angular. Example 1: Declare the object that we will use as a source for RadDataForm export class Person { public name: string; public age: number; public email: string; public city: string; public street: string; public streetNumber: number; constructor(name: string, age: number, email: string, city: string, street: string, streetNumber: number) { this.name = name; this.age = age; this.email = email; this.city = city; this.street = street; this.streetNumber = streetNumber; } } Installation Run the following command to add the plugin to your application: tns plugin add nativescript-ui-dataform Add RadDataForm to the Page Before proceeding, make sure that the NativeScriptUIDataFormModule from the nativescript-ui-dataform plugin has been imported in an ngModule in your app as explained here. After that simply add the RadDataForm tag to the HTML and set its source accordingly: Example 2: Add RadDataForm to a page <RadDataForm tkExampleTitle tkToggleNavButton [source]="person"></RadDataForm> Note the data binding of the source property of RadDataForm to the person property of our component. Let's add this property in the @Component '.ts' file and initialize it in the ngOnInit method: Example 3: Define the property used for binding import { Component, OnInit } from "@angular/core"; import { Person } from "../data-services/person"; @Component({ moduleId: module.id, selector: "tk-dataform-getting-started", templateUrl: "dataform-getting-started.component.html" }) export class DataFormGettingStartedComponent implements OnInit { private _person: Person; constructor() { } ngOnInit() { this._person = new Person("John", 23, "[email protected]", "New York", "5th Avenue", 11); } get person(): Person { return this._person; } } for Angular repo on GitHub. You will find these and many other practical examples with NativeScript UI. Related articles you might find useful:
https://docs.nativescript.org/angular/ui/professional-ui-components/ng-DataForm/GettingStarted/dataform-start-source
2019-06-16T01:25:38
CC-MAIN-2019-26
1560627997508.21
[array(['../../../img/ns_ui/dataform-start-source-android.png', 'DataForm in Android NativeScriptUI-DataForm-Getting-Started-Android'], dtype=object) array(['../../../img/ns_ui/dataform-start-source-ios.png', 'DataForm in iOS NativeScriptUI-DataForm-Getting-Started-iOS'], dtype=object) ]
docs.nativescript.org
Configure CloudWatch Log inputs for the Splunk Add-on for AWS Splunk strongly recommends against using the CloudWatch Logs inputs to collect VPC Flow Logs data (source type: aws:cloudwatchlogs:vpcflow) since the input type will be deprecated in upcoming releases. Configure Kinesis inputs to collect VPC Flow Logs instead. The add-on includes index-time logic to perform the correct knowledge extraction for these events through the Kinesis input as well. Configure a CloudWatch Logs input for the Splunk Add-on for Amazon Web Services on your data collection node through Splunk Web (recommended), or in local/aws_cloudwatch_logs_tasks.conf. - Configure a CloudWatch Logs input using Splunk Web (recommended) - Configure a CloudWatch Logs input using configuration file Configure a CloudWatch Logs input using Splunk Web To configure inputs using Splunk Web, click on Splunk Add-on for AWS in the left navigation bar on Splunk Web home, then choose one of the following menu paths depending on the data type you want to collect: - Create New Input > VPC Flow Logs > CloudWatch Logs. - Create New Input > Others > CloudWatch Logs. Configure a CloudWatch Logs input using configuration file To configure the input using configuration file, create $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_cloudwatch_logs_tasks.conf using the following template. [<name>] account = <value> groups = <value> index = <value> interval = <value> only_after = <value> region = <value> sourcetype = <value> stream_matcher = <value> Here is an example stanza that collects VPC Flow Log data from two log groups. [splunkapp2:us-west-2] account = splunkapp2 groups = SomeName/DefaultLogGroup, SomeOtherName/SomeOtherLogGroup index = default interval = 600 only_after = 1970-01-01T00:00:00 region = us-west-2 sourcetype = aws:cloudwatchlogs:vpcflow stream_matcher = eni.* This documentation applies to the following versions of Splunk® Supported Add-ons: released how can I 'assume role' in menu Cloudwatch Log Input?. If not, how can I pull cloudwatch log if I would like to use 'Assume Role'. When will Kinesis Inputs be support for the AWS GovCloud Region (us-gov-west-1)? I currently get an error that it is not supported when I attempt to add the input. Hi Badrinath, We plan to deprecate the CloudWatch Logs input because AWS doesn't intend that interface for high throughput, and it throttles quickly. Thanks! Hi Team We are consuming data from many custom cloudwatch log groups created for our applications, could you please let us know why this feature would not be supported in future releases and what is the alternate option we have apart from Kinesis Stream. It’s the CloudWatch Logs input that will be deprecated in future releases. The doc has been updated to avoid any ambiguity regarding this. Hi, I want to confirm this.. There are 2 inputs in this add-on - 'Cloudwatch' and 'Cloudwatch Logs'. Which of these 2 input will be deprecated in future releases specifically? Will it be 'Cloudwatch' or 'Cloudwatch Logs'? Dear Vineet, the Delay parameter is not exposed on the configuration UI, but you can view its description in this file: $SPLUNK_HOME/etc/apps/Splunk_TA_aws/README/aws_cloudwatch_logs_tasks.conf.spec Again, it is not advisable to use the CloudWatch Log input since Splunk will no longer support it in future AWS releases. Thanks! Hi Hunter, Thanks for the information. I have another question. What is the purpose for "delay" field? There is not description about this parameter so any information will be useful. Dear Vineet, please note the following: Warning: Splunk strongly recommends against using this modular input since it will be deprecated in upcoming releases. Currently, log stream wildcard is not supported, but wildcard is supported in Cloud Watch metrics and dimensions. Thanks! I see new version 4.1.2 has been released for this app on Nov 19. I upgraded my installation from 4.1.1 to 4.1.2. But still the wildcard for log groups is not accepted. It is real inconvenience to manually add log groups, especially when we have lot of groups to capture data from. Do you have any idea when will it be fixed? Hi there, yes you are right that the delay field cannot be changed via gui. it can be changed by editing the config file manually. My problem is I cannot find any information on what the field actually does. I am having trouble with cloudwatch logs data turning up late and I was looking at the delay field as a possible cause. Thanks for your feedback, Bill! However, seems there is no Delay field in the corresponding UI. Could you please recheck? Thanks! Any information on what the delay field is for, and how can I change it? Hi Michael Wildcards for log group names are not supported in this release (4.1.1), but we plan to add this feature for the next release. I recommend that you can follow this add-on in Splunkbase. You will get notice as soon as the new version released. Actually, if you have large number of VPC flow logs, I recommend you configure them through the Kinesis input, Kinesis has better performance than Cloudwatch logs. Hello, we recently experienced an issue where enabling the Cloudwatch Logs input caused our daily indexing to jump from ~2G a day to nearly 30G a day. The issue appears to be with VPC Flow Logs the default expiration of data is 6 months. Changing this to 1 day fixed our out of control daily indexing issue. Currently, Wild cards for log group names are not accepted. I have tried regex as well as "*". This makes it extremely inconvenient since we have many log group names that appear and disappear a lot. Allowing wild cards would be extremely helpful.. In terms of creating and configuring the IAM roles on the AWS side, see the "IAM Roles" section of the "AWS Identity and Access Management" manual in the AWS documentation. Hopefully this helps, and please feel free to reach out via the "Was this topic useful" tab of this manual.
https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs
2019-06-16T01:14:53
CC-MAIN-2019-26
1560627997508.21
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
scikit-allel - Explore and analyse genetic variation¶ This package provides utilities for exploratory analysis of large scale genetic variation data. It is based on numpy, scipy and other established Python scientific libraries. - GitHub repository: - Documentation: - Download: If you have any questions, find a bug, or would like to suggest a feature, please raise an issue on GitHub. This site provides reference documentation for scikit-allel. For worked examples with real data, see the following articles: Installation¶ This package requires numpy, scipy, matplotlib, seaborn, pandas, scikit-learn, h5py, numexpr, bcolz and petl. Install these dependencies first, then use pip to install scikit-allel: $ pip install -U scikit-allel Contents¶ - Data structures - Statistics and plotting - Input/output utilities - Release notes Acknowledgments¶ Development of this package is supported by the MRC Centre for Genomics and Global Health.
https://scikit-allel.readthedocs.io/en/v0.18.1/
2019-06-16T00:38:07
CC-MAIN-2019-26
1560627997508.21
[]
scikit-allel.readthedocs.io
Env Detection¶ Env detection is the primary feature of Tox-Travis. Based on the matrix created in .travis.yml, it decides which Tox envs need to be run for each Travis job. Usage¶¶ To customize what environments tox will run on Travis, add a section to tox.ini telling it what environments to run under which versions of Python: [tox] envlist = py{27,34}-django{17,18}, docs [travis] python = specified in the Travis build matrix. If you are using multiple Travis factors, then you can use those factors to decide what will run. For example, see the following .travis.yml and tox.ini: sudo: false language: python python: - "2.7" - "3.4" env: - DJANGO="1.7" - DJANGO="1.8" matrix: include: - os: osx language: generic install: pip install tox-travis script: tox [tox] envlist = py{27,34}-django{17,18}, docs [travis] os = linux: py{27,34}-django{17,18}, docs osx: py{27,34}-django{17,18} python = 3.4: py34, docs [travis:env] DJANGO = 1.7: django17 1.8: django18, docs Travis will run 5 different jobs, which will each run jobs as specified by the factors given. os: linux (default), language: python, python: 2.7, env: DJANGO=1.7 This will run the env py27-django17, because py27is the default, and django17is specified. os: linux (default), language: python, python: 3.4, env: DJANGO=1.7 This will run the env py34-django17, but not docs, because docsis not included in the DJANGO 1.7 configuration. os: linux (default), language: python, python: 2.7, env: DJANGO=1.8 This will run the env py27-django18, because py27is the default. docsis not run, because Python 2.7 doesn’t include docsin the defaults that are not overridden. os: linux (default), language: python, python: 3.4, env: DJANGO=1.8 This will run the envs py34-django18and docs, because all specified factors match, and docsis present in all related factors. os: osx, language: generic This will run envs py27-django17, py34-django17, py27-django18, and py34-django18, because the osfactor is present, and limits it to just those envs. Unignore Outcomes¶ By default, when using ignore_outcome in your Tox configuration, any build errors will show as successful on Travis. This might not be desired, as you might want to control allowed failures inside your .travis.yml. To cater this need, you can set unignore_outcomes to True. This will override ignore_outcome by setting it to False for all environments. Configure the allowed failures in the build matrix in your .travis.yml: matrix: allow_failures: - python: 3.6 env: DJANGO=master And in your tox.ini: [travis] unignore_outcomes = True
https://tox-travis.readthedocs.io/en/stable/envlist.html
2019-06-16T00:51:28
CC-MAIN-2019-26
1560627997508.21
[]
tox-travis.readthedocs.io
Search Form Style sets the styling for the interactive locator elements on your site. It uses pre-built jQuery Theme Roller style CSS designs to create a simple way to style interactive elements such as the autocomplete feature (available in WPSLP Experience or with MySLP Professional) on the address box. The default styling is set to “None” which provides not special styling rules from interactive JavaScript elements in the locator interface. This allows the site designer to create their own styles for the website in which the locator will be placed. The base service also includes the Base jQuery theme. Setting A New Style For WordPress plugin users go to the Store Locator Plus selection on the sidebar menu. For MySLP users go to Advanced Options. Select Settings from the tab list. Search will be the default sub-tab. Scroll down to the Appearance section and expand it by clicking the word “Appearance” if necessary. Select a new style from the Search Form Style drop-down menu. Available For WPSLP and MySLP Search Form Style is a feature that is included in the base plugin of Store Locator Plus for WordPress and is available under Advanced Options for all levels of the MySLP service. Premier members on WPSLP and Enterprise Level users for MySLP will have multiple options for the Search Form Style.
https://docs.storelocatorplus.com/blog/tag/search/
2017-06-22T18:40:01
CC-MAIN-2017-26
1498128319688.9
[array(['https://i1.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/05/Search-Form-Style-Default-None-2017-05-23_16-20-30.png?resize=571%2C196&ssl=1', 'Search Form Style Default None 2017-05-23_16-20-30.png'], dtype=object) array(['https://i0.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/05/Selecting-Base-Search-Form-Style.png?resize=701%2C410&ssl=1', 'Selecting Base Search Form Style'], dtype=object) array(['https://i1.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/05/jQuery-Base-default-styling-2017-05-23_16-23-52.png?resize=595%2C182&ssl=1', 'jQuery Base default styling 2017-05-23_16-23-52.png'], dtype=object) ]
docs.storelocatorplus.com
Gulp Command gulp - Build Monster UI Synopsis gulp [--pro=<name>] gulp build-dev [--pro=<name>] gulp build-prod [--pro=<name>] Description Start running tasks to build the project for a development or production environment, the result of which is located in the dist folder, at the root level of the project. Commands gulp Compile SCSS to CSS, launch Web server (browsersync) and serve project at, include a CSS watcher that make changes immediate in the browser (livereload), UI reloads automatically on file save. gulp build-dev Only compile SCSS to CSS. gulp build-prod Compile SCSS to CSS, merge all templates in a single templates.js file, run require, minify main.js and templates.js and minify CSS files. Options --pro=<name> Some applications might have a pro version. If that is the case and you want to build it with the 'pro' assets, you need to set the pro option and specify the name of the application.
https://docs.2600hz.com/ui/docs/gulpCommand/
2017-06-22T18:19:25
CC-MAIN-2017-26
1498128319688.9
[]
docs.2600hz.com
- Security > - Authentication > - Authentication Mechanisms > - x.509 > -. Prerequisites¶ Important A full description of TLS/SSL, PKI (Public Key Infrastructure) certificates, in particular x.509 certificates, and Certificate Authority is beyond the scope of this document. This tutorial assumes prior knowledge of TLS/SSL as well as access to valid x.509 certificates. Client x.509 Certificate¶ The client certificate must have the following properties: A single Certificate Authority (CA) must issue the certificates for both the client and the server. Client certificates must contain the following fields: keyUsage = digitalSignature extendedKeyUsage = clientAuth. Procedures¶ Configure Replica Set/Sharded Cluster¶ and mongos instances. Note If you are configuring a standalone mongod, omit the --clusterAuthMode option. Use Command-line Options¶ You can configure the MongoDB server from the command line, e.g.: mongod --clusterAuthMode x509 --sslMode requireSSL --sslPEMKeyFile <path to SSL certificate and key PEM file> --sslCAFile <path to root CA PEM file>. Use Configuration File¶ You may also specify these options in the configuration file. Starting in MongoDB 2.6, you can specify the configuration for MongoDB in YAML format, e.g.: security: clusterAuthMode: x509 net: ssl: mode: requireSSL PEMKeyFile: <path to TLS/SSL certificate and key PEM file> CAFile: <path to root CA PEM file> For backwards compatibility, you can also specify the configuration using the older configuration file format, e.g.: clusterAuthMode = x509 sslMode = requireSSL sslPEMKeyFile = <path to TLS/SSL certificate and key PEM file> sslCAFile = <path to the root CA PEM file> Include any additional options, TLS/SSL or otherwise, that are required for your specific configuration. Add x.509 Certificate subject as a User. You can retrieve the RFC2253formatted subjectfrom the client certificate with the following command: openssl x509 -in <pathToClient PEM> -inform PEM -subject -nameopt RFC2253 The command returns the subjectstring as well as certificate: subject= CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry -----BEGIN CERTIFICATE----- # ... -----END CERTIFICATE----- Add the RFC2253compliant value of the subjectas a user. Omit spaces as needed. For example, in the mongoshell, to add the user with both the readWriterole in the testdatabase and the userAdminAnyDatabaserole which is defined only in the admindatabase: Manage Users and Rolesongoshell to the mongodset up for SSL: mongo --ssl --sslPEMKeyFile <path to CA signed client PEM file> --sslCAFile <path to root CA PEM file> To perform the authentication, use the db.auth()method in the $externaldatabase. For the mechanismfield, specify "MONGODB-X509", and for the userfield, specify the user, or the subject, that corresponds to the client certificate. db.getSiblingDB("$external").auth( { mechanism: "MONGODB-X509", user: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" } )
https://docs.mongodb.com/v3.4/tutorial/configure-x509-client-authentication/
2017-06-22T18:29:14
CC-MAIN-2017-26
1498128319688.9
[]
docs.mongodb.com
Kurento Java Tutorial - RTP Receiver¶ This web application consists of a simple RTP stream pipeline: an RtpEndpoint is configured in KMS to listen for one incoming video stream. This stream must be generated by an external program. Visual feedback is provided in this page, by connecting the RtpEndpoint to a WebRtcEndpoint in receive-only mode. The Java Application Server connects to all events emitted from KMS and prints log messages for each one, so this application is also a good reference to understand what are those events and their relationship with how KMS works. Check Endpoint Events for more information about events that can be emitted by KMS. Note This application uses HTTPS. It will work fine is you run it in localhost and accept a security exception in the browser, but you should secure your application if running remotely. For more info, check Configure a Java server to use HTTPS. Quick start¶ Follow these steps to run this demo application: Install and run Kurento Media Server: Installation Guide. Install Java JDK and Maven: sudo apt-get update sudo apt-get install default-jdk maven Run these commands: git clone cd kurento-tutorial-java/kurento-rtp-receiver/ git checkout 6.10.0 mvn -U clean spring-boot:run -Dkms.url=ws://localhost:8888/kurento Open the demo page with a WebRTC-compliant browser (Chrome, Firefox): Click on Start to begin the demo. Copy the KMS IP and Port values to the external streaming program. As soon as the external streaming program starts sending RTP packets to the IP and Port where KMS is listening for incoming data, the video should appear in the page. Click on Stop to finish the demo. Understanding this example¶ To implement this behavior we have to create a Media Pipeline, composed of an RtpEndpoint and a WebRtcEndpoint. The former acts as an RTP receiver, and the later is used to show the video in the demo page. This is a web application, and therefore it follows a client-server architecture. At the client-side, the logic is implemented in JavaScript. At the server-side, we use a Spring-Boot based application server consuming the Kurento Java Client API, to control Kurento Media Server capabilities. All in all, the high level architecture of this demo is three-tier. To communicate these entities, two WebSockets channels are used: - A WebSocket is created between the Application Server and the browser client, to implement a custom signaling protocol. - Another WebSocket is used to perform the communication between the Application Server and the Kurento Media Server. For this, the Application Server uses the Kurento Java Client library. This communication takes place using the Kurento Protocol (see Kurento Protocol). The complete source code for this tutorial can be found in GitHub. Application Server Logic¶ This demo has been developed using Java in the server side, based on the Spring Boot framework, which embeds a Tomcat web server within the resulting program, and thus simplifies the development and deployment process. Note You can use whatever Java server side technology you prefer to build web applications with Kurento. For example, a pure Java EE application, SIP Servlets, Play, Vert.x, etc. Here we chose Spring Boot for convenience. This graph shows the class diagram of the Application Server: Client-Side Logic¶ We use a specific Kurento JavaScript library called kurento-utils.js to simplify the WebRTC interaction between browser and application server. This library depends on adapter.js, which is a JavaScript WebRTC utility maintained by Google that abstracts away browser differences. These libraries are linked in the index.html page, and are used in the index.js file.
https://doc-kurento.readthedocs.io/en/stable/tutorials/java/tutorial-rtp-receiver.html
2019-04-18T15:36:01
CC-MAIN-2019-18
1555578517682.16
[]
doc-kurento.readthedocs.io
To prepare hosts to participate in NSX-T, you can manually install NSX-T kernel modules on Ubuntu KVM hosts. This allows you to build the NSX-T control-plane and management-plane fabric. NSX-T kernel modules packaged in DEB files run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and bridging capabilities. You can download the NSX-T DEBs manually and make them part of the host image. Be aware that download paths can change for each release of NSX-T. Always check the NSX-T downloads page to get the appropriate DEBs. Prerequisites Verify that the required third-party packages are installed. See Install Third-Party Packages on a KVM Host. Procedure - Log in to the host as a user with administrative privileges. - (Optional) Navigate to the /tmp directory. cd /tmp - Download and copy the nsx-lcp file into the /tmp directory. - Untar the package. tar -xvf nsx-lcp-<release>-ubuntu-trusty_amd64.tar.gz - Navigate to the package directory. cd nsx-lcp-trusty_amd64/ - Install the packages. sudo dpkg -i *.deb - Reload the OVS kernel module. /etc/init.d/openvswitch-switch force-reload-kmod If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is configured. You can manually stop the old dhclient process on the network interface and restart a new dhclient process on that interface. - To verify, you can run the dpkg –l | grep nsx command. user@host:~$ dpkg -l | grep nsx ii nsx-agent <release> amd64 NSX Agent ii nsx-aggservice <release> all NSX Aggregation Service Lib ii nsx-cli <release> all NSX CLI ii nsx-da <release> amd64 NSX Inventory Discovery Agent ii nsx-host <release> all NSX host meta package ii nsx-host-node-status-reporter <release> amd64 NSX Host Status Reporter for Aggregation Service ii nsx-lldp <release> amd64 NSX LLDP Daemon ii nsx-logical-exporter <release> amd64 NSX Logical Exporter ii nsx-mpa <release> amd64 NSX Management Plane Agent Core ii nsx-netcpa <release> amd64 NSX Netcpa ii nsx-sfhc <release> amd64 NSX Service Fabric Host Component ii nsx-transport-node-status-reporter <release> amd64 NSX Transport Node Status Reporter ii nsxa <release> amd64 NSX L2 Agent Any errors are most likely caused by incomplete dependencies. The apt-get install -f command can attempt to resolve dependencies and re-run the NSX-T installation. What to do next Add the host to the NSX-T management plane. See Join the Hypervisor Hosts with the Management Plane.
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.0/com.vmware.nsxt.install.doc/GUID-3FACAE6B-ADCD-40AD-B25A-EF0ACAF8A24B.html
2019-04-18T14:51:26
CC-MAIN-2019-18
1555578517682.16
[]
docs.vmware.com
In addition to our pre-configured checks, you can build your own custom rules around the unique fraud risks faced by your business. These allow you to define which attributes and behaviours will result in a payment being either refused, or set aside for a Manual Review. Here we describe the steps required for building your own custom risk rules: You can also check out an example of a custom risk rule. Step 1: Enable custom risk fields First, you'll need to enable the fields you'll use in your custom risk rules. - Log in to your Customer Area. - From your company-level account, navigate to Risk > Custom risk fields. - Turn on Show inactive to view all the risk fields you can use. - Toggle on the fields that you'll use in your custom risk rules. You can include payments, risk data, basket, promotional, or airline fields that you're sending with each payment. Step 2: Create a rule Next you'll want to create a custom risk rule. Each rule needs at least one condition, containing: - A Field Name. This is one of the fields you are submitting with the payment (for example, the amount of the transaction). - An Operator. This compares the Field Name to the Field Value (for example, GREATER THAN (>)). - A Field Value. This is the criteria you'll use to trigger your rule (for example, 40000). You can also build more customized rules by joining multiple conditions together, using AND and OR. To create a custom risk rule: - Go to your Customer Area, and select a merchant-level account. - Navigate to Risk > Risk profiles. - Under Custom Rules, click + New Custom Risk Rule. - Enter a Rule Name. For each condition select a Field Name and Operator, and enter a Field Value. You can enter multiple Field Values in a condition by separating them with a comma. - Add any additional conditions to the rule by clicking AND or OR. - Click Save. Step 3: Assign an action Finally, you'll want to choose what action is taken when your rule is triggered. You can either refuse the transaction, or send it to Case Management for a manual review. Refuse the transaction To refuse the transaction, you'll need to assign a fraud score to your rule. Enter a value under Score. When your rule is triggered, this is how much the fraud score will increase for the transaction. The transaction will be refused if the overall fraud score is 100 or higher. For more information on fraud scores, see How does the fraud score work? Send transaction to manual review To forward the transaction for manual review, turn on Override. For more information on how to manually review transactions, see Case Management - Manual Review. Example Below is an example of a custom risk rule. It will be triggered when someone attempts to purchase flight tickets that are either: - one-way (numberOfLegs | == | 1) - AND to a destination that is Lagos or Abuja (destination_code | IS IN | LOS,ABV). - AND business class (class_of_travel | == | Business) OR - one-way (numberOfLegs | == | 1) - AND to a destination that is Lagos, Abuja, or Port Harcourt (destination_code | IS IN | LOS,ABV,PHC). - AND with a price above 1,000.00 (amount | > | 100000) - AND is in Euros, Pounds, or US Dollars (currency | IS IN | EUR,GBP,USD)
https://docs.adyen.com/developers/risk-management/revenueprotect-engine-rules/custom-risk-rules
2019-04-18T14:40:41
CC-MAIN-2019-18
1555578517682.16
[array(['/developers/files/36177586/47482326/1/1549640942623/create-risk-rule-in-customer-area.png', None], dtype=object) array(['/developers/files/36177586/47482327/1/1549640942635/assign-action-to-risk-rule-in-customer-area.png', None], dtype=object) array(['/developers/files/36177586/47482328/1/1549640942648/custom-risk-rules-example.png', None], dtype=object) ]
docs.adyen.com
openpyxl - A Python library to read/write Excel 2010 xlsx/xlsm files¶ Introduction¶ Openpyxl is a Python library for reading and writing. Support¶ This is an open source project, maintained by volunteers in their spare time. This may well mean that particular features or functions that you would like are missing. But things don’t have to stay that way. You can contribute the project Development yourself or contract a developer for particular features. Professional support for openpyxl is available from Clark Consulting & Research and Adimian. Donations to the project to support further development and maintenance are welcome. Bug reports and feature requests should be submitted using the issue tracker. Please provide a full traceback of any error you see and if possible a sample file. If for reasons of confidentiality you are unable to make a file publicly available then contact of one the developers. Sample code:¶, changes without tests will not be accepted.) There are plenty of examples in the source.5, so if it does not work on your environment, let us know :-) Installation¶ Install openpyxl using pip. It is advisable to do this in a Python virtualenv without system packages: $ pip install openpyxl Note There is support for the popular lxml library which will be used if it is installed. This is particular useful when creating large files. Warning To be able to include images (jpeg, png, bmp,...) into an openpyxl file, you will also need the “pillow” library that can be installed with: $ pip install pillow or browse, pick the latest version and head to the bottom of the page for Windows binaries. Information for Developers¶ - Development - Testing on Windows API Documentation¶ Indices and tables¶ Release Notes¶ - 2.4.8 (2017-05-30) - 2.4.7 (2017-04-24) - 2.4.6 (2017-04-14) - 2.4.5 (2017-03-07) - 2.4.4 (2017-02-23) - 2.4.3 (unreleased) - 2.4.2 (2017-01-31) - 2.4.1 (2016-11-23) - 2.4.0 (2016-09-15) - 2.4.0-b1 (2016-06-08) - 2.4.0-a1 (2016-04-11) - 2.3.5 (2016-04-11) - 2.3.4 (2016-03-16) - 2.3.3 (2016-01-18) - 2.3.2 (2015-12-07) - 2.3.1 (2015-11-20) - 2.3.0 (2015-10-20) - 2.3.0-b2 (2015-09-04) - 2.3.0-b1 (2015-06-29) - 2.2.6 (unreleased) - 2.2.5 (2015-06-29) - 2.2.4 (2015-06-17) - 2.2.3 (2015-05-26) - 2.2.2 (2015-04-28) - 2.2.1 (2015-03-31) - 2.2.0 (2015-03-11) - 2.2.0-b1 (2015-02-18) - 2.1.5 (2015-02-18) - 2.1.4 (2014-12-16) - 2.1.3 (2014-12-09) - 2.1.2 (2014-10-23) - 2.1.1 (2014-10-08) - 2.1.0 (2014-09-21) - 2.0.5 (2014-08-08) - 2.0.4 (2014-06-25) - 2.0.3 (2014-05-22) - 2.0.2 (2014-05-13) - 2.0.1 (2014-05-13) brown bag -)
http://openpyxl.readthedocs.io/en/default/index.html
2017-07-20T18:43:41
CC-MAIN-2017-30
1500549423320.19
[]
openpyxl.readthedocs.io
The Platform Selection page of the Internet Explorer Customization Wizard 11 lets you pick the operating system and architecture (32-bit or 64-bit) for the devices on which you’re going to install the custom installation package. To use the Platform Selection page Pick the operating system and architecture for the devices on which you’re going to install the custom package. You must create individual packages for each supported operating system. Note To keep your settings across several operating system packages, you can specify the same destination folder. Then, after running the wizard, you can reuse the resulting .ins file. Any additional changes to the .ins file are saved. For more info about using .ins files, see Using Internet Settings (.INS) files with IEAK 11. For more info about adding in your .ins file, see Use the File Locations page in the IEAK 11 Wizard. Click Next to go to the Language Selection page or Back to go to the File Locations page.
https://docs.microsoft.com/en-us/internet-explorer/ie11-ieak/platform-selection-ieak11-wizard
2017-07-20T19:59:26
CC-MAIN-2017-30
1500549423320.19
[]
docs.microsoft.com
StorageFileQueryResult StorageFileQueryResult StorageFileQueryResult Class Definition Provides access to the results of a query of the files in the location that is represented by a storageFolder object. You can use storageFileQueryResult to enumerate the files in that storageFolder location. public : sealed class StorageFileQueryResult : IStorageFileQueryResult, IStorageFileQueryResult2, IStorageQueryResultBase public sealed class StorageFileQueryResult : IStorageFileQueryResult, IStorageFileQueryResult2, IStorageQueryResultBase Public NotInheritable Class StorageFileQueryResult Implements IStorageFileQueryResult, IStorageFileQueryResult2, IStorageQueryResultBase - Attributes - Examples This example demonstrates how to get a list of files from a storageFileQueryResult object. // Set query options with filter and sort order for results List<string> fileTypeFilter = new List<string>(); fileTypeFilter.Add(".jpg"); fileTypeFilter.Add(".png"); fileTypeFilter.Add(".bmp"); fileTypeFilter.Add(".gif"); var queryOptions = new QueryOptions(CommonFileQuery.OrderByName, fileTypeFilter); // Create query and retrieve files var query = KnownFolders.PicturesLibrary.CreateFileQueryWithOptions(queryOptions); IReadOnlyList<StorageFile> fileList = await query.GetFilesAsync(); // Process results foreach (StorageFile file in fileList) { // Process file } //) { // Process file }); }); The query variable gets the storageFileQueryResult that is used to retrieve files. You can get a storageFileQueryResult object by calling the following methods from a storageFolder or a folderInformation object: - storageFolder.createFileQuery methods - storageFolder.createFileQueryWithOptions method - folderInformation.createFileQuery methods - folderInformation.createFileQueryWithOptions Properties Gets the folder that was queried to create the StorageFileQueryResult object. This folder represents the scope of the query. public : StorageFolder Folder { get; } public StorageFolder Folder { get; } Public ReadOnly Property Folder As StorageFolder The original folder. - Attributes - Methods - newQueryOptions - QueryOptions QueryOptions QueryOptions The new query options. - Attributes - Remarks This method causes the OptionsChanged event to fire. When this method returns, subsequent calls to GetFilesAsync or GetItemCountAsync will reflect the results of the new QueryOptions. Retrieves the index of the file from the query results that most closely matches the specified property value (or file, if used with FileActivatedEventArgs.NeighboringFilesQuery ). The property that is matched is determined by the first SortEntry of the QueryOptions.SortOrder list. public : IAsyncOperation<unsigned short> FindStartIndexAsync(PlatForm::Object value) public IAsyncOperation<uint> FindStartIndexAsync(Object value) Public Function FindStartIndexAsync(value As Object) As IAsyncOperation( Of uint ) - value - PlatForm::Object Object Object The property value to match when searching the query results. The property to that is used to match this value is the property in the first SortEntry of the QueryOptions.SortOrder list. Or, the file to match when searching with FileActivatedEventArgs.NeighboringFilesQuery. When this method completes successfully, it returns the index of the matched file in the query results or the index of the file in the FileActivatedEventArgs.NeighboringFilesQuery. In the latter case, the file is expected to be sourced from FileActivatedEventArgs.Files. If this function fails, it returns uint.MaxValue. - Attributes - Examples This example shows how to find the first song in an album that has a title beginning with the "R" in a set of query results that contains songs grouped by albumFileQueryResult queryResult = musicFolder.CreateFileFileQueryWithOptions(queryOptions); var firstIndex = queryResult.findStartIndexAsync("R"); Remarks You can use this method in conjunction with FileActivatedEventArgs.NeighboringFilesQuery to iterate between neighboring files while preserving the original view's sort order. Retrieves the query options used to determine query results. public : QueryOptions GetCurrentQueryOptions() public QueryOptions GetCurrentQueryOptions() Public Function GetCurrentQueryOptions() As QueryOptions The query options. - Attributes - Retrieves a list of all the files in the query result set. public : IAsyncOperation<IVectorView<StorageFile>> GetFilesAsync() public IAsyncOperation<IReadOnlyList<StorageFile>> GetFilesAsync() Public Function GetFilesAsync() As IAsyncOperation( Of IReadOnlyListStorageFile ) When this method completes successfully, it returns a list (type IVectorView ) of files that are represented by storageFile objects. - Attributes - - See Also - Retrieves a list of files in a specified range. public : IAsyncOperation<IVectorView<StorageFile>> GetFilesAsync(unsigned int startIndex, unsigned int maxNumberOfItems) public IAsyncOperation<IReadOnlyList<StorageFile>> GetFilesAsync(UInt32 startIndex, UInt32 maxNumberOfItems) Public Function GetFilesAsync(startIndex As UInt32, maxNumberOfItems As UInt32) As IAsyncOperation( Of IReadOnlyListStorageFile ) - startIndex - unsigned int UInt32 UInt32 The zero-based index of the first file to retrieve. This parameter is 0 by default. - maxNumberOfItems - unsigned int UInt32 UInt32 The maximum number of files to retrieve. Use -1 to retrieve all files. If the range contains fewer files than the max number, all files in the range are returned. When this method completes successfully, it returns a list (type IVectorView ) of files that are represented by storageFile objects. - Attributes - Remarks Use this overload to improve system performance by presenting a virtualized view of the query results that includes only the necessary subset of files. For example, if your app displays many files in a gallery you could use this range to retrieve only the files that are currently visible to the user. - See Also - Retrieves the number of files in the set of query results. public : IAsyncOperation<unsigned short> GetItemCountAsync() public IAsyncOperation<uint> GetItemCountAsync() Public Function GetItemCountAsync() As IAsyncOperation( Of uint ) When this method completes successfully, it returns the number of files in the location that match the query. - Attributes - - See Also - GetMatchingPropertiesWithRanges(StorageFile) GetMatchingPropertiesWithRanges(StorageFile) GetMatchingPropertiesWithRanges(StorageFile) Gets matching file properties with corresponding text ranges. public : IMap<PlatForm::String, IVectorView<TextSegment>> GetMatchingPropertiesWithRanges(StorageFile file) public IDictionary<string, IReadOnlyList<TextSegment>> GetMatchingPropertiesWithRanges(StorageFile file) Public Function GetMatchingPropertiesWithRanges(file As StorageFile) As IDictionary( Of string, IReadOnlyListTextSegment ) The file to query for properties. The matched properties and corresponding text ranges. - Attributes - Remarks Use this method to implement hit highlighting in your app's query results. Events Fires when a file is added to, deleted from, or modified in the folder being queried. This event only fires after GetFilesAsync has been called at least once. public : event TypedEventHandler ContentsChanged public event TypedEventHandler ContentsChanged Public Event ContentsChanged - Attributes - public event TypedEventHandler OptionsChanged Public Event OptionsChanged - Attributes -
https://docs.microsoft.com/en-us/uwp/api/Windows.Storage.Search.StorageFileQueryResult
2017-07-20T19:42:01
CC-MAIN-2017-30
1500549423320.19
[]
docs.microsoft.com
Customizing BDDfy BDDfy strives to be very extensible: Its core barely has any logic in it all its responsibilities are delegated to extensions, all of which are configurable. For example, if you don't like the reports it generates, you can write your custom reporter in a few lines of code. This section will look at the extensibility points and provide samples of customizing BDDfy.
http://teststackbddfy.readthedocs.io/en/latest/Customizing/
2017-07-20T18:43:29
CC-MAIN-2017-30
1500549423320.19
[]
teststackbddfy.readthedocs.io
vSphere HA clusters enable a collection of ESXi hosts to work together so that, as a group, they provide higher levels of availability for virtual machines than each ESXi host can provide individually. When you plan the creation and usage of a new vSphere HA cluster, the options you select affect the way that cluster responds to failures of hosts or virtual machines. Before you create a vSphere HA cluster, you should know how vSphere HA identifies host failures and isolation and how it responds to these situations. You also should know how admission control works so that you can choose the policy that fits your failover needs. After you establish a cluster, you can customize its behavior with advanced options and optimize its performance by following recommended best practices. You might get an error message when you try to use vSphere HA. For information about error messages related to vSphere HA, see the VMware knowledge base article at.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.avail.doc/GUID-5432CA24-14F1-44E3-87FB-61D937831CF6.html
2017-07-20T18:55:18
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com