content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Announcing the Modern Servicing Model for SQL Server
Background to SQL Server servicing
Historically, we have released Cumulative Updates (CUs) every 2 months after a major version is released, and roughly yearly Service Packs (SPs), containing fixes from all previous CUs, plus any feature completeness or supportability enhancements that may require localization. You can read more about the SQL Server Incremental Servicing Model (ISM) here.
Up to and including SQL Server 2016, RTM and any subsequent SPs establish a new product baseline. For each new baseline, CUs are provided for roughly 12 months after the next SP releases, or at the end of the mainstream phase of product lifecycle, whichever comes first.
For the entire product lifecycle, we release General Distribution Releases (GDRs) when needed, containing only security related fixes.
The Modern.
10/8/2018: Changes made to the above!
- Starting with SQL Server 2017 CU13, CUs will be delivered bi-monthly (every other month) instead of quarterly. CU13 is scheduled for 12/18/2018. We may reevaluate the need to move to a quarterly cadence at year 3 of mainstream support. For more details, please refer to Announcing Updates to the Modern Servicing Model for SQL Server.
Note: the Modern Servicing Model (MSM) only applies to SQL Server 2017 and future versions.
Servicing lifecycle
The servicing lifecycle is unchanged from SQL Server 2016:
- Years 0-5 (Mainstream Support): Security and Functional issue resolution though CUs. Security issues through GDRs.
- Years 6-10 (Extended Support): Security or critical functional issues.
- Years 11-16 (Premium Assurance): Optional paid extension to Extended Support (no scope change).
Having questions is expected. Please read below in case we have already covered it in this FAQ.
Q1: SPs were fully localized, and you released one update file for every supported language. How will this be handled with no SPs?
A1: CUs will be localized starting with SQL 2017. CUs will handle this requirement maintaining a single update file.
Q2: When we upgraded from a previous version of SQL Server, we did so at SP1 using slipstream media provided by Microsoft. How will this work with no SPs?
A2: We will provide CU based slipstream media for CU12 allowing for this.
10/8/2018: Changes made to the above!
- Slipstream media will NOT be provided for SQL Server 2017 CU12, or any subsequent CU. Please see Announcing Updates to the Modern Servicing Model for SQL Server for further details surrounding this change.
-.
Q3: My company always waited for SP1 to perform an upgrade from a previous version. What are my options now?
A3: Even before GA, the final SQL Server 2016 CTP versions were considered production-ready having gone through exhaustive testing both internally and with many preview customers. So there is no need to wait for an SP to install the latest SQL Server – you can install confidently as soon as a given version goes GA.
With that, you can still target any CU for Upgrade. For example, you could target CU12 for upgrade, and optionally utilize the manual slipstream for the upgrade.
Q4: I service an instance only with GDRs. I do not apply CUs, but apply SPs. Will I need to move to a CU servicing train if I need a non-critical/security fix?
A4: Yes. While this was previously true only leading up to SPs, now you must apply latest CU and there will not be an opportunity to reset back to receiving GDR updates only.
Q5: Assume that after Mainstream Support, you release a security fix. Are these going to be GDRs only? If so, how can I install it, if I'm already on a CU servicing train?
A5: During Extended Support, we will release GDRs and GDR-CUs separately. The same is valid for customers that purchase the additional Premium Assurance. required to an SP to receive CUs, or need to worry about which RTM/SP baseline a CU applies to. Additionally, while in mainstream support, it does not matter what CU an instance is at to request a hotfix.
Q7: If I am on RTM baseline, and CU20 (for example) was just released, will I receive technical support?
A7: This may be handled on a case by case basis. If the issue/question is in an area that has received a significant number of updates throughout the years, you may be asked to update to a later CU, yes.
Q8: Will SQL Server on Linux receive CUs and GDRs as well?
A8: Yes, every CU and GDR will have corresponding updates to all current Linux platforms.
Q9: Will CU and GDR KB articles then cover both SQL Server on Windows and Linux?
A9: Yes. Issues addressed in each release will be categorized by impacted platform(s).
Q10: Will SQL Server for Linux CUs and GDRs be updates to an existing installation like SQL Server on Windows?
A10: No, SQL Server on Linux updates will completely replace all binaries in the existing installation.
Q11: On SQL Server on Linux, can I remove an update?
A11: Yes, however this operation is performed by re-installing any desired previous servicing level package.
Q12: Will the testing and resulting quality levels of CUs be the same as SPs?
A12: Yes. CUs for all versions of SQL Server are tested to the same levels of Service Packs. As announced in January 2016, you should plan to install a CU with the same level of confidence you plan to install SPs as they are released. You can read more about that here.
Q13: Monthly CU releases are fast, I do not believe my business can keep pace with this, yet you have been proactively encouraging customers to stay current.
A13: Yes, the cadence is fast for the first 12 months. However, payload will be roughly 50% in theory, so these should be easier to consume. Of course, you still have the option to install every other CU for example, for the first 12 months. As the name suggests, all CUs are cumulative.
Q14: Why release CUs every month only for the first year, then move to a slower release cadence for the remaining 4 years?
A14: Data shows that the vast majority, and severity, of all hotfixes issued for a major release occur in the first 12 months. The monthly cadence brings these fixes to customers much faster when it has the most impact. Reducing the cadence after 12 months reduces customer and operational overhead over the course of the remaining 4 years.
Q15: Will the availability of CUs remain unchanged?
A15: For SQL Server on Windows CUs, no changes are planned. The most recent CU will be available on the Download Center, Windows Catalog, and WSUS. Previous CUs will be available in the Windows Catalog.
Q16: Where will I look for SQL Server on Linux CUs and GDRs?
A16: All updates, current and previous, will be maintained and available in repositories.
Q17: I see that Reporting Services (SSRS) is no longer installed by Setup. Where is it and how will it be serviced?
A17: RS is available for download via a link in Setup. Servicing will be independent moving forward.
Q18: Will the Modern Servicing Model be adopted only for SQL Server 2017 (and later)?
A18: Yes, the Modern Servicing Model (MSM) only applies to SQL Server 2017 and future versions.
Q19: When are CUs available in the Microsoft Update Catalog?
A19: CUs will be offered from Microsoft Update/WSUS and available in the Microsoft Windows Update Catalog within 7 business days following the release of the CU to Download Center. | https://docs.microsoft.com/en-us/archive/blogs/sqlreleaseservices/announcing-the-modern-servicing-model-for-sql-server | 2020-05-25T03:03:41 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
Crate darling [−] [src]
Darling
Darling is a tool for declarative attribute parsing in proc macro implementations.
Design
Darling takes considerable design inspiration from [
serde]. A data structure that can be
read from any attribute implements
FromMetaItem (or has an implementation automatically
generated using
derive). Any crate can provide
FromMetaItem implementations, even one not
specifically geared towards proc-macro authors.
Proc-macro crates should provide their own structs which implement or derive
FromDeriveInput and
FromField
The traits
FromDeriveInput and
FromField support forwarding fields from the input AST directly
to the derived struct. These fields are matched up by identifier before
rename attribute values are
considered. The deriving struct is responsible for making sure the types of fields it does declare match this
table.
A deriving struct is free to include or exclude any of the fields below. | https://docs.rs/darling/0.2.0/darling/ | 2017-11-17T21:28:28 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.rs |
Multi-Tenancy¶
If a site built with the django-SHOP framework shall be used by more than one vendor, we speak about a multi-tenant environment.. Each new product, he adds to the site, is assigned to him. Later on, existing products can only be modified and deleted by the vendor they belong to.
Product Model¶
Since we are free to declare our own product models, This can be achieved by adding a foreign key onto the User model:
from shop.models.product import BaseProduct class Product(BaseProduct): # other product attributes merchant = models.ForeignKey( User, verbose_name=_("Merchant"), limit_choices_to={'is_staff': True}, )
Note
unfinished docs | http://django-shop.readthedocs.io/en/latest/howto/multi-tenancy.html | 2017-11-17T20:54:34 | CC-MAIN-2017-47 | 1510934803944.17 | [] | django-shop.readthedocs.io |
Creating a new connection
How to create a new connection in DataStax Studio.
Prerequisites
A notebook persists its data to a DSE cluster. Unlike Studio 1.0, you do not need create a connection before creating a notebook.
For DSE Graph, the connection will identify a particular graph. For CQL, the connection identifies a database instance. For notebooks using CQL, the keyspace and table can be changed within the notebook. For notebooks using DSE Graph, the connection must be changed to use a different graph.
To create a new connection:
Procedure
- Browse to the URL for the Studio installation.
- In the menu, select the Browse Connections menu item.
The Browse Connections page displays.
- Create the new connection:
You now have a connection for a DataStax Enterprise cluster. Each notebook has only one connection. Each connection can have multiple notebooks.
- Select Add connection (the plus icon in the top center of the page).The Create Connection dialog appears.
- Enter the following information:
This example connects to a single-node DSE cluster on the local host via the default port.
- Name
- My First Connection
- Host/IP
127.0.0.1
- Port
- 9042
- Select Save. | http://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/studio/gs/stdCreatingNewConnection.html | 2017-11-17T20:53:15 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.datastax.com |
Author: SnowSultan
Tools Needed
Hi, and welcome to my latest tutorial. Ever since writing my original anime-style tutorial (also here on DAZ), I have been trying to come up with an easier procedure that didn't require so much postwork. Hopefully, this new method will save you a lot of work and frustration and help you create fun, cartoony Poser renders in much less time! Thanks to some very creative ideas from Stewer, ockham, and others, I've been able to improve this tutorial further and make it even easier than before!
You can use the Z-toon technique on any type of figure in Poser, but I would recommend starting with either Lady Littlefox's Koshini or Ichiro (available for purchase here at DAZ), or the anime figures at Play With Poser (). Vicki and Mike work well too, but you will probably want to use a very simple texture (or even none at all) if you want a more classic cel-shaded look.
The example image above was created using the old Z-toon method and a little postwork in Photoshop. In this tutorial, I'm going to use Koshini and some free clothing by Motsuura to create a more simple example. Let's get started!
If you read the original version of this Z-toon tutorial, you'll remember that we couldn't conform any clothing and had to apply pose Dots to each clothing item to match the figure. No more! Now you can conform figures just as if you were creating a regular Poser scene. Apply the clothing and hair to your figure and try to pose it at this point if you can. Use the Main camera to view your scene, but try not to move the camera or change it's angle too much. Poser's cameras do not work well when creating a Z-toon render, so we will use an alternate method of changing our viewing angle later in this tutorial.
Once your figure is all dressed and posed, it's time for lighting. If you read my original anime tutorial, you'll remember that I recommend using one light in order to keep the shadows simple. We're going to do the same thing in this tutorial too. :) Delete all but one light in your scene and make the remaining one a white Infinite at 100% Intensity. Try to position it like in the example above so that it's near the middle of the Light Control globe and shines directly on the figure. You can try coloring the light or even adding two lights if you have a large scene, but I would recommend starting with one for now.
The next few steps will be new for this tutorial, and may seem confusing at first. In the end however, you will see that a little preparation now makes things much easier later. :)
Open your Props folder and you should find several basic objects that were included with Poser (a Box, a Ball, a Cane, a Cone, etc.). Locate the Ball as shown in the image above and load it into your scene. Feel free to move it out of the way of other objects too, but try to keep it in view for now. We will hide it when we render our image anyway.
Now comes the important part. Select each figure in the scene (each character, clothing item, etc.) and choose Set Figure Parent… When the Figure Parent window appears, choose the Ball that we just added. Repeat these steps to parent each figure or prop to the Ball. You should not have to parent smart props like jewelry or weapons to the Ball because they will follow their original parent (the figure they are smart-propped to).
Once all of the figures are parented to the Ball, go back to your Props folder and load the Box into your scene. Go ahead and move it over if you wish, like I did in the example above. Now you should have a Ball and a Box in your scene, along with your figure.
Select the Ball now and choose Object - > Change Parent… When the window opens again asking you to choose the Ball's new parent, choose the Box.
At the moment, our figure and all her clothes are parented to the Ball, and the Ball is now parented to the Box. I hope you're not too confused! :) The hard part is over now though, so let's continue and you will see what we do with those two primitive props.
Now it's time to discover why I call this the Z-toon method! Select the Box and reduce it's zScale dial to somewhere between 5% and 20%. This shrinks the depth of the Box and everything that is parented to it - which now includes the figure and all her clothes. Without depth, the objects appear as flat, two-dimensional pictures instead of a 3D mesh, and this is what creates the cartoon look.
If you rotate the camera, you will see that the entire figure has become flat and two-dimensional, almost like a paper doll. The reason why this gives the character a cartoon look is because without depth (zScale), shadows are unable to form around the contours of the mesh and instead are simply cast by the paper-thin silhouette of the figure The lower you make the zScale, the less depth the figure will have and the fewer details and shadows will appear. Anything below 5% can cause the edges to lose focus and look like dotted lines, so I like to keep it around 10%-15%. You can always smooth out unwanted shaded areas or slight flaws in post-work anyway.
The most interesting thing about this technique though is that the figure is still poseable even while flattened! Morph targets can be adjusted, arms and legs can bend, and the figure can be rotated as well. However, it's very important to remember to only rotate the figure by her Hip and not by the Body or by rotating the camera around her. Select the character's Hip (not the clothing's Hip, be sure to check before rotating) and use the dials to turn her around or even tip her at an angle. You'll see that even though the figure itself is flat, it is still able to be posed and rotated within it's own space…kind of like the three criminals who were trapped in the Phantom Zone in the Superman movies. :)
Once the figure is flattened, you should look it over and eliminate any highlights or detailed textures that detract from the cel-shaded look. I always remove skin and eye highlights first, since these are rarely desired in cartoon images. Sometimes you may want to keep Poser's highlights on shiny clothing or objects, and other times you may prefer to paint them yourself in postwork to be more accurate. Very detailed textures often look strange on a Z-toon figure, and you may find that using no texture at all and simply coloring the figure in the Materials menu gives you a cleaner, more hand-colored result.
In this step, we will use the Ball to simulate camera movement. In the example above, I added a ground prop so that we could easily see how the entire environment is rotating and not just the figure. Parent the ground object to the Ball, otherwise it will not move with the rest of the scene.
Now all you have to do is select the Ball and adjust it's x, y, and zRotate dials. Rotating or moving the ball will move all of the parented objects and figures, so by rotating the ball, you are effectively rotating your entire scene as well.
You do not have to add background or ground objects to do this; as long as the figure and any clothes are parented to the Ball (which they should be anyway), you can manipulate the Ball's position and rotation to create a different viewing angle. When making animations, just remember to rotate the Ball to animate the camera angle and it should work just fine.
Now that your Z-toon render is complete, you might also want to make a quick shadow guide to help you add shadows more accurately in postwork. It's very easy to do, and can be very helpful when you have to paint or touch up complex shadows for creating more classic anime images.
Switch to Cartoon with Line mode and your scene should look similar to the one above. At first, you might not see many shadows on your figure because it is still flattened (not 100% zScale), and because your light is probably shining directly on the figure. Changing either the light's direction or restoring the zScale to 100% should cast stronger shadows. Doing both might cast too many shadows though, so try one at a time first. :) When you're ready, anti-alias the window and export the image (don't Render it or you won't get the Cartoon with Line mode style).
Don't adjust the camera (or Ball), or change anything else in your scene at this stage though. The only point of this step is to produce a simple cartoon-style image that shows the direction and location of shadows. Color, detail, and even size do not matter here, it's just for reference. If you don't want to do postwork, or if you're making an animation and cannot add shadows by hand, you can skip this step.
That's about it! Here is a quick list of the steps in this tutorial for easy reference.
1. Create your scene normally, loading figures and hair and conforming any clothing. Try to pose the figure(s) at this stage, and don't adjust the camera angle too much.
2. Delete all but one light and shine it directly at the figure. I recommend using an Infinite 100% Intensity white light while you are learning.
3. Import the Ball that comes with Poser into the scene and parent each figure (clothing, characters, etc) to it using “Set Figure Parent …”.
4. Import the Box that comes with Poser into the scene and parent the Ball to it.
5. Select the Box and reduce it's zScale to 5%-20% (I like 10%-15% personally). This will flatten the figure and create the cartoon style look.
6. Eliminate highlights that detract from the flat cartoon effect. Usually you will at least want to remove them from the character's skin and eyes.
7. Rotate the Ball to change the view of the scene. DO NOT use the cameras or you will see that the figures are flat.
8. Switch to Cartoon with Line mode and make slight adjustments to the lights if necessary to create a shadow painting guide. This step is optional.
One more thing…remember that using a flat cartoony figure in front of non-flattened background props can look strange if you are really trying to achieve a cel-shaded look. You can always try flattening background items too…sometimes they work and sometimes they don't. ;)
Be sure to try your own ideas and techniques when using this method! Changing the light color, camera angles, and creative postwork are just a few things you can do to improve upon what I've described here.
Anyway, I hope you've found this tutorial helpful and I'll look forward to seeing any images you create using it! Take care!
SnowS
If you enjoyed this tutorial, please take a look at my other ones here and on Renderosity!
No part of this tutorial may be reprinted in any form without my permission (although I will almost always give permission). :) I will always allow translation of my tutorials into other languages however, as long as I am given credit for the original English version and am notified where I can view the translated version. Original version of this tutorial (Z-toon) available on. Thank you! | http://docs.daz3d.com/doku.php/artzone/pub/tutorials/poser/poser-misc51 | 2017-11-17T21:12:48 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.daz3d.com |
Removing Nodes, Content, and Values from an XML Document
Once an XML Document Object Model (DOM) is in memory, you can remove nodes from the tree, or remove content and values from certain node types. For information on how to remove a leaf node or entire node subtree from a document, see Removing Nodes from the DOM. For information on how to remove attributes on an element, see Removing Attributes from an Element Node in the DOM. For information on removing content of a node but leaving the node in the tree, see Removing Node Content in the DOM.
See Also
XML Document Object Model (DOM)
Inserting Nodes into an XML Document
Modifying Nodes, Content, and Values in an XML Document | https://docs.microsoft.com/en-us/dotnet/standard/data/xml/removing-nodes-content-and-values-from-an-xml-document | 2017-11-17T22:26:12 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.microsoft.com |
lifecycles.
Tip: For information on creating assets from scratch through the G-Reg Publisher, see Adding and Deleting Assets.
Make sure you have the G-Reg server started as instructed in the "Before you begin" help the results of your previous search now appear to find an asset that will calculate bill values of the products being purchased at BuyMore. Smith searches for the asset and subscribes to it in order to receive notifications regarding any changes that happen to the asset later.
Log in to the G-Reg Store () as Smith. Smith's credentials are smith/smith@bm.
- Once logged in, search for an asset to help you calculate bill values by typing "calculate" or similar in the search field and clicking the search icon. Then, note that the two services that we mentioned at the beginning of this guide (i.e.,
BuyMoreBillCalculateRESTAPIversion 1.0.0 and 2.0.0) appear in the search results.
Smith clicks both assets to read their details and notices that version 1.0.0 is in the Production state and ready for use, whereas version 2.0.0 is still in the Testing state. Therefore, Smith decides to use version 1.0.0.
Tip: The G-Reg Store displays assets in all lifecycle states by default, but can customize this behavior so that only assets of a lifecycle state/s of your choice are displayed in the Store.
- Click version 1.0.0 of the asset to open it.
The asset opens. Scroll down to see the following links:
- Swagger UI: Opens a Swagger console where you can invoke the REST asset.
- Download: Downloads the Swagger definition of the REST asset to a given location in your machine.
- Copy URL: Gives you the option to copy the URL of the Swagger definition.
Smith has now discovered Store, this.
- In the G-Reg Store, click the BuyMore tag under the Tags section.
Alternatively, you can search for an asset using the search facility or by going to a related category using the All Categories drop-down list at the top of the console.
- All the assets that have been tagged as BuyMore get listed on the console. Click version 2.0.0 of the asset
BuyMoreBillCalculateRESTAPIto open it.
Comparing versions of the asset
After discovering
BuyMoreBillCalculateRESTAPI Version 2.0.0 earlier, Smith now proceeds to compare it with version 1.0.0 to identify their differences.
- Sign in to the G-Reg Store () as Smith (smith/smith@bm).
- Go to the asset's homepage, and click the link of the underlying source of the asset under Associations > dependencies.
As this is a Swagger-based REST API, the Swagger definition of the asset opens. Click Compare with 1.0.0 in the homepage of the Swagger file that opens.
Tip: As you have only two versions of this asset, you get the button labelled with the other available version. However, if you have more than two versions of an asset, you need to first click the Compare With button, and then select the version.
Compare the differences of the two asset versions.
Reviewing the new version of the asset
In the previous step, Smith compared
BuyMoreBillCalculateRESTAPI Version 2.0.0 with its first version in the G-Reg Store. Smith then proceeds to review the new version.
- Go to the User Reviews tab of the asset and add a rating (such as the asset
Log in to the G-Reg Publisher () as Mark (credentials: mark/mark@bm).
Search for the term "BuyMoreBillCalculateRESTAPI" and click version 1.0.0 of the asset to open it.
- Click the asset
- Log in to the G-Reg Store () as Smith if you haven't done so already (credentials: smith/smith@bm).
- Click any one of the menus (such as the REST Services menu) and note that you now have two the User Guide. | https://docs.wso2.com/display/Governance520/Quick+Start+Guide | 2017-11-17T20:59:54 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.wso2.com |
Bases: astropy.io.votable.tree.SimpleElement, astropy.io.votable.tree._IDProperty, astropy.io.votable.tree._NameProperty, astropy.io.votable.tree._XtypeProperty, astropy.io.votable.tree._UtypeProperty, astropy.io.votable.tree._UcdProperty
FIELD element: describes the datatype of a particular column of data.
The keyword arguments correspond to setting members of the same name, documented below.
If ID is provided, it is used for the column name in the resulting recarray of the table. If no ID is provided, name is used instead. If neither is provided, an exception will be raised.
Attributes Summary
Methods Summary
Attributes Documentation
Specifies the size of the multidimensional array if this FIELD contains more than a single value.
See multidimensional arrays.
[required] The datatype of the column. Valid values (as defined by the spec) are:
‘boolean’, ‘bit’, ‘unsignedByte’, ‘short’, ‘int’, ‘long’, ‘char’, ‘unicodeChar’, ‘float’, ‘double’, ‘floatComplex’, or ‘doubleComplex’
Many VOTABLE files in the wild use ‘string’ instead of ‘char’, so that is also a valid option, though ‘string’ will always be converted to ‘char’ when writing the file back out.
A list of Link instances used to reference more details about the meaning of the FIELD. This is purely informational and is not used by the astropy.io.votable package.
Along with width, defines the numerical accuracy associated with the data. These values are used to limit the precision when writing floating point values back to the XML file. Otherwise, it is purely informational – the Numpy recarray containing the data itself does not use this information.
On FIELD elements, ref is used only for informational purposes, for example to refer to a COOSYS element.
The type attribute on FIELD elements is reserved for future extensions.
A string specifying the units for the FIELD.
A Values instance (or None) defining the domain of the column.
Along with precision, defines the numerical accuracy associated with the data. These values are used to limit the precision when writing floating point values back to the XML file. Otherwise, it is purely informational – the Numpy recarray containing the data itself does not use this information.
Methods Documentation
Restores a Field instance from a given astropy.table.Column instance.
Sets the attributes of a given astropy.table.Column instance to match the information in this Field.
Make sure that all names and titles in a list of fields are unique, by appending numbers if necessary. | https://astropy.readthedocs.io/en/v0.2.5/_generated/astropy.io.votable.tree.Field.html | 2017-11-17T21:14:57 | CC-MAIN-2017-47 | 1510934803944.17 | [] | astropy.readthedocs.io |
Bases: astropy.cosmology.core.Cosmology
A class describing an isotropic and homogeneous (Friedmann-Lemaitre-Robertson-Walker) cosmology.
This is an abstract base class – you can’t instantiate examples of this class, but must work with one of its subclasses such as LambdaCDM or wCDM.
Notes
Class instances are static – you can’t change the values of the parameters. That is, all of the attributes above are read only.
The neutrino treatment assumes all neutrino species are massless.
Attributes Summary
Methods Summary
Attributes Documentation
Return the Hubble constant in [km/sec/Mpc] at z=0
Number of effective neutrino species
Omega dark energy; dark energy density/critical density at z=0
Omega gamma; the density/critical density of photons at z=0
Omega curvature; the effective curvature density/critical density at z=0
Omega matter; matter density/critical density at z=0
Omega nu; the density/critical density of neutrinos at z=0
Temperature of the CMB in Kelvin at z=0
Critical density in [g cm^-3] at z=0
Dimensionless Hubble constant: h = H_0 / 100 [km/sec/Mpc]
Hubble distance in [Mpc]
Hubble time in [Gyr]
Methods Documentation
Hubble parameter (km/s/Mpc) at redshift z.
Return the density parameter for dark energy at redshift z.
Return the density parameter for photons at redshift z.
Return the equivalent density parameter for curvature at redshift z.
Return the density parameter for non-relativistic matter at redshift z.
Return the density parameter for massless neutrinos at redshift z.
Return the CMB temperature at redshift z.
Absorption distance at redshift z.
This is used to calculate the number of objects with some cross section of absorption and number density intersecting a sightline per unit redshift path.
References
Hogg 1999 Section 11. (astro-ph/9905116) Bahcall, John N. and Peebles, P.J.E. 1969, ApJ, 156L, 7B
Age of the universe in Gyr at redshift z.
Angular diameter distance in Mpc at a given redshift.
This gives the proper (sometimes called ‘physical’) transverse distance corresponding to an angle of 1 radian for an object at redshift z.
Weinberg, 1972, pp 421-424; Weedman, 1986, pp 65-67; Peebles, 1993, pp 325-327.
Angular diameter distance between objects at 2 redshifts. Useful for gravitational lensing.
Notes
This method only works for flat or open curvature (omega_k >= 0).
Angular separation in arcsec corresponding to a comoving kpc at redshift z.
Angular separation in arcsec corresponding to a proper kpc at redshift z.
Comoving line-of-sight distance in Mpc at a given redshift.
The comoving distance along the line-of-sight between two objects remains constant with time for objects in the Hubble flow.
Comoving transverse distance in Mpc at a given redshift.
This value is the transverse comoving distance at redshift z corresponding to an angular separation of 1 radian. This is the same as the comoving distance if omega_k is zero (as in the current concordance lambda CDM model).
Notes
This quantity also called the ‘proper motion distance’ in some texts.
Comoving volume in cubic Mpc at redshift z.
This is the volume of the universe encompassed by redshifts less than z. For the case of omega_k = 0 it is a sphere of radius comoving_distance(z) but it is less intuitive if omega_k is not 0.
Critical density in grams per cubic cm at redshift z.
Evaluates the redshift dependence of the dark energy density.
Notes
The scaling factor, I, is defined by
,
and is given by
It will generally helpful for subclasses to overload this method if the integral can be done analytically for the particular dark energy equation of state that they implement.
Distance modulus at redshift z.
The distance modulus is defined as the (apparent magnitude - absolute magnitude) for an object at redshift z.
Function used to calculate H(z), the Hubble parameter.
Notes
The return value, E, is defined such that
.
It is not necessary to override this method, but if de_density_scale takes a particularly simple form, it may be advantageous to.
Inverse of efunc.
Separation in transverse comoving kpc corresponding to an arcminute at redshift z.
Separation in transverse proper kpc corresponding to an arcminute at redshift z.
Lookback time in Gyr to redshift z.
The lookback time is the difference between the age of the Universe now and the age at redshift z.
Luminosity distance in Mpc at redshift z.
This is the distance to use when converting between the bolometric flux from an object at redshift z and its bolometric luminosity.
References
Weinberg, 1972, pp 420-424; Weedman, 1986, pp 60-62.
Scale factor at redshift z.
The scale factor is defined as
.
The dark energy equation of state.
Notes
The dark energy equation of state is defined as
, where
is the
pressure at redshift z and
is the density
at redshift z, both in units where c=1.
This must be overridden by subclasses. | https://astropy.readthedocs.io/en/v0.2.5/_generated/astropy.cosmology.core.FLRW.html | 2017-11-17T21:11:43 | CC-MAIN-2017-47 | 1510934803944.17 | [] | astropy.readthedocs.io |
Search all docs
Topics for securing Cassandra.
Topics for internal authentication.
Cassandra provides various security features to the open source community.
Topics for using SSL in Cassandra.
Internal authentication is based on Cassandra-controlled login accounts and passwords.
Steps for configuring authentication.
How to create a cqlshrc file to avoid having enter credentials every time you launch cqlsh.
Topics about internal authorization.
Which ports to open when nodes are protected by a firewall.
The default settings for Cassandra make JMX accessible only from localhost. To enable remote JMX connections, change
the LOCAL_JMX setting in cassandra-env.sh.. | http://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureInternalAuthenticationTOC.html | 2017-11-17T21:08:45 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.datastax.com |
To protect the wiki against automated account creation, we kindly ask you to answer the question that appears below (more info):
To pass captcha, please enter the... 8th ...characters from the sequence 909c0e5282 in the reverse order of the listing above:
909c0e5282
Real name is optional.
If you choose to provide it, this will be used for giving you attribution for your work.
edits
pages
recent contributors | https://docs.joomla.org/index.php?title=Special:UserLogin&type=signup&returnto=Making+your+site+Search+Engine+Friendly | 2015-03-26T23:57:06 | CC-MAIN-2015-14 | 1427131293283.10 | [] | docs.joomla.org |
Web
- Central Pa Java Users Group
- Dallas/Ft. Worth Groovy/Grails Users Group
- San Diego Groovy Grails Group
South America
Europe
- Belgium
- Belgian Grails and Groovy User Group: First meeting held at ixor on 2009 March 19
- Croatia
- France
- Germany
-.
- Munich Groovy, Grails & Griffon User Group
- Berlin Groovy User Group focused on the whole Groovy universe (Groovy, Grails, Griffon, Gradle, ...)
- Italy
- Norway
- Slovakia
- Spain
- UK
Australia
- Melbourne Groovy User Group held at the Aegeon offices on the 1st Monday of each Month.
- Groovy & Grails Queensland
1 Comment
Ebony Fitzgerald
People deserve good life time and personal loans or just secured loan can make it better. Because freedom bases on money state. | http://docs.codehaus.org/pages/viewpage.action?pageId=53825 | 2015-03-26T23:49:35 | CC-MAIN-2015-14 | 1427131293283.10 | [] | docs.codehaus.org |
This page lists all the plugins hosted on our forge.
*: Commercial plugin
Tempted to write your own plugin and share it on the forge? Want to contribute your plugin back to the community? Just follow the instructions on the Hosting on the Forge page.
Other plugins
- External plugins
- Deprecated plugins
- Plugins raising issues (that may affect the platform’s stability and performance) | http://docs.codehaus.org/pages/diffpages.action?pageId=116359189&originalId=239371168 | 2015-03-26T23:57:53 | CC-MAIN-2015-14 | 1427131293283.10 | [] | docs.codehaus.org |
When writing any web application, it is crucial that you filter input data before using it. Joomla! provides a set of filtering libraries to help you accomplish this.:
Finally, there are some mask constants you can pass in as the fifth parameter that allow you to bypass portions of the filtering:
The class JRequest is defined in the following location.
libraries\joomla\environment\request.php | https://docs.joomla.org/index.php?title=Retrieving_and_Filtering_GET_and_POST_requests_with_JRequest::getVar&diff=12820&oldid=9058 | 2015-03-26T23:56:21 | CC-MAIN-2015-14 | 1427131293283.10 | [] | docs.joomla.org |
Create a drop-down menu on my storefront
A drop-down menu displays a drop-down list of links when your customer hovers their mouse over a link in your store's navigation menu.
Create a drop-down menu (e.g. from the Catalog button)
1. Click to the Navigation page of the admin
The navigation page of most new stores has both a Main Menu link list and a Footer link list. You can make drop-down menus on your storefront from any link in the Main Menu.
This means you can drop-down from an existing link, or add a new link to the main menu first, to use as the head of your drop-down menu before proceeding.
This walkthrough will show you how to make a drop-down from the existing Catalog link.
2. Click Add link list.
3. Type in the name for your navigation. The handle will be automatically generated for you (do not edit it):
For this walkthrough to work, the name you enter MUST be the same as the name of the link in your Main Menu that you wish to drop-down from. Because we want to drop-down from Catalog, we will name our link list "Catalog".
4. Click Add another link one time for each link you want to include in your drop-down menu.
5. For each link you've clicked to add, type in a Link Name, then click Links To... drop-down box and choose your destinations for these links (e.g. you can link to specific collections, all products, or pages etc.)
You can set anything you like for these Link Names (they will appear to your customers), there are no naming rules like when we named the Link List.
6. Click
Save.
7. To check your work, click back into the Navigation page of your admin. You should have a new link list with the same name as one of the Main Menu links, like this:
8. You can check your work in your storefront by clicking the "View your website" button in the bottom left:
9. Hover over the Catalog button to view the drop-down menu:
I don't want to drop down from Catalog – can I drop down from another button?
Yes, you can do this for any button in the main menu, not just for Catalog. The key things to remember are:
If you want a drop-down menu from a button other than Home, Catalog, Blog, or About us, you must first add the link to your main menu.
When you name your new link list in Step 3, the name you enter MUST be the same as the name of the link in your Main Menu that you wish to drop-down from.
My drop-down menu won't work with non-English characters
If the name of your linklist uses characters from Arabic, Hebrew or Cyrillic alphabet, or is in Chinese or Japanese, chances are (unless you are using the New Standard theme), that your drop-down menu will not work.
Are you extremely code-savvy? Do you have a designer/developer you are employing? We have a hack you can try: Can I have drop-down menus that use non-English characters?. Please read our Disclaimer before you attempt this. | http://docs.shopify.com/manual/your-website/navigation/create-drop-down-menu | 2015-03-26T23:39:54 | CC-MAIN-2015-14 | 1427131293283.10 | [array(['/assets/images/manual/links/store-front.jpg?1427403215', '#'],
dtype=object)
array(['/assets/images/manual/links/drop-down-1.jpg?1427403215', '#'],
dtype=object)
array(['/assets/images/manual/links/add-link-list-btn.jpg?1427403215',
'#'], dtype=object)
array(['/assets/images/manual/links/link-list-name.jpg?1427403215', '#'],
dtype=object)
array(['/assets/images/manual/links/add-a-link.jpg?1427403215', '#'],
dtype=object)
array(['/assets/images/manual/links/for-each-link.jpg?1427403215', '#'],
dtype=object)
array(['/assets/images/manual/links/drop-down-relationship.jpg?1427403215',
'#'], dtype=object)
array(['/assets/images/manual/links/view-your-website.jpg?1427403215',
'#'], dtype=object)
array(['/assets/images/manual/links/store-front.jpg?1427403215', '#'],
dtype=object) ] | docs.shopify.com |
public class JobSynchronizationManager extends java.lang.Object
Job. N.B. it is the responsibility of every
Jobimplementation to ensure that a
JobContextis available on every thread that might be involved in a job execution, including worker threads from a pool.
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public JobSynchronizationManager()
public static JobContext getContext()
JobContextor null if there is none (if one has not been registered for this thread).
public static JobContext register(JobExecution JobExecution)
close()call in a finally block to ensure that the correct context is available in the enclosing block.
JobExecution- the step context to register
JobContextor the current one if it has the same
JobExecution
public static void close()
register(JobExecution)to ensure that
getContext()always returns the correct value. Does not call
JobContext.close()- that is left up to the caller because he has a reference to the context (having registered it) and only he has knowledge of when the step actually ended.
public static void release()
close()if the step execution for the current context is ending. Delegates to
JobContext.close()and then ensures that
close()is also called in a finally block. | http://docs.spring.io/spring-batch/apidocs/org/springframework/batch/core/scope/context/JobSynchronizationManager.html | 2015-03-27T00:04:34 | CC-MAIN-2015-14 | 1427131293283.10 | [] | docs.spring.io |
, and events generated by the Address Manager application itself. An Event List includes start and end times for all deployments, and the deployment duration.
- Application—events related to the operation of the Address Manager software.
- Deployment Service—events related to deploying data to servers managed by Address Manager.Note:.
- Apply IPv4 Network Template—events related to IPv4 Network Templates.
- SSO and OAuth Management Service—events related to certificates with the SSO and OAuth configurations. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Managing-Events/9.2.0 | 2022-05-16T18:42:39 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.bluecatnetworks.com |
Creating and managing mount targets
After you create an Amazon EFS file system, you can create mount targets. For Amazon EFS file systems that use Standard storage classes, you can create a mount target in each Availability Zone in an AWS Region. For EFS file systems that use One Zone storage classes, you can only create a single mount target in the same Availability Zone as the file system. Then you can mount the file system on compute instances, including Amazon EC2, Amazon ECS, and AWS Lambda in your virtual private cloud (VPC).
The following diagram shows an Amazon EFS file system that uses Standard storage classes, with mount targets created in all Availability Zones in the VPC.
The following diagram shows an Amazon EFS file system using One Zone storage classes, with a single mount target created in the same Availability Zone as the file system. Accessing the file system using the EC2 instance in the us-west2c Availability Zone incurs data access charges because it is located in a different Availability Zone than the mount target.
The mount target security group acts as a virtual firewall that controls the traffic. For example, it determines which clients can access the file system. This section explains the following:
Managing mount target security groups and enabling traffic.
Mounting the file system on your clients.
NFS-level permissions considerations.
Initially, only the root user on the Amazon EC2 instance has read-write-execute permissions on the file system. This topic discusses NFS-level permissions and provides examples that show you how to grant permissions in common scenarios. For more information, see Working with users, groups, and permissions at the Network File System (NFS) Level.
You can create mount targets for a file system using the AWS Management Console, AWS CLI, or programmatically using the AWS SDKs. When using the console, you can create mount targets when you first create a file system or after the file system is created.
For instructions to create mount targets using the Amazon EFS console when creating a new file system, see Step 2: Configure network access.
Use the following procedure to add or modify mount targets for an existing Amazon EFS file system.
To manage mount targets on an Amazon EFS file system (console)
Sign in to the AWS Management Console and open the Amazon EFS console at
In the left navigation pane, choose File systems. The File systems page displays the EFS file systems in your account.
Choose the file system that you want to manage mount targets for by choosing its Name or the File system ID to display the file system details page.
Choose Network to display the list of existing mount targets.
Choose Manage to display the Availability zone page and make modifications.
On this page, for existing mount targets, you can add and remove security groups, or delete the mount target. You can also create new mount targets.
Note
For file systems that use One Zone storage classes, you can only create a single mount target that is in the same Availability Zone as the file system.
To remove a security group from a mount target, choose X next to the security group ID.
To add a security group to a mount target, choose Select security groups to display a list of available security groups. Or, enter a security group ID in the search field at the top of the list.
To queue a mount target for deletion, choose Remove.
Note
Before deleting a mount target, first unmount the file system.
To add a mount target, choose Add mount target. This is available only for file systems that use Standard storage classes, and if mount targets do not already exist in each Availability Zone for the AWS Region.
Choose Save to save any changes.
To change the VPC for an Amazon EFS file system (console)
To change the VPC for a file system's network configuration, you must delete all of the file system's existing mount targets.
Open the Amazon Elastic File System console at
In the left navigation pane, choose File systems. The File systems page shows the EFS file systems in your account.
For the file system that you want to change the VPC for, choose the Name or the File system ID. The file system details page is displayed.
Choose Network to display the list of existing mount targets.
Choose Manage. The Availability zone page appears.
Remove all mount targets displayed on the page.
Choose Save to save changes and delete the mount targets. The Network tab shows the mount targets status as deleting.
When all the mount targets statuses show as deleted, choose Manage. The Availability zone page appears.
Choose the new VPC from the Virtual Private Cloud (VPC) list.
Choose Add mount target to add a new mount target. For each mount target you add, enter the following:
An Availability zone
A Subnet ID
An IP address, or keep it set to Automatic
One or more Security groups
Choose Save to implement the VPC and mount target changes.
For file systems that use One Zone storage classes, you can only create a single mount target that is in the same Availability Zone as the file system.
To create a mount target (CLI)
To create a mount target, use the
create-mount-targetCLI command (corresponding operation is CreateMountTarget), as shown following.
file-system-id\ --subnet-id
subnet-id\ --security-group
ID-of-the-security-group-created-for-mount-target\ --region
aws-region\ --profile adminuser
The following example shows the command with sample data.
After successfully creating the mount target, Amazon EFS returns the mount target description as JSON as shown in the following example.
{ "MountTargetId": "fsmt-f9a14450", "NetworkInterfaceId": "eni-3851ec4e", "FileSystemId": "fs-b6a0451f", "LifeCycleState": "available", "SubnetId": "subnet-b3983dc4", "OwnerId": "23124example", "IpAddress": "10.0.1.24" }
To retrieve a list of mount targets for a file system (CLI)
You can also retrieve a list of mount targets created for a file system using the
describe-mount-targetsCLI command (the corresponding operation is DescribeMountTargets), as shown following.
{ "MountTargets": [ { "OwnerId": "111122223333", "MountTargetId": "fsmt-48518531", "FileSystemId": "fs-a576a6dc", "SubnetId": "subnet-88556633", "LifeCycleState": "available", "IpAddress": "172.31.25.203", "NetworkInterfaceId": "eni-0123456789abcdef1", "AvailabilityZoneId": "use2-az2", "AvailabilityZoneName": "us-east-2b" }, { "OwnerId": "111122223333", "MountTargetId": "fsmt-5651852f", "FileSystemId": "fs-a576a6dc", "SubnetId": "subnet-44223377", "LifeCycleState": "available", "IpAddress": "172.31.46.181", "NetworkInterfaceId": "eni-0123456789abcdefa", "AvailabilityZoneId": "use2-az3", "AvailabilityZoneName": "us-east-2c" }, { "OwnerId": "111122223333", "MountTargetId": "fsmt-5751852e", "FileSystemId": "fs-a576a6dc", "SubnetId": "subnet-a3520bcb", "LifeCycleState": "available", "IpAddress": "172.31.12.219", "NetworkInterfaceId": "eni-0123456789abcdef0", "AvailabilityZoneId": "use2-az1", "AvailabilityZoneName": "us-east-2a" } ] }
To delete an existing mount target (CLI)
To delete an existing mount target, use the
delete-mount-targetAWS CLI command (corresponding operation is DeleteMountTarget), as shown following.
Note
Before deleting a mount target, first unmount the file system.
mount-target-ID-to-delete\ --region
aws-region-where-mount-target-exists
The following is an example with sample data.
To modify the security group of an existing mount target
To modify security groups that are in effect for a mount target, use the
modify-mount-target-security-groupAWS CLI command (the
The following is an example with sample data.
$ aws efs modify-mount-target-security-groups \ --mount-target-id
fsmt-5751852e\ --security-groups
sg-1004395a sg-1114433a\ --region
us-east-2
For more information, see Walkthrough: Create an Amazon EFS file system and mount it on an Amazon EC2 instance using the AWS CLI. | https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html | 2022-05-16T20:00:50 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.aws.amazon.com |
What’s new in Surface Dock 2
Surface Dock 2, the next-generation Surface dock, lets users connect external monitors and multiple peripherals for a fully modernized desktop experience from a Surface device. Built to maximize efficiency at the office, in a flexible workspace, or at home, Surface Dock 2 features seven ports, including two front-facing USB-C ports, with 15 watts of fast charging power for phones and accessories.
Full device management support
Surface Dock 2 is designed to simplify IT management, enabling admins to automate firmware updates using Windows Update or centralize updates with internal software distribution tools.
- Surface Enterprise Management Mode (SEMM) enables IT admins to secure ports on Surface Dock 2. For more information, see Secure Surface Dock 2 ports with Surface Enterprise Management Mode.
- Windows Management Instrumentation (WMI) support enables IT admins to remotely monitor and manage the latest firmware, policy state, and related data across Surface Dock 2 devices. For more information, see Manage Surface Dock 2 with WMI.
- Centralize updates on your local network using software distribution tools. Download Surface Dock 2 Firmware and Drivers.
General system requirements
Windows 10 version 1809 and later. There is no support for Windows 7, Windows 8, or non-Surface host devices. Surface Dock 2 works with the following Surface devices:
- Surface Pro (5th Gen)
- Surface Laptop (1st Gen)
- Surface Pro 6
- Surface Book 2
- Surface Laptop 2
- Surface Go
- Surface Pro 7
- Surface Pro X
- Surface Laptop 3
- Surface Book 3
- Surface Go 2
- Surface Laptop Go
- Surface Pro 7+
- Surface Laptop 4
- Surface Laptop Studio
- Surface Pro 8
- Surface Go 3
Surface Dock 2 Components
USB
- Two front-facing USB-C ports
- Two rear-facing USB-C (gen 2) ports
- Two rear-facing USB-A ports
Video
Dual 4K@60Hz. Supports up to two displays on the following devices:
- Surface Laptop Studio
- Surface Book 3
- Surface Pro 8
- Surface Pro 7
- Surface Pro 7+
- Surface Pro X
- Surface Laptop 3
- Surface Laptop 4
Dual 4K@30Hz. Supports up to two displays on the following devices:
- Surface Pro 6
- Surface Pro (5th Gen)
- Surface Laptop 2
- Surface Laptop (1st Gen)
- Surface Go
- Surface Go 2
- Surface Go 3
- Surface Book 2
Ethernet
- 1 gigabit Ethernet port.
External Power supply
- 199 watts supporting 100V-240V.
Compare Surface Dock
Table 1. Surface Dock and USB-C Travel Hub .
Devices must be configured for Wake on LAN via Surface Enterprise Management Mode (SEMM) or Device Firmware Control Interface (DFCI) to wake from hibernation or power-off states. Wake from hibernation or power-off is supported on Surface Laptop Studio, Surface Pro 8, Surface Pro 7+, Surface Pro 7, Surface Laptop 4, Surface Laptop 3, Surface Pro X, Surface Book 3, Surface Go 3, and Surface Go 2. Software license required for some features. Sold separately.
Software license required for some features. Sold separately.
Streamlined device management
Surface has released streamlined management functionality via Windows Update enabling IT admins to utilize the following enterprise-grade features:
- Frictionless updates. Update your docks silently and automatically, with Windows Update or Microsoft Endpoint Configuration Manager (formerly System Center Configuration Manager - SCCM) or other MSI deployment tools.
- Wake from the network. Manage and access corporate devices without depending on users to keep their devices powered on. Even when a docked device is in sleep, hibernation, or power off mode, your team can wake from the network for service and management, using Endpoint Configuration Manager or other enterprise management tools.
- Centralized IT control. Control who can connect to Surface Dock 2 by turning ports on and off. Restrict which host devices can be used with Surface Dock 2. Limit dock access to a single user or configure docks for access only by specific users in your team or across the entire company.
Next steps
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/surface/surface-dock-whats-new | 2022-05-16T17:58:09 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.microsoft.com |
Default Roles¶
Primer¶
Like most OpenStack services, keystone protects its API using role-based access control (RBAC).
Users can access different APIs depending on the roles they have on a project, domain, or system.
As of the Rocky release, keystone provides three roles called
admin,
member, and
reader by default. Operators can grant these roles to any
actor (e.g., group or user) on any target (e.g., system, domain, or project).
If you need a refresher on authorization scopes and token types, please refer
to the token guide. The following sections describe how each default role
behaves with keystone’s API across different scopes.
Default roles and behaviors across scopes allow operators to delegate more functionality to their team, auditors, customers, and users without maintaining custom policies.
Roles Definitions¶
The default roles imply one another. The
admin role implies the
member
role, and the
member role implies the
reader role. This implication
means users with the
admin role automatically have the
member and
reader roles. Additionally, users with the
member role automatically
have the
reader role. Implying roles reduces role assignments and forms a
natural hierarchy between the default roles. It also reduces the complexity of
default policies by making check strings short. For example, a policy that
requires
reader can be expressed as:
"identity:list_foo": "role:reader"
Instead of:
"identity:list_foo": "role:admin or role:member or role:reader"
Reader¶
The
reader role provides read-only access to resources within the system, a
domain, or a project. Depending on the assignment scope, two users with the
reader role can expect different API behaviors. For example, a user with
the
reader role on the system can list all projects within the deployment.
A user with the
reader role on a domain can only list projects within their
domain.
By analyzing the scope of a role assignment, we increase the re-usability of
the
reader role and provide greater functionality without introducing more
roles. For example, to accomplish this without analyzing assignment scope, you
would need
system-reader,
domain-reader, and
project-reader roles
in addition to custom policies for each service.
Member¶
Within keystone, there isn’t a distinct advantage to having the
member role
instead of the
reader role. The
member role is more applicable to other
services. The
member role works nicely for introducing granularity between
admin and
reader roles. Other services might write default policies
that require the
member role to create resources, but the
admin role to
delete them. For example, users with
reader on a project could list
instance, users with
member on a project can list and create instances, and
users with
admin on a project can list, create, and delete instances.
Service developers can use the
member role to provide more flexibility
between
admin and
reader on different scopes.
Admin¶
We reserve the
admin role for the most privileged operations within a given
scope. It is important to note that having
admin on a project, domain, or
the system carries separate authorization and are not transitive. For example,
users with
admin on the system should be able to manage every aspect of the
deployment because they’re operators. Users with
admin on a project
shouldn’t be able to manage things outside the project because it would violate
the tenancy of their role assignment (this doesn’t apply consistently since
services are addressing this individually at their own pace).
Note
As of the Train release, keystone applies the following personas consistently across its API.
System Administrators¶
System administrators are allowed to manage every resource in keystone. System administrators are typically operators and cloud administrators. They can control resources that ultimately affect the behavior of the deployment. For example, they can add or remove services and endpoints in the catalog, create new domains, add federated mappings, and clean up stale resources, like a user’s application credentials or trusts.
You can find system administrators in your deployment with the following assignments:
$ openstack role assignment list --names --system all +-------+------------------+-----------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +-------+------------------+-----------------------+---------+--------+--------+-----------+ | admin | | system-admins@Default | | | all | False | | admin | admin@Default | | | | all | False | | admin | operator@Default | | | | all | False | +-------+------------------+-----------------------+---------+--------+--------+-----------+
System Members & System Readers¶
In keystone, system members and system readers are very similar and have the same authorization. Users with these roles on the system can view all resources within keystone. They can audit role assignments, users, projects, and group memberships, among other resources.
The system reader persona is useful for auditors or members of a support team. You can find system members and system readers in your deployment with the following assignments:
$ openstack role assignment list --names --system all --role member --role reader +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | reader | | system-auditors@Default | | | all | False | | admin | operator@Default | | | | all | False | | member | system-support@Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+
Domain Administrators¶
Domain administrators can manage most aspects of the domain or its contents. These users can create new projects and users within their domain. They can inspect the role assignments users have on projects within their domain.
Domain administrators aren’t allowed to access system-specific resources or resources outside their domain. Users that need control over project, group, and user creation are a great fit for domain administrators.
You can find domain administrators in your deployment with the following role assignment:
$ openstack role assignment list --names --domain foobar --role admin +-------+----------------+----------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +-------+----------------+----------------------+---------+--------+--------+-----------+ | admin | jsmith@Default | | | foobar | | False | | admin | | foobar-admins@foobar | | foobar | | False | +-------+----------------+----------------------+---------+--------+--------+-----------+
Domain Members & Domain Readers¶
Domain members and domain readers have the same relationship as system members and system readers. They’re allowed to view resources and information about their domain. They aren’t allowed to access system-specific information or information about projects, groups, and users outside their domain.
The domain member and domain reader use-cases are great for auditing, support, or monitoring the details of an account. You can find domain members and domain readers with the following role assignments:
$ openstack role assignment list --names --role member --domain foobar +--------+-------------+-------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+-------------+-------+---------+--------+--------+-----------+ | member | jdoe@foobar | | | foobar | | False | +--------+-------------+-------+---------+--------+--------+-----------+ $ openstack role assignment list --names --role reader --domain foobar +--------+-----------------+-------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+-----------------+-------+---------+--------+--------+-----------+ | reader | auditor@Default | | | foobar | | False | +--------+-----------------+-------+---------+--------+--------+-----------+
Project Administrators¶
Project administrators can only view and modify data within the project in their role assignment. They’re able to view information about their projects and set tags on their projects. They’re not allowed to view system or domain resources, as that would violate the tenancy of their role assignment. Since the majority of the resources in keystone’s API are system and domain-specific, project administrators don’t have much authorization.
You can find project administrators in your deployment with the following role assignment:
$ openstack role assignment list --names --project production --role admin +-------+----------------+--------------------------+-------------------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +-------+----------------+--------------------------+-------------------+--------+--------+-----------+ | admin | jsmith@Default | | production@foobar | | | False | | admin | | production-admins@foobar | production@foobar | | | False | +-------+----------------+--------------------------+-------------------+--------+--------+-----------+
Project Members & Project Readers¶
Project members and project readers can discover information about their projects. They can access important information like resource limits for their project, but they’re not allowed to view information outside their project or view system-specific information.
You can find project members and project readers in your deployment with the following role assignments:
$ openstack role assignment list --names --project production --role member +--------+------+--------------------------+-------------------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------+--------------------------+-------------------+--------+--------+-----------+ | member | | foobar-operators@Default | production@foobar | | | False | +--------+------+--------------------------+-------------------+--------+--------+-----------+ $ openstack role assignment list --names --project production --role reader +--------+-----------------+----------------------------+-------------------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+-----------------+----------------------------+-------------------+--------+--------+-----------+ | reader | auditor@Default | | production@foobar | | | False | | reader | | production-support@Default | production@foobar | | | False | +--------+-----------------+----------------------------+-------------------+--------+--------+-----------+
Writing Policies¶
If the granularity provided above doesn’t meet your specific use-case, you can still override policies and maintain them manually. You can read more about how to do that in oslo.policy usage documentation. | https://docs.openstack.org/keystone/ussuri/admin/service-api-protection.html | 2022-05-16T18:06:57 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.openstack.org |
Last updated 6th December 2021
Objective
This guide aims to familiarise you with the management of your containers / objects.
Requirements
- Access to the OVHcloud Control Panel
- an OpenStack user
Instructions
User management
Once your user has been created, you must generate its S3 certificates.
Add a user to a container.
...at the end of the line of your bucket and then
Select the user to add to your bucket and click
Define access to your bucket for this user and click
Managing objects
Click on the
Display objects.
...at the end of the line of your bucket and then on
Click on
+ Add objects.
If needed, set a prefix, click
Select files then
Import.
You can now interact with your object.
Using the AWS CLI
Installation
user@host:~$ pip3 install python-openstackclient awscli awscli-plugin-endpoint
Install the
groff package if you want to use the command line help
Configuration
S3 tokens are different, you need 2 parameters (access and secret) to generate an S3 token. These credentials will be stored securely in Keystone. Follow the next steps to generate it.
Set the OpenStack environment variables:
user@host:~$ source openrc.sh
If necessary, download your user's OpenRC file.
Finally, with the python-openstack client:
user@host:~$ openstack ec2 credentials create +------------+--------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------+ | access | 86cfae29192b4cedb49bbc0f067a9df8 | | links | {'self': ' | | project_id | 702de32b692c4842b0bb751dc5085daf | | secret | 3b3e625d867d4ddb9e748426daf5aa6a | | trust_id | None | | user_id | a1a8da433b04476593ce9656caf85d66 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------+
Configure the aws client as follows:
user@host:~$ cat ~/.aws/credentials [default] aws_access_key_id = <access_key> aws_secret_access_key = <secret_key> user@host:~$ cat ~/.aws/config [profile default] region = <region> s3 = endpoint_url = signature_version = s3v4 s3api = endpoint_url =
You can also use interactive configuration by running the following command:
aws --configure
Here are the configuration values you can specifically set:
Usage
If you don't have
awscli-plugin-endpoint installed, you must add
--endpoint-url to the command line.
If you have defined multiple profiles, add
--profile <profile> to the command line.
Create a bucket
aws s3 mb s3://<bucket_name> aws --endpoint-url -. | https://docs.ovh.com/gb/en/storage/s3/getting-started-with-s3/ | 2022-05-16T19:18:49 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.ovh.com |
RadiusCore is an Excel add-on that connects Excel directly to Xero. Designed for accountants and bookkeepers, RadiusCore can help ensure accuracy of workpapers, budgets, forecasts and more. With RadiusCore you can improve workflow by minimising manual data entry and creating efficiencies that were not previously possible.
Designed to deliver organisation-wide access to all connected Xero organisations, RadiusCore is structured to provide wide-spread, unlimited access to those organisations that need it. Link each client only once and have access ready to go across multiple Excel workbooks, and even for subsequent financial years.
RadiusCore also features a full-scope command line interface (CLI). Because it has been built in native VBA code, you can build custom implementations into any Excel workbook. A basic level of ‘macro’ knowledge is needed to use the CLI, but if you don’t have the necessary expertise the team at RadiusCore are always happy to help you achieve the integration you are after!
With RadiusCore, data flows to and from Xero. The diagram below outlines an expected dataflow you can use RadiusCore for. | https://docs.radiuscore.co.nz/user_documentation/introduction | 2022-05-16T18:22:58 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.radiuscore.co.nz |
Documents
Search…
Documents
Introduction
Roadmap
Social Network Features
SocialFi
BSocial Token (BINS)
Active Account
Social Engagement Reward System
Account (NFT) and Marketplace
Information
For Beginer
DRK Coin Reward
BINS Social Media
GitBook
Account (NFT) and Marketplace
Renovating Social Networking
What is a.
Source:
How will NFTs be implemented inside Bsocial?
Each Bsocial account corresponds to a Non-fungible Token (NFT)
. Following the decentralized movement, Bsocial aims to orient a social network without the presence of one’s identity. The value of an account here will genuinely be shown by its content quality, follower count and also, the quality of its followers.
When an account is created and consequentially activated, it will mint a NFT that contains a userID at Bsocial. Users will be able to conveniently log in to the platform by connecting the wallet that contains the NFT. The NFT Account system at Bsocial is built on Binance Smart Chain.
With all the distinctive benefits of a NFT, such as uniqueness, high security and transferability, each account will be a valuable asset that can be traded.
Marketplace
Being inspired by Telegram, plus the NFT Account system, Bsocial strives to bring out great privacy for users and “price” every account based on their published content, follower quality and account’s rank. On Bsocial, it doesn't matter who you are. Your value lies in what you have accomplished in terms of socializing, reflected through engagement statistics, and the quality of your published content. These values might directly or indirectly create some benefits for you, depending on how you look at it. A clear observation is that, if you are doing well on the platform, the monthly rewards for Top-followed Accounts can be your source of passive income. Therefore, each account can be viewed as a token with its own intrinsic value at Bsocial.
Marketplace is introduced as a medium that creates assurance and convenience for peer-to-peer (P2P) account trading. Marketplace contract is set on Binance Smart Chain. Users trade accounts as NFTs.
SocialFi - Previous
Social Engagement Reward System
Next - Information
For Beginer
Last modified
8mo ago
Copy link
Contents
What is a NFT?
How will NFTs be implemented inside Bsocial?
Marketplace | https://docs.bsocial.pro/socialfi/nft-and-marketplace | 2022-05-16T19:23:09 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.bsocial.pro |
For teams that rely on Okta for provisioning, Custom Roles fully integrate with Okta! Any role created in ClickUp can be added as an option within Okta, making it easy to integrate Custom Roles into existing workflows.
Okta support is exclusive to the Enterprise Plan. Learn more about our Plans here. For help with sales, please contact [email protected].
How to configure your integration
Note: Before you're able to set up provisioning, you must have Okta SSO enabled for your Workspace. Please refer to this guide for instructions on how to set up Okta SSO.
Once SSO is enabled, you'll be presented with a SCIM Base URL and SCIM API Token
2. In the Okta Dashboard, navigate to the ClickUp application and click the
Provisioning tab
3. Check the
Enable provisioning features box
4. Click
Configure API Integration
5. Check the
Enable API integration box
6. Enter the SCIM Base URL and SCIM API Token you received in step 1
7. Click Test API Credentials; if successful, a verification message will appear
8. Click
Save
9. Select
To App in the left panel, then select the
Provisioning Features you want to enable.
10. Assign people to the app (if needed) and finish the application setup.
11. When assigning users or groups, assign the ClickUp Role attribute (guest, member, or admin) Note: If this attribute is unset, it will default to a member user
What you can do
Push New Users
- New users created through Okta will also be created in the third party application.
Note: For this application, deactivating a user means removing access to login, but maintaining the user's Chorus information as an inactive user
Reactivate Users
- User accounts can be reactivated in the application.
Troubleshooting & Tips
1. Once a user is created in ClickUp, it will not receive updates when the
givenName,
lastName, or
2. When users are deactivated in Okta, they will be removed from the associated ClickUp Workspace. Users will not be able to access anything in that Workspace, but their data will remain available as an ‘inactive user’.
3. To set a custom role for you users, you can map to either the
customRoleName attribute, or the
customRoleId attribute. If you do not have someone that can access the ClickUp Public API, create an attribute in the Okta profile that is an enumerated list of names that match the custom roles that you created in your ClickUp Workspace and make sure this maps to
customRoleName during user provisioning. Please note that if the role name is changed in ClickUp, this mapping will break. If you can access the ClickUp Public API, use the
customRoleId attribute to ensure that the custom role mapping will not break on name change. To find out the ID’s that correspond to the Custom Roles that you created, use this endpoint to find the list of roles available on your workspace. | https://docs.clickup.com/en/articles/4564997-okta-scim-clickup-configuration-guide | 2022-05-16T19:17:23 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.clickup.com |
Deploy the InfluxData Platform in Google Cloud Platform
For deploying InfluxDB Enterprise clusters on Google Cloud Platform (GCP) infrastructure, InfluxData provides an InfluxDB Enterprise bring-your-own-license (BYOL) solution on the Google Cloud Platform Marketplace that makes the installation and setup process easy and straightforward. Clusters deployed through the GCP Marketplace are ready for production.
The Deployment Manager templates used for the InfluxDB Enterprise BYOL solution are open source. Issues and feature requests for the Marketplace deployment should be submitted through the related GitHub repository (requires a GitHub account) or by contacting InfluxData support.
Prerequisites
This guide requires the following:
- A Google Cloud Platform (GCP) account with access to the GCP Marketplace.
- A valid InfluxDB Enterprise license key, or sign up for a free InfluxDB Enterprise trial for GCP.
- Access to GCP Cloud Shell or the
gcloudSDK and command line tools.
To deploy InfluxDB Enterprise on platforms other than GCP, please see InfluxDB Enterprise installation options.
Deploy a cluster
To deploy an InfluxDB Enterprise cluster, log in to your Google Cloud Platform account and navigate to InfluxData’s InfluxDB Enterprise (BYOL) solution in the GCP Marketplace.
Click Launch on compute engine to open up the configuration page.
Copy the InfluxDB Enterprise license key to the InfluxDB Enterprise license key field or sign up for a free InfluxDB Enterprise trial for GCP to obtain a license key.
Adjust any other fields as desired. The cluster will only be accessible within the network (or subnetwork, if specified) in which it is deployed. The fields in collapsed sections generally do not need to be altered.
Click Deploy to launch the InfluxDB Enterprise cluster.
The cluster will take up to five minutes to fully deploy. If the deployment does not complete or reports an error, read through the list of common deployment errors.
Your cluster is now deployed!
Make sure you save the “Admin username”, “Admin password”, and “Connection internal IP” values displayed on the screen. They will be required when attempting to access the cluster.
Access the cluster
The cluster’s IP address is only reachable from within the GCP network (or subnetwork) specified in the solution configuration. A cluster can only be reached from instances or services within the same GCP network or subnetwork in which it was provisioned.
Using the GCP Cloud Shell or
gcloud CLI, create a new instance that will be used to access the InfluxDB Enterprise cluster.
gcloud compute instances create influxdb-access --zone us-central1-f --image-family debian-9 --image-project debian-cloud
SSH into the instance.
gcloud compute ssh influxdb-access
On the instance, install the
influx command line tool via the InfluxDB open source package.
wget sudo dpkg -i influxdb_1.6.3_amd64.deb
Now the InfluxDB Enterprise cluster can be accessed using the following command with “Admin username”, “Admin password”, and “Connection internal IP” values from the deployment screen substituted for
<value>.
influx -username <Admin username> -password <Admin password> -host <Connection internal IP> -execute "CREATE DATABASE test" influx -username <Admin username> -password <Admin password> -host <Connection internal IP> -execute "SHOW DATABASES"
Next steps
For an introduction to InfluxDB database and the InfluxData Platform, see Getting started with InfluxDB.. | https://docs.influxdata.com/platform/install-and-deploy/deploying/google-cloud-platform/ | 2022-05-16T18:45:40 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.influxdata.com |
.
To see REST API features and to try them out, go to the the REST API autogenerated reference.
To access it, log in to Plesk, go to Tools & Settings > Remote API (REST) (under “Server Management”),
and then click API Reference and Playground (you will need to provide the
server
root or
administrator user credentials). | https://docs.plesk.com/en-US/obsidian/api-rpc/introduction.79358/ | 2022-05-16T17:47:26 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.plesk.com |
.
You can see the WAF rule you created in the Firewall rules table.
More resources | https://docs.sophos.com/nsg/sophos-firewall/18.0/Help/en-us/webhelp/onlinehelp/AdministratorHelp/RulesAndPolicies/WebServerProtection/WAF/Rules/WAFRuleAdd/index.html | 2022-05-16T18:48:04 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.sophos.com |
Runtime Image Configuration¶
A runtime image configuration identifies a container image that Elyra can utilize to run pipeline nodes on container-based platforms, such as Kubeflow Pipelines or Apache Airflow.
Prerequisites¶
A runtime image configuration is associated with a container image that must meet these prerequisites:
- The image is stored in a container registry in a public or private network that the container platform in which the pipeline is executed can connect to. Examples of such registries are hub.docker.com or a self-managed registry in an intranet environment.
- The image must have a current
Python 3version pre-installed and
python3in the search path.
- The image must have
curlpre-installed and in the search path.
Refer to Creating a custom runtime container image for details.
You can manage runtime image configurations using the JupyterLab UI or the Elyra CLI.
Managing runtime image configurations using the JupyterLab UI¶
Runtime image configurations can be added, modified, duplicated, and removed in the Runtime Images panel.
To access the panel in JupyterLab:
Click the
Open Runtime Imagesbutton in the pipeline editor toolbar.
OR
Select the
Runtime Imagespanel from the JupyterLab sidebar.
OR
Open the JupyterLab command palette (
Cmd/Ctrl + Shift + C) and search for
Manage Runtime Images.
Adding a runtime image configuration¶
To add a runtime image configuration:
- Open the
Runtime Imagespanel.
- Click
- Add the runtime image properties as appropriate.
Modifying a runtime image configuration¶
To edit a runtime image configuration:
- Open the
Runtime Imagespanel.
- Click the
editicon next to the runtime image name.
- Modify the runtime image properties as desired.
Duplicating a runtime image configuration¶
To duplicate a runtime image configuration:
- Open the
Runtime Imagespanel.
- Click the duplicate icon next to the runtime image configuration.
- Follow the steps in ‘Modifying a runtime image configuration’ to customize the duplicated configuration.
Managing runtime image configurations using the Elyra CLI¶
Runtime image configurations can be added, replaced, and removed with the
elyra-metadata command line interface.
To list runtime image configurations:
$ elyra-metadata list runtime-images Available metadata instances for runtime-images (includes invalid): Schema Instance Resource ------ -------- -------- runtime-image anaconda /Users/jdoe/.../jupyter/metadata/runtime-images/anaconda.json ...
Adding a runtime configuration¶
To add a runtime image configuration for the public
jdoe/my-image:1.0.0 container image:
$ elyra-metadata create runtime-images \ --name "my_image_name" \ --display_name "My runtime image" \ --description "My custom runtime container image" \ --image_name "jdoe/my-image:1.0.0"
Modifying a runtime configuration¶
To replace a runtime image configuration use the
update command:
$ elyra-metadata update runtime-images \ --name "my_image_name" \ --display_name "My runtime image" \ --description "My other custom runtime container image" \ --image_name "jdoe/my-other-image:1.0.1"
Exporting runtime image configurations¶
To export runtime image configurations:
$ elyra-metadata export runtime-images \ --directory "/tmp/foo"
The above example will export all runtime image configurations to the “/tmp/foo/runtime-images” directory.
Note that you must specify the
--directory option.
There are two flags that can be specified when exporting runtime image configurations:
- To include invalid runtime image configurations, use the
--include-invalidflag.
- To clean out the export directory, use the
--cleanflag. Using the
--cleanflag in the above example will empty the “/tmp/foo/runtime-images” directory before exporting the runtime image configurations.
Importing runtime image configurations¶
To import runtime image configurations:
$ elyra-metadata import runtime-images \ --directory "/tmp/foo"
The above example will import all valid runtime image configurations in the “/tmp/foo” directory (files present in any sub-directories will be ignored).
Note that you must specify the
--directory option.
By default, metadata will not be imported if a runtime image configuration instance with the same name already exists. The
--overwrite flag can be used to override this default behavior and to replace any installed metadata with the newer file in the import directory.
Configuration properties¶
The runtime image configuration properties are defined as follows. The string in the headings below, which is enclosed in parentheses, denotes the CLI option name.
Name (display_name)¶
A user-friendly name for runtime image configuration. This property is required.
Example:
My runtime image
Description (description)¶
Description for this runtime image configuration.
Example:
My custom runtime container image
Image Name (image_name)¶
The name and tag of an existing container image in a container registry that meets the stated prerequisites. This property is required.
Example:
jdoe/my-image:1.0.0
Providing only
owner/image:tag uses default registry: Docker Hub registry
In general for other public container registries, the URL shall contain also
registry, therefore the complete URL to be used in this case is:
registry/owner/image:tag
Example:
quay.io/jdoe/my-image:1.0.0
Image Pull Policy (pull_policy)¶
This field will be the pull policy of the image when the image is selected to be part of the pipeline. This field
is optional and not required to run a pipeline. If not selected, the behavior will default to that of the kubernetes
cluster. The three options are :
Always
IfNotPresent
Never
Example:
IfNotPresent
This example will tell the kubelet to only pull the image if it does not exist.
Image Pull Secret (pull_secret)¶
If
Image Name references a container image in a secured registry (requiring credentials to pull the image), create a Kubernetes secret in the appropriate namespace and specify the secret name as image pull secret.
Restrictions:
- Only supported for generic components.
Example:
my-registry-credentials-secret
N/A (name)¶
A unique internal identifier for the runtime image configuration. The property is required when the command line interface is used manage a configuration. An identifier is automatically generated from the user-friendly name when a configuration is added using the UI.
Example:
my_runtime_image | https://elyra.readthedocs.io/en/stable/user_guide/runtime-image-conf.html | 2022-05-16T18:03:44 | CC-MAIN-2022-21 | 1652662512229.26 | [] | elyra.readthedocs.io |
Poisson Factorization¶
This is the documentation page for the Python package poismf, which produces approximate non-negative low-rank matrix factorizations of sparse counts matrices by maximizing Poisson likelihood minus a regularization term, the result of which can be used for e.g. implicit-feedback recommender systems or bag-of-words-based topic modeling.
For more information, visit the project’s GitHub page:
For the R version, see the CRAN page:
Installation¶
The Python version of this package can be easily installed from PyPI
pip install poismf
(See the GitHub page for more details)
Quick Example¶
PoisMF¶
- class
poismf.
PoisMF(k=50, method='tncg', l2_reg='auto', l1_reg=0.0, niter='auto', maxupd='auto', limit_step=True, initial_step=1e-07, early_stop=True, reuse_prev=False, weight_mult=1.0, random_state=1, reindex=True, copy_data=True, produce_dicts=False, use_float=True, handle_interrupt=True, nthreads=-1, n_jobs=None)[source]¶
Bases:
object
Poisson Matrix Factorization
Fast and memory-efficient model for recommender systems based on Poisson factorization of sparse counts data (e.g. number of times a user played different songs), using gradient-based optimization procedures.
- The model idea is to approximate:
- \(\mathbf{X} \sim \text{Poisson}(\mathbf{A} \mathbf{B}^T)\)
Note
If passing
reindex=True, it will internally reindex all user and item IDs. Your data will not require reindexing if the IDs for users and items meet the following criteria:
- Are all integers.
- Start at zero.
- Don’t have any enumeration gaps, i.e. if there is a user ‘4’, user ‘3’ must also be there.
Note
Although the main idea behind this software is to produce sparse model/factor matrices, they are always taken in dense format when used inside this software, and as such, it might be faster to use these matrices through some other external library that would be able to exploit their sparsity.
Note
When using proximal gradient method, this model is prone to numerical instability, and can turn out to spit all NaNs or zeros in the fitted parameters. The TNCG method is not prone to such failed optimizations.
References
fit_unsafe(A, B, Xcsr, Xcsc)[source]¶
Faster version for ‘fit’ with no input checks or castings
This is intended as a faster alternative to
fitwhen a model is to be fit multiple times with different hyperparameters. It will not make any checks or conversions on the inputs, as it will assume they are all in the right format.
Passing the wrong types of inputs or passing inputs with mismatching shapes will crash the Python process. For most use cases, it’s recommended to use
fitinstead.
Note
Calling this will override
produce_dictsand
reindex(will set both to
False).
predict(user, item)[source]¶
Predict expected count for combinations of users (rows) and items (columns)
Note
You can either pass an individual user and item, or arrays representing tuples (UserId, ItemId) with the combinatinons of users and items for which to predict (one entry per prediction).
predict_factors(X, l2_reg=None, l1_reg=None, weight_mult=None, maxupd=None)[source]¶
Get latent factors for a new user given her item counts
This is similar to obtaining topics for a document in LDA. See also method ‘transform’ for getting factors for multiple users/rows at a time.
Note
This function works with one user at a time, and will use the TNCG solver regardless of how the model was fit. Note that, since this optimization method may have different optimal hyperparameters than the other methods, it offers the option of varying those hyperparameters in here.
Note
The factors are initialized to the mean of each column in the fitted model.
topN(user, n=10, include=None, exclude=None, output_score=False)[source]¶
Rank top-N highest-predicted items for an existing user
Note
Even though the fitted model matrices might be sparse, they are always used in dense format here. In many cases it might be more efficient to produce the rankings externally through some library that would exploit the sparseness for much faster computations. The matrices can be access under
self.Aand
self.B.
topN_new(X, n=10, include=None, exclude=None, output_score=False, l2_reg=None, l1_reg=None, weight_mult=1.0, maxupd=None)[source]¶
Rank top-N highest-predicted items for a new user
Note
This function calculates the latent factors in the same way as
predict_factors- see the documentation of
predict_factorsfor details.
Just like
topN, it does not exploit any potential sparsity in the fitted matrices and vectors, so it might be a lot faster to produce the recommendations externally (see the documentation for
topNfor details).
Note
The factors are initialized to the mean of each column in the fitted model.
transform(X, y=None)[source]¶
Determine latent factors for new rows/users
Note
This function will use the same method and hyperparameters with which the model was fit. If using this for recommender systems, it’s recommended to use instead the function ‘predict_factors’ as it’s likely to be more precise.
Note
When using
method='pg'(not recommended), results from this function and from ‘fit’ on the same datamight differ a lot.
Note
This function is prone to producing all zeros or all NaNs values.
Note
The factors are initialized to the mean of each column in the fitted model. | https://poismf.readthedocs.io/en/latest/?badge=latest | 2022-05-16T19:35:36 | CC-MAIN-2022-21 | 1652662512229.26 | [] | poismf.readthedocs.io |
Publish and subscribe overview
Overview of the Pub/Sub building block
The documentation you are viewing is for Dapr v1.2 which is an older version of Dapr. For up-to-date documentation, see the latest version.
Overview of the Pub/Sub building block
Learn how to send messages to a topic with one service and subscribe to that topic in another service
Use scopes to limit Pub/Sub topics to specific applications
Use time-to-live in Pub/Sub messages.
Use Pub/Sub without CloudEvents. | https://v1-2.docs.dapr.io/developing-applications/building-blocks/pubsub/ | 2022-05-16T19:37:23 | CC-MAIN-2022-21 | 1652662512229.26 | [] | v1-2.docs.dapr.io |
The Drops associated with a shipment management route can be viewed on the drops tab of the Shipment Management Route page. Each line shows a destination (delivery or collection) on the delivery route. This is where the order of the route drops can be managed.
Note: By default drops will only be shown for released documents. Drops related to unreleased documents can be seen if the Show Open Orders on Shipment Management option is enabled on the Shipment Management Setup. All entries can be seen by the selecting Entries option on the ribbon of the Shipment Management Route page.
By default, drops will be sequences based on previously saved order of drops. It may be that the default order of drops is not the optimal delivery order, in which case the drops can be re-sequenced. This can be down using the Move Up and Move Down actions under manage on the ribbon.
The Move Up action (or using the shortcut Shift+Ctrl+Up) moves the currently selected line up one place in the delivery order.
The Move Down action (or using the shortcut Shift+Ctrl+Down) moves the currently selected line down one place in the delivery order.
Note: For ease of use, the Move Up and Move Down actions are also on the context menu of the line.
Saving the delivery order is useful in instances where the route has a drop number you would not typically have in the delivery area. As an example, this could be if drops have been moved onto a new route or the delivery order has been changed. To save this delivery order, highlight the drop that has been added and select Save Delivery Order on the ribbon.
This will show a prompt where the Delivery Area to save to can be entered. Once this has been completed in the dialog will show the existing delivery order lines for the selected delivery area code.
It is then possible to specify whether the drop should be delivered first, last, before or after the selected drop. Click OK and the Delivery Area will be updated with the changes.
Saving the delivery order allows the existing Delivery Area to be amended to incorporate the updated delivery order changes.
When reviewing drops,
When reviewing drops it mat be more efficient to move the drop to a different route. Alternatively the drops on the route may exceed the capacity of the vehicle or exceed the hours of the driver utilised on the route. To do this select Move Lines from the ribbon:
This allows a drop to be moved to either an existing route or create a new route.
If moving to an existing Route, enable the Existing Route option, and the select the Route No. for the drop to be transferred to. If you know the route number you’d like to move the Drop to you can fill this in, alternatively select the drop-down arrow and this will open a list of valid existing routes available for you to move the drop to. The list will automatically filter for the relevant location code a date filter greater than or equal to todays work date. You can apply additional filters to find the most suitable route.
If moving to a new route, enable the New Route option and complete the Delivery Area, Shipping Agent, Shipping Agent Service and Shipment Date. This will then create the new route
Notes: When moving a drop to a new route Shipment Management permits you to override the delivery date to one which is not scheduled as per the setup for that Delivery Area Schedule. If you move drops to a route under a different delivery area the delivery order on the route for the drops will be reset to zero, you will then need to move the drops up/down accordingly to give the route an order. If the user tries to create a new route that is a duplicate of an existing route, the system will prompt if the existing route should be used.
When deciding on moving drops it can be useful to know the impact of the drops on the total load. By selecting one or more drops, it is possible to see the Volume, Net Weight and Gross Weight in the Selected Drops area immediately under the lines. The Gross Weight will be highlighted red if it exceeds the maximum load of the vehicle (if assigned).
You can view the full address of a drop by selecting the Address action on the ribbon:
This will display the full drop address in a new dialog:
To see the entries that make up the Drop, select Entries on the ribbon:
This will give you the order and item information being delivered/collected on the drop:
It is possible to view the source document be selecting Show Document from the ribbon:
The is will show the original Sales Order, Sales Return Order, Purchase Order, Purchase Return Order or Transfer Order that the entry relates to.
The following video shows some examples of how drops can be managed: | https://docs.cleverdynamics.com/clever-wms/shipment-management/user-guide/using-routes/drops | 2022-05-16T19:20:07 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.cleverdynamics.com |
ClickUp Docs are effortlessly integrated with your tasks and projects, empowering your team to take action immediately and get work done faster.
Key Features
Pages
Add structure to the content in your Docs using pages and subpages. Add cover images and page icons using popular emoji to add flair to your content!
Tip: Learn more about the text formatting available in Docs.
Import
Bring your content from other apps into ClickUp Docs to use rich formatting and collaborative editing.
Export
Take your Docs outside of ClickUp by exporting them into PDF, HTML, or markdown.
Add comments to Docs to collaborate with your team! Ask and answer questions, provide feedback and approval, or suggest content with rich text formatting, embeds, and attachments.
Doc comments are automatically assigned to Anyone or the first person or Team that you mention in a comment.
Doc Tags
Apply tags to filter and find the Docs you're looking for even faster.
Note: Workspaces on our Free Forever Plan, Unlimited Plan, and Business Plan have 100 uses of Doc tags. Workspaces on the Business Plus Plan and Enterprise Plan have unlimited uses. Learn more about our Plans.
Views
Create a Doc view, or add a Doc to a Location to show it right alongside your views.
Add Docs to the Sidebar
Add Docs to the Sidebar so they appear alongside your Folders and Lists in the ClickUp hierarchy.
Templates
Find and create templates to speed up the writing process and keep your Docs consistent.
Privacy and Sharing
Set permissions and share your beautiful content with others in your organization and the world!
Relationships
Relate Docs and tasks to build connections and relationships between content and your work.
Protecting Docs and Pages
Prevent unwanted changes to your pages and Docs by protecting them.
Note: Protecting Docs and pages is only available on our Business Plus and Enterprise Plans.
Settings and preferences
Learn more about the switches and options available under the hood.
Archive Docs
Done with a Doc, but not ready to delete it? Archive it instead!
While archived Docs will be hidden automatically, they are still saved so you can find them at a later date.
Want to learn more?
Read how Sam created a product brief for their team
Find out how Alex saved time by documenting features and code in Docs
Learn how Jodi-Kay Edwards leveled up her note-taking process with ClickUp Docs
Check out how Karla Massiel uses ClickUp Docs to map customer journeys | https://docs.clickup.com/en/articles/2957090-clickup-docs-overview | 2022-05-16T18:28:58 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.clickup.com |
Running Altus Director and Cloudera Manager in Different Regions or Clouds
A Altus Director instance requires network access to all of the Cloudera Manager and CDH instances it deploys and manages. If Altus Director is installed in the same subnet where you install Cloudera Manager and create CDH clusters, this requirement is satisfied automatically. However, the following alternative configurations are also supported:
- Running Altus Director in one region and Cloudera Manager and the CDH clusters it manages in a different region.
- Installing Altus Director on one cloud provider, such as AWS, and Cloudera Manager and the CDH clusters it manages on a different cloud provider, such as Microsoft Azure or Google Cloud Platform.
- Installing Altus Director in your local network environment (on your laptop, for instance), and Cloudera Manager and the CDH clusters it manages in a cloud environment.
The most secure solution in these cases is to set up a VPN giving Altus Director access to the private subnet. Alternatively, Altus Director can be given SSH access to the instances through the public internet.
When using SSH to configure Cloudera Manager and CDH instances, Altus Director will try to connect to the instances in the following order:
- Private IP address
- Private DNS host name
- Public IP address
- Public DNS host name
The following requirements apply to running Altus Director and clusters in different regions or cloud provider environments when connecting to instances through their public endpoints:
- Your cluster instances must have public IP addresses and your security group must allow SSH access on port 22 from the IP address of the Altus Director host.
- For AWS: If you are creating the cluster with the UI, set Associate public IP addresses to true in the Environment for Cloudera Manager and the cluster. If you are creating the cluster with the CLI, set the associatePublicIpAddresses to true in the configuration file.
- For Microsoft Azure: If you are creating the cluster with the UI, set Public IP to Yes in the instance template for Cloudera Manager and the cluster. If you are creating the cluster with the CLI, set publicIP to Yes in the configuration file.
- While Altus Director can run in a different subnet, Cloudera Manager and the CDH cluster hosts must be in the same subnet.
- Altus Director must have SSH access to the public IP addresses of all cluster instances.
- Altus Director needs to communicate with Cloudera Manager on its API endpoint (typically through HTTP to port 7180) on the private IP address. For security reasons, this endpoint should not be exposed to the public internet.
- For Cloudera Manager instances that were deployed by Altus Director, if Altus Director cannot make a direct connection to the Cloudera Manager API on the private IP address, it will automatically attempt to create an SSH tunnel to the Cloudera Manager API endpoint through an SSH connection to the instance on its public IP address.
- Connecting to an existing deployment of Cloudera Manager through SSH tunneling is not supported. | https://docs.cloudera.com/documentation/director/6-0-x/topics/director_multi-region.html | 2022-05-16T20:02:48 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.cloudera.com |
public class MBStyle extends Object
This class is responsible for presenting the wrapped JSON in an easy to use / navigate form for Java developers:
Access methods should return Java Objects, rather than generic maps. Additional access methods to perform common queries are expected and encouraged.
This class works closely with
MBLayer hierarchy used to represent the fill, line,
symbol, raster, circle layers. Additional support will be required to work with sprites and
glyphs.
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public final JSONObject json
All methods act as accessors on this JSON document, no other state is maintained. This allows modifications to be made cleaning with out chance of side-effect.
public MBStyle(JSONObject json)
json- Map Box Style as parsed JSON
public static MBStyle create(Object json) throws MBFormatException
json- Required to be a JSONObject
MBFormatException- JSON content inconsistent with specification
public MBLayer layer(String id)
id- Id of layer to access
public List<MBLayer> layers()
public List<MBLayer> layers(String source) throws MBFormatException
source- Data source
MBFormatException
public String getName()
public JSONObject getMetadata()
JSONObjectcontaining the metadata, or an empty JSON object the style has no metadata.
public Point2D getCenter()
Pointfor the map center, or null if the style contains no center.
public Number getZoom()
public Number getBearing()
public Number getPitch()
public String getSprite()
public String getGlyphs()
"glyphs": "mapbox://fonts/mapbox/{fontstack}/{range}.pbf"
public Map<String,MBSource> getSources()
MBSourceinstances.
MBSource
public StyledLayerDescriptor transform() | http://docs.geotools.org/stable/javadocs/org/geotools/mbstyle/MBStyle.html | 2022-05-16T19:21:46 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.geotools.org |
About atmopy library¶
Why atmospheric modelling¶
Air drag plays a big role when it comes to aircraft performance or spacecraft orbital decay. Since this force is proportional to air density, several models have tried all along history to model the Earth’s atmosphere to achieve high accurate results. They might consider the air to be static, different solar conditions such us day and night and many other variables.
But atmospheric modelling does not only play a big role in our planet. It is possible to find other celestial bodies in the Solar System such us Venus, who has a very dense and hot atmosphere, or Jupiter moons. Those models, although more difficult to generate because of lack of data compared to Earth one, are important when it comes to spacecraft landing or any other maneuver in which drag is not negligible.
Old models: software archaeology¶
It is quite difficult to understand latest atmospheric mathematical models without having a look at all previous developed models. You should be aware that atmospheric modelling started together with the development of the first rockets and missiles. Data registers from sensors located in those machines, enable to generate different mathematical approaches.
Therefore, the reasons behind old models implemented within atmopy are not because they are useful, but for historical reasons. One of the objectives is also to create some kind of software atmospheric library, not only hosting source code but also original papers.
Finally, having a collection of models implemented in Python improves readability a lot. Some of them were initially implemented in Fortran or C/C++, programming languages which can be a little bit confusing for amateur users. | https://atmopy.readthedocs.io/en/latest/about.html | 2022-05-16T19:23:20 | CC-MAIN-2022-21 | 1652662512229.26 | [] | atmopy.readthedocs.io |
Customize device settings
Customize the settings for user devices at the device level such as device lifespan, auto-login, screen resolution, deployment, and other advanced options.
These configurations are available for editing only if the settings are enabled by the Control Room administrator.
Procedimento
- Navigate to .
- Select the setting tab for configuration at the device level:
- Update the Nickname and Description parameters, and select either Persistent or Temporary in the General Settings.Device lifespan can be Persistent or Temporary. Temporary devices are created to support non-persistent virtual desktop infrastructure (VDI) and are automatically deleted after a specified time when the device is disconnected from the Control Room. You can specify the time to automatically delete temporary devices in the Control Room devices settings.
Persistent devices are created to support persistent virtual desktop infrastructure (VDI) and are not deleted after the device is disconnected from the Control Room.
- To update the settings for Auto Login to run bots on user sessions at the device level, select Use custom settings.Either create new user sessions or use existing user sessions after a bot finishes running.You can customize the auto login settings at the device level only if the allow a user to change these settings individually for each device in devices page option is enabled by the Control Room administrator.
- Update the Screen resolution settings for existing device sessions.The screen resolution settings are used to begin user sessions at the resolution specified in this setting. You can customize the screen resolution settings at the device level only if the allows users to override the resolution option is enabled by the Control Room administrator.
- Update the Deployment settings for single or multiple user devices and use the Use custom settings option to set the threshold settings for CPU and memory utilization at the device level.For single users installed at the system level, the default deployment type is regular; you can change this to RDP. For multiple users installed at system level, the default deployment is RDP. For single users installed at the user level and multiple users at the system level, you cannot updated the deployment type.For multiple user devices, additionally configure the Concurrent sessions supported. The minimum concurrent session value is 2.You can customize the threshold settings only if the enable threshold settings option is enabled by the Control Room administrator.
- Update the Advanced Options at the device level such as auto-login timeout, bot launcher JVM options, console readiness wait time, bot response wait time, RDP command options and log collection by selecting the Use custom settings option.You can customize the advanced options only if the allow changes in the edit device page option is enabled by the Control Room administrator.
- Click Save changes.If the user device is offline, these changes take effect when the device reconnects to the Control Room. | https://docs.automationanywhere.com/pt-BR/bundle/enterprise-v2019/page/enterprise-cloud/topics/control-room/devices/edit-devices.html | 2022-05-16T19:36:09 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.automationanywhere.com |
Configuring and Running Cloudera Director The topics in this section explain how to configure and run Cloudera Director. Continue reading: Auto-Repair for Failed or Terminated Instances Deploying a Java 8 Cluster Setting Cloudera Director Properties Pausing Cloudera Director Instances Configuring Cloudera Director for a New AWS Instance Type Configuring Cloudera Director to Use Custom Tag Names on AWS Categories: Altus Director | Configuring | All Categories Creating Highly Available Clusters With Cloudera Director Auto-Repair for Failed or Terminated Instances | https://docs.cloudera.com/documentation/director/2-6-x/topics/director_advanced_config.html | 2022-05-16T19:22:13 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.cloudera.com |
flytekit.workflow¶
- flytekit.workflow(_workflow_function=None, failure_policy=None, interruptible=False)[source]¶
This decorator declares a function to be a Flyte workflow. Workflows are declarative entities that construct a DAG of tasks using the data flow between tasks.
Unlike a task, the function body of a workflow is evaluated at serialization-time (aka compile-time). This is because while we can determine the entire structure of a task by looking at the function’s signature, workflows need to run through the function itself because the body of the function is what expresses the workflow structure. It’s also important to note that, local execution notwithstanding, it is not evaluated again when the workflow runs on Flyte. That is, workflows should not call non-Flyte entities since they are only run once (again, this is with respect to the platform, local runs notwithstanding).
Example:
@workflow def my_wf_example(a: int) -> typing.Tuple[int, int]: """example Workflows can have inputs and return outputs of simple or complex types. :param a: input a :return: outputs """ x = add_5(a=a) # You can use outputs of a previous task as inputs to other nodes. z = add_5(a=x) # You can call other workflows from within this workflow d = simple_wf() # You can add conditions that can run on primitive types and execute different branches e = conditional("bool").if_(a == 5).then(add_5(a=d)).else_().then(add_5(a=z)) # Outputs of the workflow have to be outputs returned by prior nodes. # No outputs and single or multiple outputs are supported return x, e
Again, users should keep in mind that even though the body of the function looks like regular Python, it is actually not. When flytekit scans the workflow function, the objects being passed around between the tasks are not your typical Python values. So even though you may have a task
t1() -> int, when
a = t1()is called,
awill not be an integer so if you try to
range(a)you’ll get an error.
Please see the user guide for more usage examples.
- Parameters
_workflow_function – This argument is implicitly passed and represents the decorated function.
failure_policy (Optional[flytekit.core.workflow.WorkflowFailurePolicy]) – Use the options in flytekit.WorkflowFailurePolicy
interruptible (bool) – Whether or not tasks launched from this workflow are by default interruptible | https://docs.flyte.org/projects/flytekit/en/latest/generated/flytekit.workflow.html | 2022-05-16T19:40:22 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.flyte.org |
Viewing Chronograf dashboards in presentation mode
This page documents an earlier version of Chronograf. Chronograf v1.9 is the latest stable version. View this page in the v1.9 documentation.
Presentation mode allows you to view Chronograf in full screen, hiding the left and top navigation menus so only the cells appear. This mode might be helpful, for example, for stationary screens dedicated to monitoring visualizations.
Entering presentation mode manually
To enter presentation mode manually, click the icon in the upper right:
To exit presentation mode, press
ESC.
Using the URL query parameter
To load the dashboard in presentation mode, add URL query parameter
present=true to your dashboard URL. For example, your URL might look like this:
Note that if you use this option, you won’t be able to exit presentation mode using
ESC.
Was this page helpful?
Thank you for your feedback!
Support and feedback
Thank you for being part of our community! We welcome and encourage your feedback and bug reports for Chronograf and this documentation. To find support, use the following resources: | https://docs.influxdata.com/chronograf/v1.6/guides/presentation-mode/ | 2022-05-16T19:06:52 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.influxdata.com |
Rhino toolbars can be edited to customize your workspace.
To customize a toolbar
Right-click a tab, the group handle area, or click the options/gear icon.
Toolbar Options menu
Show Toolbar
Opens a toolbar as a free-standing group.
New Button
Adds a new button and opens the Button Editor.
New Separator
Inserts a button separator.
Note: Separators only show in horizontal, one-row toolbars. They act like buttons, so you can move or delete them.
Edit Button
Opens the selected button in the Button Editor.
New Tab
Creates a new blank tabbed toolbar.
Show or Hide Tabs
Shows/hides the toolbar in the current group.
Size to content
Expands a toolbar to show all buttons.
Properties
Opens the Toolbar Properties dialog box.
See also
Using toolbars and buttons
Edit toolbar bitmap
Toolbars Options
Rhinoceros 5 © 2010-2015 Robert McNeel & Associates. 17-Sep-2015 | https://docs.mcneel.com/rhino/5/help/en-us/toolbarsandmenus/customize_toolbars.htm | 2022-05-16T17:59:24 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.mcneel.com |
Last updated 14th April, 2022.
A Docker Registry is a system that lets you store and distribute your Docker images. The best known Registry is the official Docker Hub, where you can find official public images such as Alpine, Golang or Debian.
Today, OVHcloud allows you to spawn your own authenticated Docker Registry where you can privately store your Docker images. This is the best way to use your private images with our OVHcloud Managed Kubernetes Service offer without exposing them to everyone.
Objective
OVHcloud Managed Private Registry service provides you a managed, authenticated Docker registry where you can privately store your Docker images. This guide will explain how to create your Private Registry.
Instructions
First, log in to the OVHcloud Control Panel.
In the left menu, in the
Containers & Orchestrationsection, select
Managed Private Registry.
Then click on
Create a private registry.
In the following menu, choose a region to deploy your private registry in, and click on
Choose the registry name (
my-registryin my example), and click on
Choose your plan between the three available plans, and click on
Your private registry is being created...
When status switches to
OK, click on the right
Generate identification details.
Then click on
Confirmto generate new credentials.
Credentials will be shown on the page. Please write then down, you will need them in order to use your private registry.
Congratulations, you have now a working OVHcloud Managed Private Registry.
Go further
To go further you can look at our guide on connecting to the U. | https://docs.ovh.com/gb/en/private-registry/creating-a-private-registry/ | 2022-05-16T19:29:48 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.ovh.com |
How long does InspectAll keep my data?
We want to make sure that your data is available to you whenever you need it for as long as you have an account with InspectAll, so we hold all of your text, image, and other media data until you delete it or your account is canceled.
To learn more about how InspectAll saves information, click here. | http://docs.inspectall.com/article/140-how-long-does-inspectall-keep-my-data | 2017-12-11T02:21:27 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.inspectall.com |
Reconstructing Genomes from Metagenomic Samples Using the RAST Binning Service (RBS)¶
The ability to reconstruct fairly complete genomes from metagenomic samples is almost certainly a key technology that will accelerate our characterization of the tree of life. A number of research groups have now generated valuable reconstructions of genomes from metagenomic samples, and this is having an impact on estimating genomic sequences for unculturable organims. For background we particularly recommend
Recovery of nearly 8,000 metagenome-assembled genomes substantially expands the tree of life.
by
Parks DH, Rinke C, Chuvochina M, Chaumeil PA, Woodcroft BJ, Evans PN, Hugenholtz P, Tyson GW. Nat Microbiol. 2017 PMID: 28894102
but there have been several other excellent efforts that have been reported over the last few years.
RBS is a server that grew out of the RAST annotation project; it is now supported and maintained as part of the PATRIC project.
The Server: an Overview¶
As input to the server, a user supplies a metagenomic sample in one of the following forms:
- two files containing paired-end reads,
- the ID for a sample in the sample archives, or
- a file of contigs produced by a cross-assembler. More precisely it can be a set of cross-assemblies, where each cross-assembly was constructed from the reads associated with one or more samples. It should be noted that each cross-assembly should include an estimate of coverage for every contig. You can use crAss to generate the input cross-assemblies, you can use your favorite assembler software, or you can use the assembly server in the PATRIC web site.
The output is a set of Genome Packages. Each genome package is a Rast Genome along with an Evaluation Report which estimates the quality of the RAST genome.
The goal is to reconstruct complete (or near-complete) genomes from the sample data. We will constrain ourselves to only extracting genomes with an average coverage of at least 20, but you may wish to change that somewhat arbitrary value. In any event, you should note that, at 20-fold coverage, the regions in the actual genome that contain no large repeats tend to be unbroken. Reconstructing a complete genome including the large repeats is usually quite difficult, so we shoot for capturing the regions between large repeats.
It should be noted that the number of genomes that we expect to extract will depend on the samples we use as input. If we wish to explore the proposed technology, it would make sense to begin with a very limited collection of samples. On the other hand, if the goal were to extract as many new gnomes as possible, one might select hundreds (or even thousands) of samples as input. You should carefully note that this document describes one approach to extracting genomes from samples. There are a number of approaches emerging, and it is likely that aspects of the process we describe may well turn out to be non-optimal. We do hope that you will explore the approach and have fun in the process.
Here then is basically how we build the reconstructed genomes.
Step 1: Construct a Representative Collection of a Universal Protein¶
First, pick a functional role that satisfies the following criteria:
- Exactly one gene encodes the role in all (or at least most) prokaryotic genomes.
- A role encoded by a long gene is preferred.
We picked
Phenylalanyl-tRNA synthetase alpha chain (EC 6.1.1.20)
but there are other good choices. Once we have selected a role, which we will call the seed role, we construct a blast database containing a representative collection of protein sequences that implement the seed role. This can be done using command-line access to the PATRIC collection of genomic data. We recommend using the SEEDtk routine
bins_protein_database -R IdOfProtein OutputFileName
to compute a representative subset of your collection (which should be fairly large). This script uses the PATRIC database to find all occurrences of the identified protein. For Phenylalanyl-tRNA synthetase alpha chain, the command would be
bins_protein_database -R PhenTrnaSyntAlph seed_protein.fna
You can, of course, use any genome database to find the proteins and any utility that produces a DNA fasta file of the relevant sequences. In our program, the sequence ID in the FASTA file is the feature ID, and the comment is the genome ID and scientific name.
Step 2: For Each Sample, Construct a set of Contig Bins¶
The second step is to find all instances of the seed role in your collection of cross-assemblies. This is done using blast. For each sample you blast the representative set of instances of the seed role against the cross-assembly associated with the sample. Filter out hits which do not adequately cover the known seed role instance. Similarly, remove hits against contigs that have less that 20-fold average coverage. The hits that remain each represent a bin that will eventually be expanded into a reconstructed genome. A bin is normally thought of as containing a single genome, but when we cannot reasonably resolve two bins, we occasionally merge them into one This data is represented by a table containing
- Bin Id
- Sample Id
- Contig Id
- Start of hit
- End of hit
- Coverage of contig containing hit
Step 3: For Each Sample, Compute a Set of Reference Genomes¶
We have computed a set of bins, where each bin conceptually contains a single seed role and the contig that contains that seed role. Our overall objective is to determine which contigs from the cross-assembly associated with the sample should be placed into each bin. That is, we need to split the contig pool into subsets that go with each bin, and an extra set of contigs that could not be placed (there may be many of these coming from the non-abundant organisms included in the sample, as well as those contigs whose placement would be ambiguous). There are several possible strategies that could be used to place contigs into bins. We have elected to use reference genomes. This involves associating a known, sequenced reference genome with each of the bins. These reference genomes play a central role in the next step, which involves actually splitting up the pool of contigs in the cross-assembly.
So, how should we go about assigning a sequenced reference genome to each bin? We will attempt to find a reference genome that is phylogenetically close to each bin. To be useful, the reference genome will need to be substantially closer to the genome represented by the bin than to any of the genomes represented by other bins. Here we can used estimates of phylogenetic distance based on the instances of the seed roles that are stored in the bins. It is clear that in many situations, it will be impossible to find such a discriminating genome from existing genome repositories. When a discriminating reference genome cannot be found, the corresponding bin should be marked and placed aside (until a larger collection of potential reference genomes containing a useful reference can be generated).
We pick the most dsicriminating reference genome for each bin, but nothing prohibits one from selecting multiple reference genomes for a bin.
Step 4: For Each Sample, Place Contigs Into Bins¶
Once reference genomes have been determined for each bin, we can partition the contigs from the cross-assembly for each sample into the bins. The use of reference bins involves computing some measure of the similarity between a contig and the set of contigs in reference genomes. This similarity measure can be based on any of a number of algorithms, including the use of blast scores or the number of k-mers in common. Assuming that we have a suitable measure, we can build a scheme for partitioning the contgs in a cross-assembly. For our purposes, we measure the similarity between two DNA contigs as follows:
- We translate the DNA to protein sequence (using 6-frames).
- We count the number of amino acid 12-mers in common, and this is the score.
- We consider the two DNA contigs as similar if they generate a score of 10 or more amino acid 12-mers.
We consider the similarity score beween one contig and a set of contigs to be the maximum score between the contig and a contig from the set.
A contig C should be copied into bin B if and only if
- the similarity of C against the contigs of the reference genomes for B exceeds a specified threshold, and it is greater than the similarity to other reference genomes. That is, C is put into the bin belonging to reference genome G if C is most similar to G and the similarity exceeds the threshold.
- the difference in coverage between C and the average coverage in the similar contig from B is small.
Using this simple logic, we have experimented with a range of thresholds.
Step 5: Evaluate the Quality of Each Bin¶
At this point, each bin contains a set of contigs that have tentatively been labeled as coming from a single clonal population. There are numerous possible sources of error, so how might we evaluate the quality of a bin? Fortunately, several such tools exist. The most notable is checkM (which we have found extremely useful):
Parks DH, Imelfort M, Skennerton CT, Hugenholtz P, Tyson GW. 2014. Assessing the quality of microbial genomes recovered from isolates, single cells, and metagenomes. Genome Research, 25: 1043-1055.
We have written a tool that can be used to produce a single score that measures the quality of a prokaryotic genome. Using these tools, one can simply keep only high-scoring bins. This is an important point: as long as your quality assessments are reasonably accurate, you can throw out numerous bins and still be able to harvest thousands of new, fairly accurate, genomes.
Step 6: For Each Bin, Remove Questionable Contigs¶
At the end of Step 5, we have accumulated a set of bins, along with estimates of quality. We have developed a simple test for attempting to spot contaminating contigs, so we run our algorithm, removing highly questionable contigs (when such removals improve our estimates of quality).
Step 7: Annotate High-Quality Bins¶
We submit the contigs in each high-quality bin to the PATRIC server to annotate the genome associated with the bin.
Summary¶
In this document, we sketch out a plan for reconstructing thousands of genomes from metagenomic samples. There are several alternative plans being developed by the research community. Here is a brief summary of a plan implemented by a European team that included Bjorn Nielsen, Dusko Ehrlich and Peer Bork (see “Identification and Assembly of Genomes and Genetic Elements in Complex Metagenomic Samples Without Using Reference Genomes”). ‘baits’ for identifying groups of genes that correlate (PCC > 0.9) to the abundance profile of the bait genes. The fixed PCC distance threshold is called a canopy. To center the canopy on a co-abundance gene group (CAG), the median gene abundance profile of the genes within the original seed canopy (or subsequent canopies) is used iteratively to recapture a new canopy until it settles on a particular profile. The gene content of a settled canopy.
It seems likely that we will be able to harvest thousands of genomes from metagenomic samples. The number of potentially useful samples is growing exponentially, the desire to gain genomes for unculturable organisms is growing, and our ability to extract reconstructed genomes is improving. I believe that the quality produced by current algorithms has reached the point where it is “good enough”. Further improvements will inevitably increase the fraction of bins that can be salvaged. | https://docs.patricbrc.org/tutorial/binning_overview.html | 2017-12-11T02:22:27 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.patricbrc.org |
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here.
Class: Aws::CodeDeploy::Types::StopDeploymentOutput
- Defined in:
- gems/aws-sdk-codedeploy/lib/aws-sdk-codedeploy/types.rb
Overview
Represents the output of a StopDeployment operation.
Instance Attribute Summary collapse
- #status ⇒ String
The status of the stop deployment operation:.
- #status_message ⇒ String
An accompanying status message.
Instance Attribute Details
#status ⇒ String
The status of the stop deployment operation:
Pending: The stop operation is pending.
Succeeded: The stop operation was successful.
#status_message ⇒ String
An accompanying status message. | http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/CodeDeploy/Types/StopDeploymentOutput.html | 2017-12-11T02:24:13 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.aws.amazon.com |
Help Center
Local Navigation
Search This Document
CSS property: top
The top property is a positioning property that specifies the distance of the top edge of an element relative to the top edge of the containing element.
Considerations
If the position property for the element has a value of static, then the top property has no effect.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/6153/Top_564528_11.jsp | 2012-05-26T20:02:47 | crawl-003 | crawl-003-019 | [] | docs.blackberry.com |
Scripted authentication
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
Scripted authentication
Splunk ships with support for three authentication systems: Splunk's built-in system, LDAP and a new.".. | http://docs.splunk.com/Documentation/Splunk/3.4.13/Admin/ScriptedAuthentication | 2012-05-26T18:34:13 | crawl-003 | crawl-003-019 | [] | docs.splunk.com |
Scripted Alerts
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents = This option has been deprecated and is no longer used as of Splunk 3.4.6.
- $8 = file where the results for this search are stored (contains raw results).
Note: If there are no saved tags, $7 becomes the name of the file containing the search results ($8). This note is applicable to Splunk versions 3.3-3.5. | http://docs.splunk.com/Documentation/Splunk/3.4.13/Admin/ScriptedAlerts | 2012-05-26T18:34:06 | crawl-003 | crawl-003-019 | [] | docs.splunk.com |
.0 # #>] *.. schedule = <string> * Cron style schedule (i.e. */12 * * * *).. = a list of tags belonging to this saved search. * $8 = file where the results for this search are stored (contains raw results). Note: If there are no saved tags, $7 becomes the name of the file containing the search results ($8).
# Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3. [Invalid 3months notshared db test2] action_rss = 0 search = * search = * error Bus startminutesago=15 relation = greater than schedule = */12 * * * * sendresults = 1 userid = 1 viewstate.prefs.selectedKeys = source host sourcetype [kCGError 3months shared db test1] action_rss = 0 search = * search = * search = *] search =. | http://docs.splunk.com/Documentation/Splunk/3.4.13/Admin/Savedsearchesconf | 2012-05-26T18:34:00 | crawl-003 | crawl-003-019 | [] | docs.splunk.com |
Thread local variables must be declared at a global scope.
program Produce; procedure NoTLS; threadvar x : Integer; begin end; begin end.
A thread variable cannot be declared local to a procedure.
program Solve; threadvar x : Integer; procedure YesTLS; var localX : Integer; begin end; begin end.
There are two simple alternatives for avoiding this error. First, the threadvar section can be moved to a local scope. Secondly, the threadvar in the procedure could be changed into a normal var section. Note that if compiler hints are turned on, a hint about localX being declared but not used will be emitted. | http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_local_threadvar_xml.html | 2012-05-27T01:44:17 | crawl-003 | crawl-003-019 | [] | docs.embarcadero.com |
This error message is issued if you try to assign a local procedure to a procedure variable, or pass it as a procedural parameter.
This is illegal, because the local procedure could then be called even if the enclosing procedure is not active. This situation would cause the program to crash if the local procedure tried to access any variables of the enclosing procedure.
program Produce; var P: Procedure; procedure Outer; procedure Local; begin Writeln('Local is executing'); end; begin P := Local; (*<-- Error message here*) end; begin Outer; P; end.
The example tries to assign a local procedure to a procedure variable. This is illegal because it is unsafe at runtime.
program Solve; var P: Procedure; procedure NonLocal; begin Writeln('NonLocal is executing'); end; procedure Outer; begin P := NonLocal; end; begin Outer; P; end.
The solution is to move the local procedure out of the enclosing one. | http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_local_procvar_xml.html | 2012-05-27T01:44:10 | crawl-003 | crawl-003-019 | [] | docs.embarcadero.com |
This error message is given when the length of a line in the source file exceeds 255 characters.
Usually, you can divide the long line into two shorter lines.
If you need a really long string constant, you can break it into several pieces on consecutive lines that you concatenate with the '+' operator. | http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_line_too_long_xml.html | 2012-05-27T01:44:00 | crawl-003 | crawl-003-019 | [] | docs.embarcadero.com |
C++ lets you redefine the actions of most operators, so that they perform specified functions when used with objects of a particular class. As with overloaded C++ functions in general, the compiler distinguishes the different functions by noting the context of the call: the number and types of the arguments or operands.
All the operators can be overloaded except for:
. .* :: ?:
The following preprocessing symbols cannot be overloaded.
# ##
The =, [ ], ( ), and -> operators can be overloaded only as nonstatic member functions. These operators cannot be overloaded for enum types. Any attempt to overload a global version of these operators results in a compile-time error.
The keyword operator followed by the operator symbol is called the operator function name; it is used like a normal function name when defining the new (overloaded) action for the operator.
A function operator called with arguments behaves like an operator working on its operands in an expression. The operator function cannot alter the number of arguments or the precedence and associativity rules applying to normal operator use. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/ops_overload_xml.html | 2012-05-26T23:54:48 | crawl-003 | crawl-003-019 | [] | docs.embarcadero.com |
.
Support for rational numbers can be implemented in Python. For an example, see the Rat module, provided as Demos/classes/Rat.py in the Python source distribution..
See Also: | http://docs.python.org/release/2.1.2/lib/module-mpz.html | 2012-05-27T02:17:38 | crawl-003 | crawl-003-019 | [] | docs.python.org |
Configure distributed search via distsearch.conf
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
Configure distributed search via distsearch.conf
The most advanced specifications for distributed are available in distsearch.conf. Edit this file in
$SPLUNK_HOME/etc/system/local/, or your own custom application directory in
$SPLUNK_HOME/etc/apps/. For more information on configuration files in general, see how configuration files work.
Configuration
Spol.'
allowDescent = [True |.
- Server names are the 'server name' that is created for you at startup time.
blacklistURLs =
- Comma-delimited lists of blacklisted discovered servers.
- You can black list on server name (above) or server URI (x.x.x.x:port).
Example
. | http://docs.splunk.com/Documentation/Splunk/3.4/Admin/ConfigureDistributedSearchViaDistsearchconf | 2012-05-27T01:31:02 | crawl-003 | crawl-003-019 | [] | docs.splunk.com |
User Guide
- Quick Help
- Tips and shortcuts
- Phone
- Voice commands
- Messages
- Files and attachments
- Media
- Ring tones, sounds, and alerts
- Browser
- Calendar
- Contacts
- Clock
- Tasks and memos
- Typing
- Keyboard
- Language
- Screen display
- Applications
- BlackBerry ID
- BlackBerry Device Software
- Manage Connections
- Bluetooth 9220 Smartphone - 7.1
How to: Power and battery
Related information
Next topic: Inserting the SIM card, media card, and battery
Previous topic: About the battery
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/41285/1489526.jsp | 2014-10-20T08:26:01 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
Hibernate.orgCommunity Documentation.0.0.Final</version>
</dependency>
Example 1.2. Optional Maven dependencies for Hibernate Search
<dependency>
<!-- If using JPA (2), add: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>4.0.0.CR6</version>
</dependency>
<!-- Additional Analyzers: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-analyzers</artifactId>
<version>4.0.0.Final</version>
</dependency>
<!-- Infinispan integration: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-infinispan</artifactId>
<version>4.0..0.0.Final \ -DarchetypeRepository=.7. Using Hibernate Session to index data
FullTextSession fullTextSession = Search.getFullTextSession(session);
fullTextSession.createIndexer().startAndWait();
Example 1.8. 1.11. 9.4, “Sharding indexes”). | http://docs.jboss.org/hibernate/search/4.0/reference/en-US/html/getting-started.html | 2014-10-20T08:33:11 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.jboss.org |
ORAC-DR is copyright (C) 1998-2003 PPARC (the UK Particle Physics and Astronomy Research Council). It is distributed by Starlink under the GNU General Public License as published by the Free Software Foundation.
If you have used ORAC-DR for your data reduction please acknowledge it
in your publications. It costs you nothing and gives us a warm fuzzy
feeling.
ORAC-DR: Overview and General IntroductionORAC-DR: Overview and General Introduction | http://docs.jach.hawaii.edu/star/sun230.htx/node53.html | 2014-10-20T08:06:16 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.jach.hawaii.edu |
cupy.searchsorted¶
- cupy.searchsorted(a, v, side='left', sorter=None)[source]¶
Finds indices where elements should be inserted to maintain order.
Find the indices into a sorted array
asuch that, if the corresponding elements in
vwere inserted before the indices, the order of
awould be preserved.
- Parameters
a (cupy.ndarray) – Input array. If
sorteris
None, then it must be sorted in ascending order, otherwise
sortermust be an array of indices that sort it.
v (cupy.ndarray) – Values to insert into
a.
side – {‘left’, ‘right’} If
left, return the index of the first suitable location found If
right, return the last such index. If there is no suitable index, return either 0 or length of
a.
sorter – 1-D array_like Optional array of integer indices that sort array
ainto ascending order. They are typically the result of
argsort().
- Returns
Array of insertion points with the same shape as
v.
- Return type
-
Note
When a is not in ascending order, behavior is undefined.
See also | https://docs.cupy.dev/en/stable/reference/generated/cupy.searchsorted.html | 2021-07-23T20:36:35 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cupy.dev |
DataStax Community release notes
Release notes for DataStax Community.
New features, improvements, and notable changes are described in What's new in Cassandra 2.1.
The latest version of Cassandra 2.1 is 2.1.13. The CHANGES.txt describes the changes in detail. You can view all version changes by branch or tag in the branch drop-down list:
| https://docs.datastax.com/en/cassandra/2.1/cassandra/releaseNotes.html | 2017-12-11T09:35:52 | CC-MAIN-2017-51 | 1512948513330.14 | [array(['images/screenshots/rn_c_changes_tag.png', None], dtype=object)] | docs.datastax.com |
Set in Stone
A cooperative adventure game for parents to play with their kids that respectively challenges both.
Fyo Table
A gaming platform embedded in a table in which you use your cell phone as the game controller.
Playlist of Game Jams
All of the game jams I've done over the years in order from most recent to oldest | http://docs.teamopifex.com/projects/ | 2017-12-11T09:09:26 | CC-MAIN-2017-51 | 1512948513330.14 | [] | docs.teamopifex.com |
Advanced Upgrade Maneuvers
Online Upgrade
For the vast majority of customers, the actual upgrade process itself is relatively short. Quite often the database migration can be completed in less than an hour. But for customers with larger data sets who are upgrading from a version earlier than 2015.1.x, the amount of schema changes involved make it infeasible to complete the upgrade in a reasonable downtime window.
For this reason, we offer an alternate "online" mode for the upgrade (sometimes called a "phased" upgrade), which allows the majority of the upgrade to be completed without you having to take downtime on your production system. There will still need to be a "cutover" phase that will require you to take downtime, but the downtime will be vastly reduced. This comes at a cost, however; an online upgrade is far more complicated than a traditional upgrade and will typically require close communication between us, your application developers, and your DBA.
The documentation given here is intended merely as a reference; every customer who has enough data to warrant an online upgrade is going to have unique needs. We will instruct you on configuration options to make the online upgrade faster, and will likely need to discuss many options that can improve performance by skipping steps.
Configuration
The online upgrade will more or less require copying the data in every table in Engine to a new version of that table. This means that if you use the default behavior of the tool, the upgrade tool will essentially double the size of your database.
Some customers find this hard to manage, instead finding it easier to let the upgrade tool copy the data to a different database instance or even a different server. To enable this behavior, you will need to specify two connection strings in your configuration file instead of one. The setting names for these connection strings are
SourceSystemDatabaseConnectionString and
TargetSystemDatabaseConnectionString, representing the original database and the new database, respectively.
Generally speaking, customers with enough data to justify an online upgrade would probably benefit from built-in performance enhancements through configuration options. The specific settings are beyond the scope of this documentation; contact us directly and we will recommend settings based on your system specifications.
Other Caveats
The online upgrade works by copying data from your current tables to updated versions of those tables. From the earliest days of Engine, every row in Engine has had an "update_dt" column that is updated with the current time (in the Engine server's current timezone) whenever that row is created or updated. The values in that column are used to discern which rows have been updated since the last time the tool was run, and thus need to be copied to the new tables.
One thing that the upgrade tool currently cannot do very well, however, is track deletes. Whether or not this is a problem depends on your particular integration. Many customers do not expose Engine functionality that can delete data to their end users. If you do, however, we would strongly recommend that you consider disabling that functionality while you are doing the migration. As always, feel free to discuss these issues with us.
Step-by-Step
The phased upgrade will require running the tool at least four times, with different arguments each time. The overall process is discussed below. For each phase, we include an in-depth description of its purpose and include the command to execute it. Any given step of the upgrade can be cancelled at any time; the upgrade periodically saves progress and does as much as it can not to repeat work it has already done. That said, you cannot undo a step and go back to an earlier one. In particular, during phase 3, the cutover phase, you must make sure the step completes; once you have started it you cannot go back to your old version of Engine without restoring from a backup.
Now we will describe the process step-by-step. Note that there are more logical steps than phases, so the phase number won't always match the number of the step.
- A gentle reminder: Back up your database before running any of the steps. You will not need to back up again, however, until the cutover step.
- [Phase One] The first step is to add indexes to your current production schema that will be used to support later steps. The tool can do this automatically, but you may require some downtime to execute this step, depending on your DBMS. The step can always be completed online if you are running SqlServer. If you are running MySQL, you must explicitly specify OnlineIndexingSupported in your configuration file with a value of "true" to enable online indexing, but this will only work if your database server has always been running MySQL 5.6 or later. (If you are running the correct version of MySQL but the database was originally MySQL 5.5 or earlier, you will need to talk to us first.) If you are running Oracle, you will need to consult your specific version's documentation and your license to see whether online indexing is supported; online indexing is only enabled if your license permits it. If your server does support it, you will also need to add OnlineIndexingSupported to your configuration file with a value of "true". If you are running PostgreSQL, you will have to discuss your possibilities with us.
When you are ready to run the indexing phase, first bring your engine server down if your DBMS requires you to take downtime based on the above considerations. Then, when you run the upgrade tool, include the argument
EngineInstall.exe EngineInstall.xml -p1index
java -Dlogback.configurationFile=logback.xml -cp "lib/*" RusticiSoftware.ScormContentPlayer.Logic.Upgrade.ConsoleApp EngineInstall.xml -p1index.
- [Phase Two] The second phase can always be done online, and will comprise the bulk of the migration. This step will copy rows from your source tables to the "target" tables (which may or may not be in the same database; see the configuration section, above. The argument to run this phase is
EngineInstall.exe EngineInstall.xml -p2onlinerowcopy
java -Dlogback.configurationFile=logback.xml -cp "lib/*" RusticiSoftware.ScormContentPlayer.Logic.Upgrade.ConsoleApp EngineInstall.xml -p2onlinerowcopy.
- Once phase two completes for the first time, you are ready to schedule the cutover phase. In the meantime, we recommend running phase two once daily to bring in any new rows that have come in since the last time you ran it. On the day of the cutover phase, we recommend running phase two every couple of hours to minimize the number of rows the cutover phase will have to process.
- When the scheduled cutover time arrives, bring down your current Engine server. Nothing can use Engine during the cutover.
- Back up your current production database right after you bring the Engine server down.
- [Phase Three] You are now ready to run phase three. Phase three will run phase two to bring in any rows that have come in since the last time phase two was run; there is no need to run it yourself. Phase three will also add foreign key relationships to the target schema, clean up orphan rows, and do a number of audits to make sure the upgrade completed successfully. The argument to run phase three is
EngineInstall.exe EngineInstall.xml -p3offlineschema
java -Dlogback.configurationFile=logback.xml -cp "lib/*" RusticiSoftware.ScormContentPlayer.Logic.Upgrade.ConsoleApp EngineInstall.xml -p3offlineschema.
- Once phase three completes, you are ready to deploy the current version of Engine. If you did a two-database upgrade, make sure your new version of Engine is pointing at the new database. Bring the engine server up internally so that you can smoke test the new deployment before bringing up the server to the outside world.
- [Phase Four] The last upgrade phase performs a number of migrations that could not be performed until after the schema was in a known state. Phase four can be completed online, but many customers go ahead and run this during their cutover phase. The important consideration here is that some xAPI data may not be available until phase four completes. Regardless of whether you are running phase four while Engine is online or not, the argument needed to run it is
EngineInstall.exe EngineInstall.xml -p4onlinedeferred
java -Dlogback.configurationFile=logback.xml -cp "lib/*" RusticiSoftware.ScormContentPlayer.Logic.Upgrade.ConsoleApp EngineInstall.xml -p4onlinedeferred.
Integrated Upgrade
Most customers will run the upgrade tool as a command-line tool, but for many others, this solution is impractical. For example, some Engine customers integrate Engine into an "off-the-shelf" LMS that is sold to other customers and may even have its own upgrade tool. For that reason, we have made it possible to integrate the upgrade tool into another .NET or Java application.
Design Considerations
It is absolutely imperative, even when integrating the upgrade tool into your application, that you back up your production data before using the upgrade tool with it. The upgrade tool makes changes to the schema, and despite our best efforts there are always going to be certain failure modes (e.g., power failures at certain moments) from which we cannot recover. If your application's upgrade tool does not already include a backup step, we strongly recommend adding one.
We also recommend against doing "silent upgrades"—that is, having the upgrade tool automatically run in a context where the upgrading customer is not aware that it would be running (e.g., at application startup). If this is part of your workflow, make sure that your users understand the risks and are taking frequent backups.
Overview
Initial Set-up
The upgrade tool requires the use of global environment; most notably, changing the way configuration settings and the integration layer work. To do this, all upgrade code must be enclosed in a block like the following:
using (EngineUpgradeManager upgradeManager = new EngineUpgradeManager()) { // All configuration of the upgrade goes here // As does running the actual upgrade }
The
EngineUpgradeManager constructor itself can throw an exception, so you will either have to surround this method with a further try-catch block, or declare its containing method as
throws Exception:
try (EngineUpgradeManager upgradeManager = new EngineUpgradeManager()) { // All configuration of the upgrade goes here // As does running the actual upgrade }
EngineUpgradeManager is contained in the main Engine library (
RusticiSoftware.Engine.dll
scplogic.jar), and located under the package/namespace
RusticiSoftware.ScormContentPlayer.Logic.Upgrade.
Configuration
Within the upgrade management block, you must now configure the upgrade. Setting a value for a given configuration setting looks something like
upgradeManager.Settings.SettingName = value;
upgradeManager.getSettings().setSettingName(value);
The value will be the type associated with the setting, so numeric settings will require numbers and text settings will require strings. A complete reference of settings will follow, but at a bare minimum you will need to specify at least two settings for establishing database connectivity:
- SourceDataPersistenceEngine: A string (
sqlserver,
mysql,
postgresql, or
oracle) corresponding to the DBMS flavor in use for the source database.
- SourceSystemDatabaseConnectionString: The connection string to use when connecting to the system schema on the source database. (Java customers take note: if you are running the context of an application container and can use JNDI resources, you can use the JNDI name of your connection information here. Pooling information will be inherited therefrom.)
The following are also used to supply connection strings in case you need more than one:
- SourceTenantDatabaseConnectionString: If your source database has separate source and tenant connection strings, then an alternate connection string should be supplied here.
- TargetDataPersistenceEngine: If using separate source and target databases, this represents the DBMS flavor for your target database.
- TargetSystemDatabaseConnectionString: If using separate source and target databases, this is the connection string to your target system database.
- TargetTenantDatabaseConnectionString: If using separate source and target databases and separate system and tenant databases, this is the connection string to your target tenant database.
Finally, you will need to specify some options related to tenancy if you are using the upgrade tool; these settings have the same name as they do regular upgrade configuration file, described above. If you have xAPI data, you will also need to specify xAPIFilesPath, as described in that section as well.
Running the Upgrade
Lastly, you will need to run the upgrade. Generally speaking, you will need to run one of two methods:
upgradeManager.Install(); to run the install tool, or
upgradeManager.FullUpgrade(); to run the full upgrade. The code is exactly the same in both .NET and Java.
Both methods will return an
UpgradeStatus object with
Warnings and
Errors properties (
getWarnings() and
getErrors() on Java) that can be used to report on the status of the upgrade. Upgrades with even a single error are considered to have failed.
Example Upgrades
Different System and Tenant Databases
using (EngineUpgradeManager upgradeManager = new EngineUpgradeManager()) { upgradeManager.Settings.SourceDataPersistenceEngine = "mysql"; upgradeManager.Settings.SourceSystemDatabaseConnectionString = "server=myserver.net;Uid=engine;pwd=secret;Database=rusticisystem;"; upgradeManager.Settings.SourceTenantDatabaseConnectionString = "server=myserver.net;Uid=engine;pwd=secret;Database=rusticitenant;";().setSourceTenantDatabaseConnectionString("jdbc/source-tenant"); UpgradeStatus status = upgradeManager.Install(); if (status.getHasErrors()) { // report on the failure } }
Single-Tenant, Copy Upgrade
using (EngineUpgradeManager upgradeManager = new EngineUpgradeManager()) { upgradeManager.Settings.SourceDataPersistenceEngine = "mysql"; upgradeManager.Settings.SourceSystemDatabaseConnectionString = "server=myserver.net;Uid=engine;pwd=secret;Database=source-system;"; upgradeManager.Settings.TargetSystemDatabaseConnectionString = "server=myserver.net;Uid=engine;pwd=secret;Database=target-system;"; upgradeManager.Settings.TargetTenant = "default";().setTargetSystemDatabaseConnectionString("jdbc/target-system"); upgradeManager.getSettings().setTargetTenant("default"); UpgradeStatus status = upgradeManager.Install(); if (status.getHasErrors()) { // report on the failure } }
Multi-Tenant, In-place Upgrade
using (EngineUpgradeManager upgradeManager = new EngineUpgradeManager()) { upgradeManager.Settings.SourceDataPersistenceEngine = "sqlserver"; upgradeManager.Settings.SourceSystemDatabaseConnectionString = "server=localhost\SQLEXPRESS;uid=sa;pwd=secret;database=RusticiEngine"; upgradeManager.Settings.XapiFilesPath = @"C:\\server\store\xapi\"; upgradeManager.Settings.PackageToTenantQuery = @" SELECT customer_id FROM SCHEMA_PREFIX.ScormPackage WHERE scorm_package_id = @scorm_package_id "; UpgradeStatus status = upgradeManager.FullUpgrade();().setXapiFilesPath("/path/to/xapi/files"); upgradeManager.getSettings().setPackageToTenantQuery( "SELECT" + "customer_id" + "FROM" + "SCHEMA_PREFIX.ScormPackage" + "WHERE" + "scorm_package_id = @scorm_package_id;"); UpgradeStatus status = upgradeManager.FullUpgrade(); if (status.getHasErrors()) { // report on the failure } }
Migrating To API Integration
Starting with Engine 2015.1, new customers are able to integrate Engine into their applications exclusively through Engine's API. This allowed them to loosely couple Engine with their application and forgo the need for tightly-coupled custom integration layers that older customers had to write. We now offer the ability to configure the upgrade tool to allow customers that are currently using these custom integration layers to migrate to an integration that exclusively uses the REST API. This process involves changing the database's schema by removing the columns that are defined in the implementations of
ExternalPackageId and
ExternalRegistrationId and adding the columns and tables that are required for Engine's API integration.
Please reach out to our support team before attempting to upgrade in this way. There are a number of mitigating factors that can potentially prohibit your upgrade from using this process, so let us know that you're interested in this process before starting. We can look at your integration layer and let you know if this path seems like an appropriate option for your situation.
To enable this migration in your upgrade, you have to provide values for the settings below. Please note that our SQL parameters and column names use underscore_case, while the properties in your
ExternalId classes follow PascalCase.
IntegrationToApiRegistrationIdQuery
The value of this setting should be a SQL query that returns a unique identifier that will be used when communicating with the API about a given registration. The columns of
ScormRegistration, including the column(s) that are defined based on the implementation of
ExternalRegistrationId, will be passed into the query as parameters and should be used by the query to return an identifier for that specific registration.
For example, if an integration layer's implementation of
ExternalRegistrationId has two properties called
int CourseId and
string UserName, then the SQL query that customer provides might be
SELECT CONCAT(@course_id, '-', @user_name) or
SELECT u.ApiRegistrationId
FROM ApplicationSchema.Users u
WHERE u.CourseId = @course_id
AND u.UserName = @user_name
IntegrationToApiCourseIdQuery
This is similar to the above setting, but it is used to uniquely identify courses instead of registrations. Like above, the columns of
ScormPackage are passed into this query as parameters, and this query should return an identifier that corresponds to the package identified by those columns. For integrations that only provide a single property on
ExternalPackageId, this query could be something as simple as
SELECT @course_id.
RegistrationToApiLearnerQuery
Given a row from ScormRegistration, including the columns defined by the implementation of
ExternalRegistrationId, this query should return a learner_id (which must be a unique varchar), learner_first_name, and learner_last_name for the learner associated with that registration. If an
ExternalRegistrationId that includes a property
string UserName to identify individual learners, this query could look like:
SELECT u.Id as learner_id, ui.FirstName AS learner_first_name, ui.LastName AS learner_last_name FROM ApplicationSchema.UserInfo ui JOIN ApplicationSchema.Users u ON ui.UserId = u.Id WHERE u.UserName = @user_name;
IncludeApiIntegration
This setting should be provided with a value of
true. This will tell the upgrade tool that is needs to include the columns needed for an API integration and to not copy the integration layer columns to the target database.
Once the upgrade tool has been run this way, you should remove the settings
LogicIntegrationAssemblyName and
LogicIntegrationClassName from your Engine config file, as you will not be able to use your integration layer with your upgraded database. You will also need to replace any logic in your integration layer with calls to Engine's API. You can replace overrides of
RollupRegistration and
RollupRegistrationOnExit by setting up an endpoint for Engine's postback system to post rollups to. You do not need to replace the logic provided by
GetLearnerInformation; the upgrade tool will use the query provided in
RegistrationToApiLearnerQuery to populate the learner information of previously created registrations. Refer to the API documentation for a full reference of the endpoints available for your application to make use of. | https://rustici-docs.s3.amazonaws.com/engine/2018.1.x/Upgrading/Advanced-Upgrade.html | 2021-02-25T02:13:32 | CC-MAIN-2021-10 | 1614178350706.6 | [] | rustici-docs.s3.amazonaws.com |
Authentication API Documentation
To use the IAGL Authorisation.
Authentication Service Registration
Each API endpoint may require one or more of these authentication types:
- No authentication; the access permission is uniquely controlled by the API key
- Member authentication
- Client authentication
Please consult the documentation of each endpoint to see what type of authentication to use.
You will be provided, for each environment (see BAEC Environments and Authentication Endpoints), with:
- A numeric client identifier (identical for both Member and Client authentication)
- A password for the client identifier
These credentials are to be kept secret.
In order to use of the BAEC Member Authentication Service you will be asked to provide for each environment (see BAEC Environments and Authentication Endpoints):
- One or more URI for the web service in your application to receive a callback from the BAEC Authentication Service with an authorisation code
Member and client authentication are described below.
Member Authentication
Step 1. Obtain an Authorisation Code
Your application direct the member to the member login form:
URI Parameters
Authorisation Code Endpoint
'GET https://{ENV-DOMAIN-NAME}/auth/login?client_id=aviosapipartners-{CLIENT-ID}&response_type=code&redirect_uri={URI-FOR-CALLBACK-WITH-CODE}'
Example:
GET
Response
The response is a 302 redirect to the specified URI.
https://{REDIRECT_URI}?code={CODE_FOR_TOKEN_REQUEST}
Main Error Cases
The error responses are also 302 redirect to the specified URI.
http://{REDIRECT_URI}?error={ERROR_CODE}
Most error cases are displayed to the user:
- Misspelled client id ‘aviosapipartners’ will result in an HTTP 400 error
- Bad username or password will result in a message displayed on the login page
- Unregistered redirect URI will result in a “Security Issue” page to be displayed
Step 2. Obtain a Member Token
Once you have received the user’s Authorisation Code (from Step 1. above) a request must be made for a Token using the Grant service.
Your service will then form a request to obtain a token, using the with:
- Grant type:
authorization_code
- Code: supplied in the authentication server redirect request
Request Headers
Example request
'POST https://{ENV-DOMAIN-NAME}/api/grant'
Request Elements
Example request body
{ "grant_type": "authorization_code", "code": "1eebd3387c26f94539e8bd288f68cbcb" }": 600,
Step 3. Refresh a Member Token
Your service can request a new token, using the refresh_token grant of the Grant service with:
- Grant type:
refresh_token
- Refresh token: supplied in the response for your initial token grant
Request Headers
Example request
' POST https://{ENV-DOMAIN-NAME}/api/grant'
Request Elements
Example request
{ "grant_type": "refresh_token", "refresh_token": "ad2b204bb782e873fda70ee8d0ec96a6" }
Example responses
{ "access_token": "5c977c8c48fe225845919a9494d53127", "refresh_token": "74edddd818b301101841ade2e483cd2f", "scope": "AUTHN-LEVEL-12", "ba_refresh_expires_in": 2592000,
Client authentication
Obtain a client token
Your service will obtain a member token using the Grant service with:
- Grant type:
client_credentials
Request Headers
Example request
'POST'
Request Elements
Example response
{ "access_token": "324d27b0dc5cdae6019c8c601ac5392b", "scope": "AUTHN-LEVEL-15", "token_type": "bearer", "expires_in": 3600
Grant Service
Request Headers
Example request
'POST https://{ENV-DOMAIN-NAME}/api/grant'
Request Elements
Response Elements": 36000,
A success will return the HTTP code 200 with the following JSON payload.
Main Error Cases
BAEC Environments and Authentication Endpoints
The BAEC Authentication Service is available in 2 environments. Their domain name and endpoints are as following: | https://docs.iagloyalty.com/docs/authentication?programme=baec | 2021-02-25T01:54:07 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.iagloyalty.com |
ScatterAreaSeries
The ScatterAreaSeries is visualized on the screen as a straight line connecting all data points and the area between the line and the axis is colored in an arbitrary way. By default, the colors of the line and the area are the same. As all scatter series, this one also requires the RadCartesianChart to define two LinearAxis.
Declaratively defined series
You can use the definition from Example 1 to display a ScatterAreaSeries.
Example 1: Declaring a ScatterAreaSeries in XAML
<telerik:RadCartesianChart AreaSeries> <telerik:ScatterAreaSeries.DataPoints> <telerik:ScatterDataPoint <telerik:ScatterDataPoint <telerik:ScatterDataPoint <telerik:ScatterDataPoint <telerik:ScatterDataPoint <telerik:ScatterDataPoint <telerik:ScatterDataPoint </telerik:ScatterAreaSeries.DataPoints> </telerik:ScatterAreaSeries> </telerik:RadCartesianChart.Series> </telerik:RadCartesianChart>
Figure 1: ScatterAreaSeries visual appearance
Properties
- YValueBinding: A property of type DataPointBinding that gets or sets the property path that determines the Y value of the data point.
- XValueBinding: A property of type DataPointBinding that gets or sets the property path that determines the X value of the data point.
- Fill: A property of type Brush that gets or sets the color of the ScatterAreaSeries area.
- DashArray: A property of type DoubleCollection that gets or sets the dash pattern applied to the stroke of the area.
- Stroke: A property of type Brush that gets or sets the outline stroke of the ScatterAreaSeries area. You can control the thickness of the line via the StrokeThickness property.
-.
Data Binding
You can use the YValueBinding and XValueBinding properties of the ScatterAreaSeries to bind the DataPoints’ properties to the properties from your view models.
Example 2: Defining the view model
public class PlotInfo { public double XValue { get; set; } public double YValue { get; set; } } //....... this.DataContext = new ObservableCollection<PlotInfo> { new PlotInfo() { XValue = 0, YValue = 2}, //.... };
Example 3: Specify a ScatterAreaSeries in XAML
<telerik:ScatterAreaSeries
See the Create Data-Bound Chart for more information on data binding in the RadChartView suite.
Styling the Series
You can see how to style scatter area series using different properties in the ScatterAreaSeries section of the Customizing CartesianChart Series help article.
Additionally, you can use the Palette property of the chart to change the colors of the ScatterAreaSeries on a global scale. You can find more information about this feature in the Palettes section in our help documentation. | https://docs.telerik.com/devtools/silverlight/controls/radchartview/series/cartesianchart-series/area-series/scatterareaseries | 2021-02-25T02:26:12 | CC-MAIN-2021-10 | 1614178350706.6 | [array(['images/radchartview-series-scatterlineareaseries.png',
'radchartview-series-scatterlineareaseries'], dtype=object)] | docs.telerik.com |
A PERIOD(DATE) or PERIOD(TIMESTAMP(n) [WITH TIME ZONE]) value can be cast as DATE using the CAST function. The source last value must be equal to the source beginning bound; otherwise, an error is reported.
If the source type is PERIOD(DATE), the result is the source beginning bound.
If the source type is PERIOD(TIMESTAMP(n) [WITH TIME ZONE]), the result is the date portion of the source beginning bound after adjusting to the current session time zone.
If the source type is PERIOD(TIME(n) [WITH TIME ZONE]), an error is reported. | https://docs.teradata.com/r/~_sY_PYVxZzTnqKq45UXkQ/OmC3TCSLEWrhZSUhcRhA5w | 2021-02-25T03:17:24 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.teradata.com |
The Pin Payments Gateway extension allows you to use Australian-based Pin Payments as a credit card processor without a bank merchant account. Using Pin Payment’s secure Pin.js payment form, whereby the credit card details are never received by your server, you can process payments in any currency supported by Pin.
Pin Payments supports the Subscriptions extension.
It also supports re-using cards. When a visitor pays, they are set up in Pin Payments as a customer and use the same card on a future order. This is a timesaver for returning customers.
Requirements ↑ Back to top
- Active Pin Payments account
- Store set to currency supported by Pin Payments
- SSL certificate
Installing Pin Payments
- Log in to your Pin Payments dashboard and click Account from the menu.
- Make note of your API Keys, both Publishable (used with Pin.js) and Secret (used for server-based calls).
- If you have completed the activation process, you will have both test and live API key sets. If you have not, you will need to follow this process again with your live API keys once you’ve been activated.
- Log into your WordPress Dashboard.
- Click on WooCommerce | Settings from the left hand menu, then the top tab “Payment Gateways”. You should see “Pin Payments” as an option in the list.
- You should see “Pin Payments” “Pin Payments” or “Credit card”.
- The “Enable Test mode” option allows you to test transactions with this payment gateway to ensure your setup is correct.
- Enter your “Publishable API Key” and “Secret API Key” (both test and live if you have them).
- Save changes in WordPress and you’re done.
Test mode ↑ Back to top
In test mode, you can use Pin Payments Test Cards to simulate different transaction statuses. To learn more, see: Pin Payment Test Cards.
Questions & Support ↑ Back to top
Have a question before you buy? Please fill out this pre-sales form.
Already purchased and need some assistance? Get in touch the developer via the Help Desk. | https://docs.woocommerce.com/document/pin-payments-payment-gateway/ | 2021-02-25T02:57:57 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.woocommerce.com |
BMC Helix Remedyforce 20.19.02 sandbox testing guidelines
BMC Helix Remedyforce 20.19.02 release has been pushed to sandboxes and is now available for customer and partner validation before BMC pushes it to the production organizations.
This document provides recommendations for testing related to new features, enhancements, and defects addressed in the BMC Helix Remedyforce 20.19.02 release. Note that these guidelines might not cover all customization or configuration specific to your organization. Hence, it is recommended that this document should be used as a reference for understanding the possible product areas that might have undergone changes. Partners and customers are expected to not limit their validations to the ones mentioned in this documentation. They should also execute their test suites to ensure all use cases important to business continue to work as expected.
For more information on the release details, refer BMC Helix Remedyforce 20.19.02 release notes. | https://docs.bmc.com/docs/BMCHelixRemedyforce/201902/en/bmc-helix-remedyforce-20-19-02-sandbox-testing-guidelines-884398023.html | 2021-02-25T01:26:16 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.bmc.com |
use. seeds. * * @return void */ public function run() { $this->call([ UsersTableSeeder::class, PostsTableSeeder::class, CommentsTableSeeder::class, ]); }
Running Seeders | https://docs.laravel-dojo.com/laravel/5.6/seeding | 2021-02-25T02:55:32 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.laravel-dojo.com |
7. API Development with Observability
In this section, we will select a sample request from the API Catalog, choose the execution flow configuration, and send the request to the service in the IDE.
Select request from API catalog
We have included some example requests for the movieinfo service to get you started. Pick a request to send to the movieinfo service from the API catalog.
- Click on the API catalog icon on the left nav bar. From the API catalog, you can access all stored API requests, whether they are coming from test suites (golden sets), or from requests captured from your CI tests, or from collections created by engineers.
- Select Golden from the Source dropdown, and then select the SampleCollection collection.
- From the list of services, select movieinfo. From the list of APIs, select minfo/listmovies. The stored requests for the API will be shown in the Requests table.
- Select one or more of the requests in the API table by clicking on the checkboxes, and then click the View button. Once you click the View button, each request selected will open a tab in the API editor.
- Each tab shows a trace of the stored request consisting of:
- The request to the API
- The response from the API
- All the egress requests from the service to upstream services
- The responses from the upstream services
Setup Environment to Run Request
Start service in IDE
The exact process will depend on your IDE.
Example: Eclipse IDE: Go the Servers tab, right-click on the Tomcat server, and clicks on Start.
Select target for request
Configure API Studio to send requests to service in IDE by selecting the localhost environment from the environment dropdown.
Select egress request destinations
- Select the DevCluster egress request configuration from the proxy settings dropdown.
- Make sure that all egress requests are configured to use the dev cluster and not use mocks. If any of the services are configured to be mocked, change the configuration by unchecking the Mock box for the service.
Send request to service in IDE
Click the Run button to send the request to the service in the IDE. After the request executes, the results will be populated in the right-hand pane in the tab.
How to read the results
The window is divided into two parts. The left-hand side is the trace for the stored request. The right-hand side is the trace for the test request.
- Full details of all the requests and responses in the trace for both API requests. Clicking through the requests in the trace provides complete observability of the request execution at the ingress and egress boundaries of the service for the API.
- A side-by-side as well as a diff view of the requests and responses in the trace, making it easy to spot changes.
- A trace of the ingress and egress requests sorted by the order of execution. Each trace in the API Studio includes all available information about the request to the API, egress requests and corresponding responses, and the response from API in the IDE, making it easy to understand the flow of the code execution for the request. The ability to diff any of requests or responses makes it easy to spot changes in API behavior quickly.
Congratulations! You have run your first request from the API Studio!
Next steps
Next, we will step through an example of how developers can leverage API Studio for auto-creating mocks during development. | https://docs.meshdynamics.io/article/cgt20kpktu-sending-your-first-api-request | 2021-02-25T01:54:32 | CC-MAIN-2021-10 | 1614178350706.6 | [array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604177167621/picture-21.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604177548556/picture-22.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604177867896/picture-23.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604178159095/picture-24.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604179175813/picture-25.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604179638968/picture-26.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604183190880/picture-27.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604182965727/picture-28.png',
None], dtype=object)
array(['https://files.helpdocs.io/ugai2oq7pm/articles/cgt20kpktu/1604185228611/picture-29.png',
None], dtype=object) ] | docs.meshdynamics.io |
).
Anarduino MiniWireless has on-board debug probe and IS READY for debugging. You don’t need to use/buy external debug probe. | https://docs.platformio.org/en/v5.0.4/boards/atmelavr/miniwireless.html | 2021-02-25T02:23:29 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.platformio.org |
Why code search is still needed for monoreposWhy code search is still needed for monorepos
Developers who work in monorepos sometimes think code search (such as Sourcegraph) isn’t useful for them:
“We use a monorepo, so we don’t need a code search tool. I can search the entire monorepo in my editor or using
ripgrep/
grep. My editor gives me go-to-definition and find-references across the entire monorepo.”
Many developers told us they initially felt this way, but they later came to love using a code search tool on their monorepo. We asked them to find out what arguments would have convinced them earlier.
Here are the best arguments for using a code search tool on a monorepo, from monorepo-using devs who were initially skeptical but changed their mind and now love code search:
- You can share links to a code search tool
- Using a separate code search tool helps (not hurts) your flow
- Google, Facebook, and Twitter have monorepos and use code search heavily
You can share links to a code search toolYou can share links to a code search tool
Scenario: You’re deep in flow, but then a team member asks you a question about how some other code works, or something like that.
- 🤬 Using your editor’s search, you can find the answer. But then how do you share that with your team member? Screenshot your editor? Type out “see lines 38-40 of
client/web/src/user/account/AccountForm.tsx…”?
- 😊 With a code search tool, you’d just copy the URL and paste it to the other person. They can visit the link and see what you mean instantly.
Another scenario: You’re coding and come across a bug that, upon a quick search in your editor, is present in dozens of files (such as a typo, function misuse, etc.).
- 🤬 You post an issue, but how do you link to all the instances of this bug? You can’t link to your editor’s search. You could describe it (“we need to fix everywhere where
updateAccountis used in the client code …”), but that’s imprecise and hard to understand.
- 😊 With a code search tool, you can just link to a search results page with all of the places that need fixing.
Using a separate code search tool helps (not hurts) your flowUsing a separate code search tool helps (not hurts) your flow
When you’re coding and need to search or navigate to answer some question, it’s really nice to stay in flow and be able to jump right back into writing code when you get the answer.
It might seem like staying in a single tool (your editor) means staying in flow, but that’s not true. Here are some examples to illustrate the point.
Scenario: You’re writing a call to a function and want to see usage examples or patterns from elsewhere in your codebase. Your editor’s find-references panel works for simple cases.
🤬 But if you want to jump to other call sites and poke around the code for each in your editor, you’ve opened a ton of new tabs and lost your editing flow.
😊 A separate code search tool would let you preserve your blinking cursor and editor state so you can jump right back in when you find the answer.
“Preserving my editing flow is the primary reason why I use browser-based code search tools for looking up things.”
😊 You can also keep the code search tool up in a separate window (or monitor) to easily refer back to the usage examples/patterns while coding. (Editors get weird with multiple windows.)
😊 If you need to filter a long list of matches or references by subdirectory, arguments, or something else, that’s usually cumbersome or impossible in your editor (but easy in code search).
Another scenario: You’re deep in flow on your branch, but then you get a bug report and need to triage it.
🤬 You need to stash your changes and check out the main branch, then search and navigate the code in your editor. Your dev server and test watcher get messed up, and you lose your editing flow. Your editor locks up as it starts reindexing/reanalyzing a different branch.
😊 A separate code search tool would let you quickly triage a bug on any branch without changing your local branch or affecting your dev setup.
😊 The code search tool can show much more helpful code context, such as Git blame information after each line, code coverage overlays, runtime info from Datadog/LightStep/Sentry/etc., static analysis and lint results from SonarQube, and more. You could configure some of these things to display in your editor, but that’s cumbersome and they’re noisy for the majority of the time when you’re writing code.
😊 A code search tool does all the hard work (indexing and analysis) on the server beforehand, so your local machine remains fast and responsive.
“The JetBrains IDEs have great search capabilities. However, indexing a large repo is slow and draining on even a powerful MacBook Pro, and that happens every time you switch to another branch.”
😊 If you identify a likely culprit (such as a problematic line of code) via code search, it’s easy to get a permalink to that line to add to the bug report.
Google, Facebook, and Twitter have monorepos and use code search heavilyGoogle, Facebook, and Twitter have monorepos and use code search heavily
These 3 companies are known for using monorepos internally. Their devs all heavily rely on code search when working in their monorepos:
- Google has a monorepo and reports that Google devs use code search 12 times per workday on average.
- Facebook has a monorepo and code search is heavily used by devs for everyday questions and to enable refactoring and analysis.
- Twitter has a monorepo and uses code search.
These monorepos are large, of course. If you have a small monorepo, then you should probably disregard this argument (but heed the others).
FeedbackFeedback
Do you have additional or better arguments? Do you disagree with any of these arguments? Use the Edit this page link or contact us (@srcgraph) to suggest improvements.
Disclaimer: Sourcegraph is a universal code search company, so of course we would say these things, right? But our intrinsic love for code search came first. It’s why we started/joined Sourcegraph and want to bring code search to every dev. | https://docs.sourcegraph.com/adopt/code_search_in_monorepos | 2021-02-25T02:38:45 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.sourcegraph.com |
Scope¶
Scope is a way of dynamically defining how often a connected client should receive synchronization information about a NetView, or whether or not they should even receive information at all.
In large game worlds where there are many clients and NetViews, sending all information to all clients at all times could be prohibitively expensive, both for bandwidth and processing power. To provide high efficiency, scoping must be utilized. Fortunately, MassiveNet provides this functionality right out of the box.
In a traditional multiplayer game, it is assumed that a connecting client needs to instantiate (or spawn) every network object in the game as well as receive updates for these objects at all times. The implication is that all network objects are in scope and you must explicitly override this default behavior to provide scoping functionality.
With MassiveNet, all objects are implicitly out of scope for a connection. For a connection to receive instantiation information and synchronization updates for a NetView, you must set that NetView as in-scope for the connection. When a game server is utilizing the ScopeManager component, this is handled automatically. The ScopeManager incrementally updates the scope of each NetView in relation to each connected client in order to determine which NetViews are in scope for the connection.
This isn’t the end of the importance of scoping, however. Just like how 3D models can have different levels of detail based on the camera’s distance from them, client connections can have different levels of scope for NetViews based on their distance from the client’s in-game representation. This is sometimes referred to as network LOD (Level of Detail) due to its parallels with traditional application of LOD for reducing complexity of 3D models.
MassiveNet’s ScopeManager is able to set three different levels of scope for in-scope NetViews. These three levels correspond to the distance between the client and the NetView’s game object. The first scope level means the connection will receive every synchronization message, the second, every other synchronization message, and the third, every fourth synchronization message. | https://massivedocs.readthedocs.io/en/0.2/scope.html | 2021-02-25T02:50:14 | CC-MAIN-2021-10 | 1614178350706.6 | [] | massivedocs.readthedocs.io |
Debug Magento permission errors
When using the Magento command line tool, you may experience some problems related to permissions. If you get a 500 error in your browser, readjust permissions running the following commands:
$ sudo find installdir/apps/magento/htdocs/ -type d -exec chmod 775 {} \; $ sudo find installdir/apps/magento/htdocs/ -type f -exec chmod 664 {} \; $ sudo chown -R SYSTEM_USER:APACHE_GROUP installdir/apps/magento/htdocs/
NOTE: SYSTEM_USER and APACHE_GROUP are placeholders. Replace them to set the permissions properly. If you installed the stack using admin privileges, the APACHE_GROUP placeholder can be substituted with daemon. | https://docs.bitnami.com/installer/apps/magento/troubleshooting/debug-errors/ | 2021-02-25T05:40:53 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.bitnami.com |
Sets the index buffer for the sub-mesh.
A sub-mesh represents a list of triangles (or indices with a different MeshTopology) that are rendered using a single Material. When the Mesh is used with a Renderer that has multiple materials, you should ensure that there is one sub-mesh per Material.
SetTriangles and triangles always set the Mesh to be composed of triangle faces. Use SetIndices to create a Mesh composed of lines or points.
The
baseVertex argument can be used to achieve meshes that are larger than 65535 vertices while using 16 bit index buffers,
as long as each sub-mesh fits within its own 65535 vertex area. For example, if the index buffer that is passed to SetIndices, MeshTopology enum, indexFormat.
Sets the index buffer of a sub-mesh, using a part of the input array.
This method behaves as if you called SetIndices with an array that is a slice of the whole array, starting at
indicesStart index and being of a given
indicesLength length. The resulting sub-mesh will have
indicesLength amount of vertex indices. | https://docs.unity3d.com/es/2020.1/ScriptReference/Mesh.SetIndices.html | 2021-02-25T06:00:17 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.unity3d.com |
The Composite Application Deployer is a collection of different artifacts bundled into a single deployable component. When deploying a C-App on any WSO2 product, it directly deploys all the relevant artifacts for the product in a programmatically manner. For more information see, Working with Composite Applications.
Powered by a free Atlassian Confluence Community License granted to WSO2, Inc.. Evaluate Confluence today. | https://docs.wso2.com/display/Carbon446/Features | 2021-02-25T05:47:09 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.wso2.com |
Indexes¶
Imagine you’re feeling nostalgic one day and want to look for a definition in a printed dictionary; you want to know whether ‘crapulent’ really is as funny as you think it is. What makes the dictionary so practical when it comes to finding words and phrases? Well, entries are listed from A to Z. In other words, they are ordered. All you need is the knowledge of the order of the letters in the alphabet. That’s all.
It would be very inefficient if you had to read the entire dictionary until you spotted the single item you’re interested in. In the worst case you could end up reading the entire 20 volumes of the Oxford English Dictionary, at which point you probably don’t care about the assumed funniness of crapulence any longer.
The same goes for databases. To fetch and retrieve rows in database tables it makes sense to have a simple and fast way to look them up. That is what an index does. It is similar to the index in old-fashioned books: the entries are listed in alphabetical order in the appendix, but the page number to which the entries refer does not generally have any structure. That is, the physical order (when an entry is mentioned in the book) is independent of the logical order (when an entry is listed in the index, from A to Z). In databases we typically do that with the aid of a doubly linked list on the index leaf nodes, so that each node refers to its predecessor and is successor; the leaf nodes are stored in a database block or page, which smallest available storage unit in the database. This data structure makes it easy to run through the list in either direction.
The dictionary we mentioned earlier is an example of an index-organized table (IOT) in Oracle parlance; Microsoft SQL Server calls these objects clustered indexes. The entire table, or dictionary in our example, is ordered alphabetically. As you can imagine, index-organized tables can be useful for read-only lookup tables. For data sets that change frequently, the time needed to insert, update, and/or delete entries can be significant, so that IOTs are generally not recommended.
Where an index leaf node is stored is completely independent of its logical position in the index. Consequently, a database requires a second data structure to sift quickly through the garbled blocks: a balanced search tree, which is also known as a B-tree. The branch nodes of a B-tree correspond to the largest values of the leaf nodes.
When a database does an index lookup, this is what happens:
- The B-tree is traversed from the root node to the branch (or header) nodes to find the pointers to relevant leaf node(s);
- The leaf node chain is followed to obtain pointers to relevant source rows;
- The data is retrieved from the table.
The first step, the tree traversal, has an upper bound, the index depth. All that is stored in the branch nodes are pointers to the leaf blocks and the index values stored in the leaf blocks. Databases can therefore support hundreds of leaves per branch node, making the B-tree traversal very efficient; the index depth is typically not larger than 5. Steps 2 and 3 may require the database to access many blocks. These steps can therefore take a considerable amount of time.
Oh, and in case you are still wondering: crapulent isn’t that funny at all. | https://oracle.readthedocs.io/en/latest/sql/indexes/index.html | 2021-02-25T04:41:54 | CC-MAIN-2021-10 | 1614178350717.8 | [] | oracle.readthedocs.io |
What.
How to create a trace filter
-<<. | https://docs.apigee.com/api-platform/debug/using-trace-tool-0?hl=zh-Cn | 2021-02-25T05:35:04 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['https://docs.apigee.com/api-platform/images/new-trace-7-17-14.png?hl=zh-Cn',
None], dtype=object)
array(['https://docs.apigee.com/api-platform/images/trace-filters-results.png?hl=zh-Cn',
'Under Transactions, four results show up that match two preset query parameters.'],
dtype=object)
array(['https://docs.apigee.com/api-platform/images/view-options.png?hl=zh-Cn',
None], dtype=object)
array(['https://docs.apigee.com/api-platform/images/view-options.png?hl=zh-Cn',
None], dtype=object)
array(['https://docs.apigee.com/api-platform/images/view-options.png?hl=zh-Cn',
None], dtype=object) ] | docs.apigee.com |
.
Create new tenant users on the Manage Users page.
Clickadd new user icon at the top of the user list.
User names can contain characters, numbers, dot (.), hyphen (-) and the underscore (_). User names can only be started with a character, number or underscore (_) as the first first character. (mary)/workflowsflows. A tenant admin with roles may be accidentally assigned a task actually intended for other non-admin users who have the same role, and the tenant admin could perform the task and thereby disrupt or compromise the workflow/workflows.
There are three places to set the maximum size of attachments in frevvo. tenant admin can edit the profile for any tenant user and perform such functions as resetting passwords. To reset a password:
Login to the tenant admin's account.
Users can also click "Forgot Password" on the login screen to reset their own password. Passwords for tenant admins and the superuser (admin@d) for in-house installations can also be changed using this procedure. Remember, the user must have an email address configured in frevvo for this method to work.
Logged in users can change their password using the Manage Personal Information link under My Account on the top right of the screen. Tenant and server admins can change their passwords by clicking on the icon from the Manage Users#EditAdminUsers screen.". frevvo spaces, you can add a link to this screen to the space menu. | https://docs.frevvo.com/d/pages/viewpage.action?pageId=26714699 | 2021-02-25T05:00:14 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.frevvo.com |
Deproject¶
Deproject is a CIAO Sherpa extension package to facilitate
deprojection of two-dimensional circular annular X-ray spectra to recover the
three-dimensional source properties. For typical thermal models this would
include the radial temperature and density profiles. This basic method
has been used extensively for X-ray cluster analysis and is the basis for the
XSPEC model projct. The
deproject module brings this
functionality to Sherpa as a Python module that is straightforward to use and
understand.
The module can also be used with the standalone Sherpa release, but as it requires support for XSPEC models - for the thermal plasma emission codes - the documentation will focus on using Sherpa with a CIAO environment.
Introduction
Examples | https://deproject.readthedocs.io/en/latest/ | 2021-02-25T05:32:46 | CC-MAIN-2021-10 | 1614178350717.8 | [] | deproject.readthedocs.io |
Installation¶
We recommend installing faucet with apt for first time users and provide a Installing faucet for the first time tutorial which walks you through all the required steps for setting up faucet and gauge for the first time.
Once installed, see Configuration for documentation on how to configure faucet. Also, see Vendor-specific Documentation for documentation on how to configure your switch.
More advanced methods of installing faucet are also available here:
Installation using APT¶
We maintain a apt repo for installing faucet and its dependencies on Debian-based Linux distributions.
Here is a list of packages we supply:
Installation on Debian/Raspbian 9+ and Ubuntu 16.04+¶
Then to install all components for a fully functioning system on a single machine:
sudo apt-get install faucet-all-in-one
or you can install the individual components:
sudo apt-get install faucet sudo apt-get install gauge
Installation with Docker¶
We provide official automated builds on Docker Hub so that you can easily run Faucet and it’s components in a self-contained environment without installing on the main host system.
The docker images support the following architectures:
amd64
386
arm/v6
arm/v7
arm64/v8
ppc64le
s390x
Installing docker¶
We recommend installing Docker Community Edition (CE) according to the official docker engine installation guide.
Configuring dockers¶
First, we need to create some configuration files on our host to mount inside the docker containers to configure faucet and gauge:
sudo mkdir -p /etc/faucet sudo vi /etc/faucet/faucet.yaml sudo vi /etc/faucet/gauge.yaml
See the Configuration section for configuration options.
Starting dockers¶
We use Docker tags to differentiate between versions of Faucet. The latest
tag will always point to the latest stable release of Faucet. All tagged
versions of Faucet in git are also available to use, for example using the
faucet/faucet:1.8.0 Docker will run the released version 1.8.0 of Faucet.
By default the Faucet and Gauge images are run as the faucet user under
UID 0, GID 0. If you need to change that it can be overridden at runtime with
the Docker flags:
-e LOCAL_USER_ID and
-e LOCAL_GROUP_ID.
To pull and run the latest version of Faucet:
mkdir -p /var/log/faucet/ docker pull faucet/faucet:latest docker run -d \ --name faucet \ --restart=always \ -v /etc/faucet/:/etc/faucet/ \ -v /var/log/faucet/:/var/log/faucet/ \ -p 6653:6653 \ -p 9302:9302 \ faucet/faucet
Port 6653 is used for OpenFlow, port 9302 is used for Prometheus - port 9302 may be omitted if you do not need Prometheus.
To pull and run the latest version of Gauge:
mkdir -p /var/log/faucet/gauge/ docker pull faucet/gauge:latest docker run -d \ --name gauge \ --restart=always \ -v /etc/faucet/:/etc/faucet/ \ -v /var/log/faucet/:/var/log/faucet/ \ -p 6654:6653 \ -p 9303:9303 \ faucet/gauge
Port 6654 is used for OpenFlow, port 9303 is used for Prometheus - port 9303 may be omitted if you do not need Prometheus.
Additional arguments¶
You may wish to run faucet under docker with additional arguments, for example: setting certificates for an encrypted control channel. This can be done by overriding the docker entrypoint like so:
docker run -d \ --name faucet \ --restart=always \ -v /etc/faucet/:/etc/faucet/ \ -v /etc/ryu/ssl/:/etc/ryu/ssl/ \ -v /var/log/faucet/:/var/log/faucet/ \ -p 6653:6653 \ -p 9302:9302 \ faucet/faucet \ faucet \ --ctl-privkey /etc/ryu/ssl/ctrlr.key \ --ctl-cert /etc/ryu/ssl/ctrlr.cert \ --ca-certs /etc/ryu/ssl/sw.cert
You can get a list of all additional arguments faucet supports by running:
docker run -it faucet/faucet faucet --help
Docker compose¶
This is an example docker-compose file that can be used to set up gauge to talk to Prometheus and InfluxDB with a Grafana instance for dashboards and visualisations.
It can be run with:
docker-compose pull docker-compose up
The time-series databases with the default settings will write to
/opt/prometheus/
/opt/influxdb/shared/data/db you can edit these locations
by modifying the
docker-compose.yaml file.
On OSX, some of the default shared paths are not accessible, so to overwrite
the location that volumes are written to on your host, export an environment
varible name
FAUCET_PREFIX and it will get prepended to the host paths.
For example:
export FAUCET_PREFIX=/opt/faucet
When all the docker containers are running we will need to configure Grafana to
talk to Prometheus and InfluxDB. First login to the Grafana web interface on
port 3000 (e.g) using the default credentials of
admin:admin.
Then add two data sources. Use the following settings for prometheus:
Name: Prometheus Type: Prometheus Url:
And the following settings for InfluxDB:
Name: InfluxDB Type: InfluxDB Url: With Credentials: true Database: faucet User: faucet Password: faucet
Check the connection using test connection.
From here you can add a new dashboard and a graphs for pulling data from the
data sources. Hover over the
+ button on the left sidebar in the web
interface and click
Import.
We will import the following dashboards, just download the following links and upload them through the grafana dashboard import screen:
Installation with Pip¶
You can install the latest pip package, or you can install directly from git via pip.
Installing faucet¶
First, install some python dependencies:
apt-get install python3-dev python3-pip pip3 install setuptools pip3 install wheel
Then install the latest stable release of faucet from pypi, via pip:
pip3 install faucet
Or, install the latest development code from git, via pip:
pip3 install git+
Starting faucet manually¶
Faucet includes a start up script for starting Faucet and Gauge easily from the command line.
To run Faucet manually:
faucet --verbose
To run Gauge manually:
gauge --verbose
There are a number of options that you can supply the start up script for changing various options such as OpenFlow port and setting up an encrypted control channel. You can find a list of the additional arguments by running:
faucet --help
Starting faucet With systemd¶
Systemd can be used to start Faucet and Gauge at boot automatically:
$EDITOR /etc/systemd/system/faucet.service $EDITOR /etc/systemd/system/gauge.service systemctl daemon-reload systemctl enable faucet.service systemctl enable gauge.service systemctl restart faucet systemctl restart gauge
/etc/systemd/system/faucet.service should contain:
[Unit] Description="Faucet OpenFlow switch controller" After=network-online.target Wants=network-online.target [Service] EnvironmentFile=/etc/default/faucet User=faucet Group=faucet ExecStart=/usr/local/bin/faucet --ryu-config-file=${FAUCET_RYU_CONF} --ryu-ofp-tcp-listen-port=${FAUCET_LISTEN_PORT} ExecReload=/bin/kill -HUP $MAINPID Restart=always [Install] WantedBy=multi-user.target
/etc/systemd/system/gauge.service should contain:
[Unit] Description="Gauge OpenFlow statistics controller" After=network-online.target Wants=network-online.target [Service] EnvironmentFile=/etc/default/gauge User=faucet Group=faucet ExecStart=/usr/local/bin/gauge --ryu-config-file=${GAUGE_RYU_CONF} --ryu-ofp-tcp-listen-port=${GAUGE_LISTEN_PORT} --ryu-wsapi-host=${WSAPI_LISTEN_HOST} --ryu-app=ryu.app.ofctl_rest Restart=always [Install] WantedBy=multi-user.target
Installing on Raspberry Pi¶
We provide a Raspberry Pi image running FAUCET which can be retrieved from the latest faucet release page on GitHub. Download the faucet_VERSION_raspbian-lite.zip file.
The image can then be copied onto an SD card following the same steps from the official Raspberry Pi installation guide.
Once you have booted up the Raspberry Pi and logged in using the default credentials (username: pi, password: raspberry) you can follow through the Installing faucet for the first time tutorial starting from Configure prometheus to properly configure each component.
Note
It is strongly recommended to use a Raspberry Pi 3 or better.
Installing with Virtual Machine image¶
We provide a VM image for running FAUCET for development and learning purposes. The VM comes pre-installed with FAUCET, GAUGE, prometheus and grafana.
Openstack’s diskimage-builder (DIB) is used to build the VM images in many formats (qcow2,tgz,squashfs,vhd,raw).
Downloading pre-built images¶
Pre-built images are available on github, see the latest faucet release page on GitHub and download the faucet-amd64-VERSION.qcow2 file.
Building the images¶
If you don’t want to use our pre-built images, you can build them yourself:
Install the latest disk-image-builder
Run build-faucet-vm.sh from the
images/vm/directory.
Security considerations¶
This VM is not secure by default, it includes no firewall and has a number of network services listening on all interfaces with weak passwords. It also includes a backdoor user (faucet) with weak credentials.
Services
The VM exposes a number of ports listening on all interfaces by default:
Default Credentials
Post-install steps¶
Grafana comes installed but unconfigured, you will need to login to the grafana
web interface at and configure a data source and some dashboards.
After logging in with the default credentials shown above, the first step is to add a prometheus:
You will need to supply your own faucet.yaml and gauge.yaml configuration in the VM. There are samples provided at /etc/faucet/faucet.yaml and /etc/faucet/gauge.yaml.
Finally you will need to point one of the supported OpenFlow vendors at the controller VM, port 6653 is the Faucet OpenFlow control channel and 6654 is the Gauge OpennFlow control channel for monitoring. | https://docs.faucet.nz/en/latest/installation.html | 2021-02-25T04:18:16 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['https://packagecloud.io/images/packagecloud-badge.png',
'Private NPM registry and Maven, RPM, DEB, PyPi and RubyGem Repository · packagecloud'],
dtype=object) ] | docs.faucet.nz |
Use geo-restore to recover a multitenant SaaS application from database backups
Azure SQL Database
This tutorial explores a full disaster recovery scenario for a multitenant SaaS application implemented with the database per tenant model. You use geo-restore to recover the catalog and tenant databases from automatically maintained geo-redundant backups into an alternate recovery region. After the outage is resolved, you use geo-replication to repatriate changed databases to their original region.
Geo-restore is the lowest-cost disaster recovery solution for Azure SQL Database. However, restoring from geo-redundant backups can result in data loss of up to one hour. It can take considerable time, depending on the size of each database.
Note
Recover applications with the lowest possible RPO and RTO by using geo-replication instead of geo-restore.
This tutorial explores both restore and repatriation workflows. You learn how to:
- Sync database and elastic pool configuration info into the tenant catalog.
- Set up a mirror image environment in a recovery region that includes application, servers, and pools.
- Recover catalog and tenant databases by using geo-restore.
- Use geo-replication to repatriate the tenant catalog and changed tenant databases after the outage is resolved.
- Update the catalog as each database is restored (or repatriated) to track the current location of the active copy of each tenant's database.
- Ensure that the application and tenant database are always co-located in the same Azure region to reduce latency.
Before you start this tutorial, complete the following prerequisites:
- Deploy the Wingtip Tickets SaaS database per tenant app. To deploy in less than five minutes, see Deploy and explore the Wingtip Tickets SaaS database per tenant application.
- Install Azure PowerShell. For details, see Getting started with Azure PowerShell.
Introduction to the geo-restore recovery pattern
Disaster recovery (DR) is an important consideration for many applications, whether for compliance reasons or business continuity. If there's a prolonged service outage, a well-prepared DR plan can minimize business disruption. A DR plan based on geo-restore must accomplish several goals:
- Reserve all needed capacity in the chosen recovery region as quickly as possible to ensure that it's available to restore tenant databases.
- Establish a mirror image recovery environment that reflects the original pool and database configuration.
- Allow cancellation of the restore process in mid-flight if the original region comes back online.
- Enable tenant provisioning quickly so new tenant onboarding can restart as soon as possible.
- Be optimized to restore tenants in priority order.
- Be optimized to get tenants online as soon as possible by doing steps in parallel where practical.
- Be resilient to failure, restartable, and idempotent.
- Repatriate databases to their original region with minimal impact to tenants when the outage is resolved.
Note
The application is recovered into the paired region of the region in which the application is deployed. For more information, see Azure paired regions.
This tutorial uses features of Azure SQL Database and the Azure platform to address these challenges:
- Azure Resource Manager templates, to reserve all needed capacity as quickly as possible. Azure Resource Manager templates are used to provision a mirror image of the original servers and elastic pools in the recovery region. A separate server and pool are also created for provisioning new tenants.
- Elastic Database Client Library (EDCL), to create and maintain a tenant database catalog. The extended catalog includes periodically refreshed pool and database configuration information.
- Shard management recovery features of the EDCL, to maintain database location entries in the catalog during recovery and repatriation.
- Geo-restore, to recover the catalog and tenant databases from automatically maintained geo-redundant backups.
- Asynchronous restore operations, sent in tenant-priority order, are queued for each pool by the system and processed in batches so the pool isn't overloaded. These operations can be canceled before or during execution if necessary.
- Geo-replication, to repatriate databases to the original region after the outage. There is no data loss and minimal impact on the tenant when you use geo-replication.
- SQL server DNS aliases, to allow the catalog sync process to connect to the active catalog regardless of its location.
Get the disaster recovery scripts
The DR scripts used in this tutorial are available in the Wingtip Tickets SaaS database per tenant GitHub repository. Check out the general guidance for steps to download and unblock the Wingtip Tickets management scripts.
Important
Like all the Wingtip Tickets management scripts, the DR scripts are sample quality and are not to be used in production.
Review the healthy state of the application
Before you start the recovery process, review the normal healthy state of the application.
In your web browser, open the Wingtip Tickets events hub (.<user>.trafficmanager.net, replace <user> with your deployment's user value).
Scroll to the bottom of the page and notice the catalog server name and location in the footer. The location is the region in which you deployed the app.
Tip
Hover the mouse over the location to enlarge the display.
Select the Contoso Concert Hall tenant and open its event page.
In the footer, notice the tenant's server name. The location is the same as the catalog server's location.
In the Azure portal, review and open the resource group in which you deployed the app.
Notice the resources and the region in which the app service components and SQL Database is deployed.
Sync the tenant configuration into the catalog
In this task, you start a process to sync the configuration of the servers, elastic pools, and databases into the tenant catalog. This information is used later to configure a mirror image environment in the recovery region.
Important
For simplicity, the sync process and other long-running recovery and repatriation processes are implemented in these samples as local PowerShell jobs or sessions that run under your client user login. The authentication tokens issued when you log in expire after several hours, and the jobs will then fail. In a production scenario, long-running processes should be implemented as reliable Azure services of some kind, running under a service principal. See Use Azure PowerShell to create a service principal with a certificate.
In the PowerShell ISE, open the ...\Learning Modules\UserConfig.psm1 file. Replace
<resourcegroup>and
<user>on lines 10 and 11 with the value used when you deployed the app. Save the file.
In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script.
In this tutorial, you run each of the scenarios in this PowerShell script, so keep this file open.
Set the following:
$DemoScenario = 1: Start a background job that syncs tenant server and pool configuration info into the catalog.
To run the sync script, select F5.
This information is used later to ensure that recovery creates a mirror image of the servers, pools, and databases in the recovery region.
Leave the PowerShell window running in the background and continue with the rest of this tutorial.
Note
The sync process connects to the catalog via a DNS alias. The alias is modified during restore and repatriation to point to the active catalog. The sync process keeps the catalog up to date with any database or pool configuration changes made in the recovery region. During repatriation, these changes are applied to the equivalent resources in the original region.
Geo-restore recovery process overview
The geo-restore recovery process deploys the application and restores databases from backups into the recovery region.
The recovery process does the following:
Disables the Azure Traffic Manager endpoint for the web app in the original region. Disabling the endpoint prevents users from connecting to the app in an invalid state should the original region come online during recovery.
Provisions a recovery catalog server in the recovery region, geo-restores the catalog database, and updates the activecatalog alias to point to the restored catalog server. Changing the catalog alias ensures that the catalog sync process always syncs to the active catalog.
Marks all existing tenants in the recovery catalog as offline to prevent access to tenant databases before they are restored.
Provisions an instance of the app in the recovery region and configures it to use the restored catalog in that region. To keep latency to a minimum, the sample app is designed to always connect to a tenant database in the same region.
Provisions a server and elastic pool in which new tenants are provisioned. Creating these resources ensures that provisioning new tenants doesn't interfere with the recovery of existing tenants.
Updates the new tenant alias to point to the server for new tenant databases in the recovery region. Changing this alias ensures that databases for any new tenants are provisioned in the recovery region.
Provisions servers and elastic pools in the recovery region for restoring tenant databases. These servers and pools are a mirror image of the configuration in the original region. Provisioning pools up front reserves the capacity needed to restore all the databases.
An outage in a region might place significant pressure on the resources available in the paired region. If you rely on geo-restore for DR, then reserving resources quickly is recommended. Consider geo-replication if it's critical that an application is recovered in a specific region.
Enables the Traffic Manager endpoint for the web app in the recovery region. Enabling this endpoint allows the application to provision new tenants. At this stage, existing tenants are still offline.
Submits batches of requests to restore databases in priority order.
Batches are organized so that databases are restored in parallel across all pools.
Restore requests are submitted asynchronously so they are submitted quickly and queued for execution in each pool.
Because restore requests are processed in parallel across all pools, it's better to distribute important tenants across many pools.
Monitors the service to determine when databases are restored. After a tenant database is restored, it's marked online in the catalog, and a rowversion sum for the tenant database is recorded.
Tenant databases can be accessed by the application as soon as they're marked online in the catalog.
A sum of rowversion values in the tenant database is stored in the catalog. This sum acts as a fingerprint that allows the repatriation process to determine if the database was updated in the recovery region.
Run the recovery script
Important
This tutorial restores databases from geo-redundant backups. Although these backups are typically available within 10 minutes, it can take up to an hour. The script pauses until they're available.
Imagine there's an outage in the region in which the application is deployed, and run the recovery script:
In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following value:
$DemoScenario = 2: Recover the app into a recovery region by restoring from geo-redundant backups.
To run the script, select F5.
The script opens in a new PowerShell window and then starts a set of PowerShell jobs that run in parallel. These jobs restore servers, pools, and databases to the recovery region.
The recovery region is the paired region associated with the Azure region in which you deployed the application. For more information, see Azure paired regions.
Monitor the status of the recovery process in the PowerShell window.
Note
To explore the code for the recovery jobs, review the PowerShell scripts in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\RecoveryJobs folder.
Review the application state during recovery
While the application endpoint is disabled in Traffic Manager, the application is unavailable. The catalog is restored, and all the tenants are marked offline. The application endpoint in the recovery region is then enabled, and the application is back online. Although the application is available, tenants appear offline in the events hub until their databases are restored. It's important to design your application to handle offline tenant databases.
After the catalog database has been recovered but before the tenants are back online, refresh the Wingtip Tickets events hub in your web browser.
In the footer, notice that the catalog server name now has a -recovery suffix and is located in the recovery region.
Notice that tenants that are not yet restored are marked as offline and are not selectable.
If you open a tenant's events page directly while the tenant is offline, the page displays a tenant offline notification. For example, if Contoso Concert Hall is offline, try to open.<user>.trafficmanager.net/contosoconcerthall.
Provision a new tenant in the recovery region
Even before tenant databases are restored, you can provision new tenants in the recovery region. New tenant databases provisioned in the recovery region are repatriated with the recovered databases later.
In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following property:
$DemoScenario = 3: Provision a new tenant in the recovery region.
To run the script, select F5.
The Hawthorn Hall events page opens in the browser when provisioning finishes.
Notice that the Hawthorn Hall database is located in the recovery region.
In the browser, refresh the Wingtip Tickets events hub page to see Hawthorn Hall included.
If you provisioned Hawthorn Hall without waiting for the other tenants to restore, other tenants might still be offline.
Review the recovered state of the application
When the recovery process finishes, the application and all tenants are fully functional in the recovery region.
After the display in the PowerShell console window indicates all the tenants are recovered, refresh the events hub.
The tenants all appear online, including the new tenant, Hawthorn Hall.
Click on Contoso Concert Hall and open its events page.
In the footer, notice that the database is located on the recovery server located in the recovery region.
In the Azure portal, open the list of resource groups.
Notice the resource group that you deployed, plus the recovery resource group, with the -recovery suffix. The recovery resource group contains all the resources created during the recovery process, plus new resources created during the outage.
Open the recovery resource group and notice the following items:
The recovery versions of the catalog and tenants1 servers, with the -recovery suffix. The restored catalog and tenant databases on these servers all have the names used in the original region.
The tenants2-dpt-<user>-recovery SQL server. This server is used for provisioning new tenants during the outage.
The app service named events-wingtip-dpt-<recoveryregion>-<user>, which is the recovery instance of the events app.
Open the tenants2-dpt-<user>-recovery SQL server. Notice that it contains the database hawthornhall and the elastic pool Pool1. The hawthornhall database is configured as an elastic database in the Pool1 elastic pool.
Change the tenant data
In this task, you update one of the restored tenant databases. The repatriation process copies restored databases that have been changed to the original region.
In your browser, find the events list for the Contoso Concert Hall, scroll through the events, and notice the last event, Seriously Strauss.
In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following value:
$DemoScenario = 4: Delete an event from a tenant in the recovery region.
To execute the script, select F5.
Refresh the Contoso Concert Hall events page (.<user>.trafficmanager.net/contosoconcerthall), and notice that the event Seriously Strauss is missing.
At this point in the tutorial, you have recovered the application, which is now running in the recovery region. You have provisioned a new tenant in the recovery region and modified data of one of the restored tenants.
Note
Other tutorials in the sample are not designed to run with the app in the recovery state. If you want to explore other tutorials, be sure to repatriate the application first.
Repatriation process overview
The repatriation process reverts the application and its databases to its original region after an outage is resolved.
The process:
Stops any ongoing restore activity and cancels any outstanding or in-flight database restore requests.
Reactivates in the original region tenant databases that have not been changed since the outage. These databases include those not recovered yet and those recovered but not changed afterward. The reactivated databases are exactly as last accessed by their tenants.
Provisions a mirror image of the new tenant's server and elastic pool in the original region. After this action is complete, the new tenant alias is updated to point to this server. Updating the alias causes new tenant onboarding to occur in the original region instead of the recovery region.
Uses geo-replication to move the catalog to the original region from the recovery region.
Updates pool configuration in the original region so it's consistent with changes that were made in the recovery region during the outage.
Creates the required servers and pools to host any new databases created during the outage.
Uses geo-replication to repatriate restored tenant databases that have been updated post-restore and all new tenant databases provisioned during the outage.
Cleans up resources created in the recovery region during the restore process.
To limit the number of tenant databases that need to be repatriated, steps 1 to 3 are done promptly.
Step 4 is only done if the catalog in the recovery region has been modified during the outage. The catalog is updated if new tenants are created or if any database or pool configuration is changed in the recovery region.
It's important that step 7 causes minimal disruption to tenants and no data is lost. To achieve this goal, the process uses geo-replication.
Before each database is geo-replicated, the corresponding database in the original region is deleted. The database in the recovery region is then geo-replicated, creating a secondary replica in the original region. After replication is complete, the tenant is marked offline in the catalog, which breaks any connections to the database in the recovery region. The database is then failed over, causing any pending transactions to process on the secondary so no data is lost.
On failover, the database roles are reversed. The secondary in the original region becomes the primary read-write database, and the database in the recovery region becomes a read-only secondary. The tenant entry in the catalog is updated to reference the database in the original region, and the tenant is marked online. At this point, repatriation of the database is complete.
Applications should be written with retry logic to ensure that they reconnect automatically when connections are broken. When they use the catalog to broker the reconnection, they connect to the repatriated database in the original region. Although the brief disconnect is often not noticed, you might choose to repatriate databases out of business hours.
After a database is repatriated, the secondary database in the recovery region can be deleted. The database in the original region then relies again on geo-restore for DR protection.
In step 8, resources in the recovery region, including the recovery servers and pools, are deleted.
Run the repatriation script
Let's imagine the outage is resolved and run the repatriation script.
If you've followed the tutorial, the script immediately reactivates Fabrikam Jazz Club and Dogwood Dojo in the original region because they're unchanged. It then repatriates the new tenant, Hawthorn Hall, and Contoso Concert Hall because it has been modified. The script also repatriates the catalog, which was updated when Hawthorn Hall was provisioned.
In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, verify that the Catalog Sync process is still running in its PowerShell instance. If necessary, restart it by setting:
$DemoScenario = 1: Start synchronizing tenant server, pool, and database configuration info into the catalog.
To run the script, select F5.
Then to start the repatriation process, set:
$DemoScenario = 5: Repatriate the app into its original region.
To run the recovery script in a new PowerShell window, select F5. Repatriation takes several minutes and can be monitored in the PowerShell window.
While the script is running, refresh the events hub page (.<user>.trafficmanager.net).
Notice that all the tenants are online and accessible throughout this process.
Select the Fabrikam Jazz Club to open it. If you didn't modify this tenant, notice from the footer that the server is already reverted to the original server.
Open or refresh the Contoso Concert Hall events page. Notice from the footer that, initially, the database is still on the -recovery server.
Refresh the Contoso Concert Hall events page when the repatriation process finishes, and notice that the database is now in your original region.
Refresh the events hub again and open Hawthorn Hall. Notice that its database is also located in the original region.
Clean up recovery region resources after repatriation
After repatriation is complete, it's safe to delete the resources in the recovery region.
Important
Delete these resources promptly to stop all billing for them.
The restore process creates all the recovery resources in a recovery resource group. The cleanup process deletes this resource group and removes all references to the resources from the catalog.
In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set:
$DemoScenario = 6: Delete obsolete resources from the recovery region.
To run the script, select F5.
After cleaning up the scripts, the application is back where it started. At this point, you can run the script again or try out other tutorials.
Designing the application to ensure that the app and the database are co-located
The application is designed to always connect from an instance in the same region as the tenant's database. This design reduces latency between the application and the database. This optimization assumes the app-to-database interaction is chattier than the user-to-app interaction.
Tenant databases might be spread across recovery and original regions for some time during repatriation. For each database, the app looks up the region in which the database is located by doing a DNS lookup on the tenant server name. The server name is an alias. The aliased server name contains the region name. If the application isn't in the same region as the database, it redirects to the instance in the same region as the server. Redirecting to the instance in the same region as the database minimizes latency between the app and the database.
Next steps
In this tutorial, you learned how to:
- Use the tenant catalog to hold periodically refreshed configuration information, which allows a mirror image recovery environment to be created in another region.
- Recover databases into the recovery region by using geo-restore.
- Update the tenant catalog to reflect restored tenant database locations.
- Use a DNS alias to enable an application to connect to the tenant catalog throughout without reconfiguration.
- Use geo-replication to repatriate recovered databases to their original region after an outage is resolved.
Try the Disaster recovery for a multitenant SaaS application using database geo-replication tutorial to learn how to use geo-replication to dramatically reduce the time needed to recover a large-scale multitenant application.
Additional resources
Additional tutorials that build upon the Wingtip SaaS application | https://docs.microsoft.com/en-us/azure/azure-sql/database/saas-dbpertenant-dr-geo-restore | 2021-02-25T05:47:48 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['media/saas-dbpertenant-dr-geo-restore/geo-restore-architecture.png',
'Diagram shows an original and recovery regions, both of which have an app, catalog, original or mirror images of servers and pools, automatic backups to storage, with the recovery region accepting geo-replication of backup and having server and pool for new tenants.'],
dtype=object)
array(['media/saas-dbpertenant-dr-geo-restore/geo-restore-repatriation.png',
'Geo-restore repatriation'], dtype=object) ] | docs.microsoft.com |
Last updated 05th May 2020
OVHcloud Web Hosting FAQ
What do I do if my website isn't working properly?
There are several possible reasons why your website may not be working properly. To identify the cause, start by logging in to the OVHcloud Control Panel, and check that all of your services have been successfully renewed and are active. Once you have checked this, verify that there are no ongoing maintenance tasks by visiting the Travaux webpage. If all your services are active and no maintenance tasks are affecting your website, you can carry out a more in-depth diagnostic by reading our "Diagnostic" guides.
Tips and tricks: If your website suddenly goes down after an action on your part, you can restore the content via the OVHcloud Control Panel. To do this, go to the
FTP - SSH tab on your hosting page, and click the
Restore a backup button on the right of your screen. For detailed instructions, you can use the following documentation: Restoring your Web Hosting plan’s storage space.
How do I configure my hosting space?
To configure your Web Hosting plan, log in to the OVHcloud Control Panel. In the
Hosting plans section, you can manage your SSL certificates, PHP versions, CDN, multisites, databases, and more.
Tips and tricks: To help you configure your Web Hosting plan, we recommend reading documentation from the “Getting started” section here.
How do I create or delete an element of my product/service (email account, database, etc.)?
To create or delete an element, log in to the OVHcloud Control Panel, then select the service concerned (
Database,
Modules). This way, you can scale your product as you see fit.
Tips and tricks: Via the OVHcloud Control Panel, you can create regular backups of your databases.
How do I manage my passwords?
To manage your password, start off by logging in to the OVHcloud Control Panel. If you have forgotten your username or password, click
Forgotten your username or password? in the login window. You will be sent an email with the reset procedure. Please also read our Setting and managing an account password guide.
Once you have logged in to the OVHcloud Control Panel, you can manage different types of access, such as:
- access to your FTP server and databases. To do this, access the
Hosting planssection of the OVHcloud Control Panel, and select the product/service concerned for email access.
- access to your email addresses. Go to the
How do I put my website online?
To put your website online via OVHcloud, you need to have a domain name corresponding to the address of your future website (e.g. ovh.com). You also need to have a web hosting space to set up your website. Please read the following guide: Publishing a website on your web hosting plan.
Tips and tricks: To help you build your website, OVHcloud offers 1-click modules like WordPress, PrestaShop, Joomla!, and Drupal. You can find them here. You can also use the following documentation: Setting up your website with 1-click modules.
How do I migrate my website and emails to OVHcloud?
To migrate your website and emails to OVHcloud, you need to have an OVHcloud Web Hosting plan. You can then connect to your Web Hosting plan’s FTP server, in order to transfer your website’s files on to it. If you currently have a database, it is also worth creating a dump of it.
To migrate emails, you will need to recreate your accounts at OVHcloud, then use our OMM migration tool (OVH Mail Migrator). You can find it here.
Once you have completed these steps, you will need to modify the DNS zone for your domain, so that within 24 hours it can point to our infrastructure. If you would like more information on this, please read the following guide: Migrating your website and emails to OVHcloud.
Tips and tricks: To transfer your files, you can use software programs like FileZilla and Cyberduck. You can use the following documentation for this: FileZilla user guide.
How do I host multiple websites on my Web Hosting plan?
As an experienced user, you can host several websites on the same Web Hosting plan. To do this, you need to attach another domain name or sub-domain to the hosting solution. The procedure for attaching and detaching domains is explained in this guide: Hosting multiple websites on your Web Hosting plan.
I have published my website but the OVHcloud "Congratulations" message is still being displayed
When you set up a Web Hosting plan, OVHcloud implements a preliminary index page while you move your website files on to your storage space. If you simply place your files in the "www" folder without deleting the content stored there by OVHcloud, you may encounter this issue.
To correct this, you will need to delete or rename the "index.html" file set up by OVHcloud on your hosting space. It may be useful to just rename it, so that you can re-enable it as needed to act as a temporary fallback page.
Other useful information: make sure you upload your website files to the "www" folder, so that they are visible.
How do I upgrade my web hosting plan?
If you would like to change your current solution to a higher one, go to your OVHcloud Control Panel, and go to the
Web Cloud section. Click
Hosting plans in the services bar on the left-hand side, then click on the solution concerned.
In the
General information tab, in the
Plan box, click on the
... button next to “Plan”, then click
Upgrade. Follow the instructions below to complete your order. A pro rata of the remaining time of your current solution will be added to the new. | https://docs.ovh.com/ca/en/hosting/web-hosting-faq/ | 2021-02-25T05:36:48 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.ovh.com |
>> Enterprise software..
Once the process completes successfully, the peers use the new set of configurations, now located in their local
$SPLUNK_HOME/etc/slave-apps.
Leave the files in
$SPLUNK_HOME/etc/slave-apps.
For more details on the internals of the distribution process, read the next. Validate the bundle.
Use the CLI to view the status of the bundle push
To see how the cluster bundle push.
For more information, see When to restart Splunk Enterprise after a configuration file change in the Admin Manual..! | https://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/Updatepeerconfigurations | 2021-02-25T05:25:15 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
You can add a cartridge group using the CLI tool, REST API or the PPaaS Console as shown below:
Adding a cartridge group via the CLI
Overview
Parameter definition
Example
Add a cartridge group that has been defined in the
<PRIVATE_PAAS_CARTRIDGES>/wso2am/1.9.0/samples/cartridge-groups/wso2am-190/wso2am-190-group1.json file.
add-cartridge-group -p <PRIVATE_PAAS_CARTRIDGES>/wso2am/1.9.0/samples/cartridge-groups/wso2am-190/wso2am-190-group1.json
Sample output
Cartridge Group added successfully: [cartridge-group] keymanager-gw-manager-gw-worker
Adding a cartridge group via the REST API
Overview
Example
Add a cartridge group that has been defined in the
<PRIVATE_PAAS_CARTRIDGES>/wso2am/1.9.0/samples/cartridge-groups/wso2am-190/wso2am-190-group1.json file.
cd <PRIVATE_PAAS_CARTRIDGES>/wso2am/1.9.0/samples curl -X POST -H "Content-Type: application/json" -d @'cartridge-groups/wso2am-190/wso2am-190-group1.json' -k -v -u admin:admin
Sample output
> POST /api/cartridgeGroups HTTP/1.1 > Host: localhost:9443 > Authorization: Basic YWRtaW46YWRtaW4= > User-Agent: curl/7.43.0 > Accept: */* > Content-Type: application/json > Content-Length: 233 > < HTTP/1.1 201 Created < Date: Tue, 06 Oct 2015 11:28:13 GMT < Location: < Content-Type: application/json < Transfer-Encoding: chunked < Server: WSO2 Carbon Server < {"status":"success","message":"Cartridge Group added successfully: [cartridge-group] keymanager-gw-manager-gw-worker"}
You will come across the following HTTP status codes while adding a cartridge group:
Adding a cartridge group via the PPaaS Console
Follow the instructions below to add a cartridge group:
- Click Configurations on the home page.
- Click Cartridge Groups.
- Click ADD CARTRIDGE GROUP.
- Define the cartridge group as follows:
- To define the cartridge group details. For information on all the properties that you can use in a cartridge group definition, see the Cartridge Group Resource Definition.
- Define the Group Name.
Define the Startup Order based on the cartridge group you wish to create.
Click +Startup Order.
Click +alias to add dependencies to the startup order.
Example:
cartridge.<CARTRIDGE_ALIAS>,group.<GROUP_ALIAS>
When creating an application, you need to use the same cartridge alias and group alias, which you use here when creating a cartridge group.
- Define the Scaling Dependents based on the application you wish to create.
- Click +Scaling Dependent.
- Click +alias to add dependencies to the startup order.
Example:
cartridge.<ALIAS>,group.<ALIAS>
- Select the Termination Behavior state.
- Click Update to update the group details.
Add dependancies to the cartridge group.
You can not reuse existing cartridge groups as a nested group in a new cartridge group. The cartridge group needs to be always created from scratch.
- To add a cartridge to the main cartridge group or a nested group.
- Click cartridges to expand the cartridge pane. This step can be skipped if the cartridge pane is already expanded.
- Optionally, if you wish to view the details of a cartridge, single click on the respective cartridge in the cartridge pane. The cartridge details appear in the information pane.
- Double click on the cartridge, which you wish to add to the group, for the selected cartridge to appear in the main pane.
- Drag the group connector, which appears underneath the group in the shape of a rectangle, and drop it on top of the cartridge connector, which appears in the shape of a half circle, on top of the cartridge.
- To add a nested group to a cartridge group.
- Click Group Templates to expand the group template pane. This step can be skipped if the group template pane is already expanded.
- Double click on the group node template for the nested group template to appear in the main pane.
- Drag the main group connector, which appears underneath the group in the shape of a rectangle, and drop it on top of the nested group connector, which appears in the shape of a half circle, on top of the main group.
If you wish to reposition the nodes in a more organized manner, click the reposition nodes link.
- To delete a cartridge or nested group,
- Right click on the respective object and click delete.
- Click Yes to confirm the removal of the object from the main panel.
- Click SAVE to add the cartridge group definition.
Sample cartridge group JSON
{ "name": "keymanager-gw-manager-gw-worker", "cartridges": [ "wso2am-190-gw-manager", "wso2am-190-gw-worker", "wso2am-190-km" ], "dependencies": { "terminationBehaviour": "terminate-none" } } | https://docs.wso2.com/display/PP411/Adding+a+Cartridge+Group | 2021-02-25T05:49:14 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.wso2.com |
Appendix: Collection of Operational Metrics
This solution includes an option to send anonymous
Similarity Value: The similarity value returned by the Amazon Rekognition
SearchFacesByImageAPI
Request Image Size: The size of the images used for registration and searching
Response Time and Request Latency: The time it takes for the solution to respond to the request
Note that AWS will own the data gathered via this survey. Data collection will be
subject to the AWS Privacy Policy
Mappings: Metrics: Send-Data: SendAnonymousData: "Yes"
to
Mappings: Metrics: Send-Data: SendAnonymousData: "No" | https://docs.aws.amazon.com/solutions/latest/auto-check-in-app/appendix.html | 2021-02-25T05:10:45 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.aws.amazon.com |
getchaintxstats JSON-RPC command
get
{ . }
Examples
> bitcoin-cli getchaintxstats > curl --user myusername --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getchaintxstats", "params": [2016] }' -H 'content-type: text/plain;'
Bitcoin Cash Node Daemon version v22.2.0 | https://docs.bitcoincashnode.org/doc/json-rpc/getchaintxstats/ | 2021-02-25T04:43:29 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.bitcoincashnode.org |
- Return to Server Management Client and, in the Replace FRU window, click Ok.
- When the Replace FRU confirmation dialog box appears, click Ok to close the FRU replacement session in Server Management Client.
- Add a comment to the summary alert:
- In the Server Management Client window, select the Summary Alerts tab. A list of summary alerts appears.
- Locate the appropriate summary alert.
- On the right side, click the three-bar icon.
- Select Add Comment. The Comments window appears.
- You must add entries in both the Author and the Comment box.
- Click Submit. The comment is attached to the summary alert.
- Clear the summary alert:
- In the Summary Alerts tab, select the alert containing the corresponding failed drive. The synopsis indicates a failed drive (similar to the following): A Node Drive Failure has been detected in SMP.x.x.x.x
- In the Summary Alerts tab, select .
- At the confirmation prompt, click OK. The status of the alert group changes from Active to Cleared (has checkmark). Clearing an alert group removes it from the list of problem scenarios being monitored.
- Close the Maintenance Window:
- In the Server Management Client window, click the Overview tab.
- In the Maintenance Window on the right side of the column, click the trash can icon.
- In the Maintenance Window , click Yes.
- In the Information window, click OK.
- Log off Server Management Client. | https://docs.teradata.com/r/FYVuUwX2OCGLSzLdFg6mPw/QcqkBbuH3IHsk0mkzazM4g | 2021-02-25T05:17:10 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.teradata.com |
You can migrate data to the JSON type following these steps:
- (Optional) If converting from XML to JSON, the XML data must be stored in CLOB or VARCHAR columns.
- Verify that the intended JSON data is well-formed and conforms to the rules of JSON formatting.
- Create new versions of the tables using the JSON type for columns that will hold the JSON data.
- Insert the JSON text (for example, the JSON constructor or string) into the JSON columns. See also, Loading JSON Data Using Load Utilities. | https://docs.teradata.com/r/HN9cf0JB0JlWCXaQm6KDvw/ozeXkklaBVKLDIa~Ks3Wsg | 2021-02-25T05:09:04 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.teradata.com |
Gets the bone weights for the Mesh.
Use this method instead of Mesh.boneWeights if you want to avoid allocating a new array with every access.
The bone weight at each index corresponds to the vertex with the same index if this mesh has bone weights defined. Otherwise the list will be empty.
This property uses BoneWeight structs, which represent exactly 4 bone weights per vertex. Within each BoneWeight struct in the array, the bone weights are in descending order and add up to 1. If a vertex is affected by fewer than 4 bones, each of the remaining bone weights must be 0.
To work with more or fewer bone weights per vertex, use the newer Mesh.GetAllBoneWeights and Mesh.SetBoneWeights APIs, which use BoneWeight1 structs.
See Also: Mesh.boneWeights, Mesh.GetAllBoneWeights, Mesh.SetBoneWeights, Mesh.GetBonesPerVertex, ModelImporter.maxBonesPerVertex, QualitySettings.skinWeights, SkinnedMeshRenderer.quality. | https://docs.unity3d.com/es/2020.1/ScriptReference/Mesh.GetBoneWeights.html | 2021-02-25T05:54:45 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.unity3d.com |
Fluent¶
Fluent is an ORM framework for Swift. It takes advantage of Swift's strong type system to provide an easy-to-use interface for your database. Using Fluent centers around the creation of model types which represent data structures in your database. These models are then used to perform create, read, update, and delete operations instead of writing raw queries.
Configuration¶
When creating a project using
vapor new, answer "yes" to including Fluent and choose which database driver you want to use. This will automatically add the dependencies to your new project as well as example configuration code.
Existing Project¶
If you have an existing project that you want to add Fluent to, you will need to add two dependencies to your package:
- vapor/[email protected]
- One (or more) Fluent driver(s) of your choice
.package(url: "", from: "4.0.0"), .package(url: "-<db>-driver.git", from: <version>),
.target(name: "App", dependencies: [ .product(name: "Fluent", package: "fluent"), .product(name: "Fluent<db>Driver", package: "fluent-<db>-driver"), .product(name: "Vapor", package: "vapor"), ]),
Once the packages are added as dependencies, you can configure your databases using
app.databases in
configure.swift.
import Fluent import Fluent<db>Driver app.databases.use(<db config>, as: <identifier>)
Each of the Fluent drivers below has more specific instructions for configuration.
Drivers¶
Fluent currently has four officially supported drivers. You can search GitHub for the tag
fluent-driver for a full list of official and third-party Fluent database drivers.
PostgreSQL¶
PostgreSQL is an open source, standards compliant SQL database. It is easily configurable on most cloud hosting providers. This is Fluent's recommended database driver.
To use PostgreSQL, add the following dependencies to your package.
.package(url: "", from: "2.0.0")
.product(name: "FluentPostgresDriver", package: "fluent-postgres-driver")
Once the dependencies are added, configure the database's credentials with Fluent using
app.databases.use in
configure.swift.
import Fluent import FluentPostgresDriver app.databases.use(.postgres(hostname: "localhost", username: "vapor", password: "vapor", database: "vapor"), as: .psql)
You can also parse the credentials from a database connection string.
try app.databases.use(.postgres(url: "<connection string>"), as: .psql)
SQLite¶
SQLite is an open source, embedded SQL database. Its simplistic nature makes it a great candiate for prototyping and testing.
To use SQLite, add the following dependencies to your package.
.package(url: "", from: "4.0.0")
.product(name: "FluentSQLiteDriver", package: "fluent-sqlite-driver")
Once the dependencies are added, configure the database with Fluent using
app.databases.use in
configure.swift.
import Fluent import FluentSQLiteDriver app.databases.use(.sqlite(.file("db.sqlite")), as: .sqlite)
You can also configure SQLite to store the database ephemerally in memory.
app.databases.use(.sqlite(.memory), as: .sqlite)
If you use an in-memory database, make sure to set Fluent to migrate automatically using
--auto-migrate or run
app.autoMigrate() after adding migrations.
app.migrations.add(CreateTodo()) try app.autoMigrate().wait()
Tip
The SQLite configuration automatically enables foreign key constraints on all created connections, but does not alter foreign key configurations in the database itself. Deleting records in a database directly, might violate foreign key constraints and triggers.
MySQL¶
MySQL is a popular open source SQL database. It is available on many cloud hosting providers. This driver also supports MariaDB.
To use MySQL, add the following dependencies to your package.
.package(url: "", from: "4.0.0-beta")
.product(name: "FluentMySQLDriver", package: "fluent-mysql-driver")
Once the dependencies are added, configure the database's credentials with Fluent using
app.databases.use in
configure.swift.
import Fluent import FluentMySQLDriver app.databases.use(.mysql(hostname: "localhost", username: "vapor", password: "vapor", database: "vapor"), as: .mysql)
You can also parse the credentials from a database connection string.
try app.databases.use(.mysql(url: "<connection string>"), as: .mysql)
To configure a local connection without SSL certificate involved, you should disable certificate verification. You might need to do this for example if connecting to a MySQL 8 database in Docker.
app.databases.use(.mysql( hostname: "localhost", username: "vapor", password: "vapor", database: "vapor", tlsConfiguration: .forClient(certificateVerification: .none) ), as: .mysql)
Warning
Do not disable certificate verification in production. You should provide a certificate to the
TLSConfiguration to verify against.
MongoDB¶
MongoDB is a popular schemaless NoSQL database designed for programmers. The driver supports all cloud hosting providers and self-hosted installations from version 3.4 and up.
Note
This driver is powered by a community created and maintained MongoDB client called MongoKitten. For the official MongoDB client, see mongo-swift-driver.
To use MongoDB, add the following dependencies to your package.
.package(url: "", from: "1.0.0"),
.product(name: "FluentMongoDriver", package: "fluent-mongo-driver")
Once the dependencies are added, configure the database's credentials with Fluent using
app.databases.use in
configure.swift.
To connect, pass a connection string in the standard MongoDB connection URI format.
import Fluent import FluentMongoDriver try app.databases.use(.mongo(connectionString: "<connection string>"), as: .mongo)
Models¶
Models represent fixed data structures in your database, like tables or collections. Models have one or more fields that store codable values. All models also have a unique identifier. Property wrappers are used to denote identifiers and fields as well as more complex mappings mentioned later. Take a look at the following model which represents a galaxy.
final class Galaxy: Model { // Name of the table or collection. static let schema = "galaxies" // Unique identifier for this Galaxy. @ID(key: .id) var id: UUID? // The Galaxy's name. @Field(key: "name") var name: String // Creates a new, empty Galaxy. init() { } // Creates a new Galaxy with all properties set. init(id: UUID? = nil, name: String) { self.id = id self.name = name } }
To create a new model, create a new class conforming to
Model.
Tip
It's recommended to mark model classes
final to improve performance and simplify conformance requirements.
The
Model protocol's first requirement is the static string
schema.
static let schema = "galaxies"
This property tells Fluent which table or collection the model corresponds to. This can be a table that already exists in the database or one that you will create with a migration. The schema is usually
snake_case and plural.
Identifier¶
The next requirement is an identifier field named
id.
@ID(key: .id) var id: UUID?
This field must use the
@ID property wrapper. Fluent recommends using
UUID and the special
.id field key since this is compatible with all of Fluent's drivers.
If you want to use a custom ID key or type, use the
@ID(custom:) overload.
Fields¶
After the identifier is added, you can add however many fields you'd like to store additional information. In this example, the only additional field is the galaxy's name.
@Field(key: "name") var name: String
For simple fields, the
@Field property wrapper is used. Like
@ID, the
key parameter specifies the field's name in the database. This is especially useful for cases where database field naming convention may be different than in Swift, e.g., using
snake_case instead of
camelCase.
Next, all models require an empty init. This allows Fluent to create new instances of the model.
init() { }
Finally, you can add a convenience init for your model that sets all of its properties.
init(id: UUID? = nil, name: String) { self.id = id self.name = name }
Using convenience inits is especially helpful if you add new properties to your model as you can get compile-time errors if the init method changes.
Migrations¶
If your database uses pre-defined schemas, like SQL databases, you will need a migration to prepare the database for your model. Migrations are also useful for seeding databases with data. To create a migration, define a new type conforming to the
Migration protocol. Take a look at the following migration for the previously defined
Galaxy model.
struct CreateGalaxy: Migration { // Prepares the database for storing Galaxy models. func prepare(on database: Database) -> EventLoopFuture<Void> { database.schema("galaxies") .id() .field("name", .string) .create() } // Optionally reverts the changes made in the prepare method. func revert(on database: Database) -> EventLoopFuture<Void> { database.schema("galaxies").delete() } }
The
prepare method is used for preparing the database to store
Galaxy models.
Schema¶
In this method,
database.schema(_:) is used to create a new
SchemaBuilder. One or more
fields are then added to the builder before calling
create() to create the schema.
Each field added to the builder has a name, type, and optional constraints.
field(<name>, <type>, <optional constraints>)
There is a convenience
id() method for adding
@ID properties using Fluent's recommended defaults.
Reverting the migration undoes any changes made in the prepare method. In this case, that means deleting the Galaxy's schema.
Once the migration is defined, you must tell Fluent about it by adding it to
app.migrations in
configure.swift.
app.migrations.add(CreateGalaxy())
Migrate¶
To run migrations, call
vapor run migrate from the command line or add
migrate as an argument to Xcode's Run scheme.
$ vapor run migrate Migrate Command: Prepare The following migration(s) will be prepared: + CreateGalaxy on default Would you like to continue? y/n> y Migration successful
Querying¶
Now that you've successfully created a model and migrated your database, you're ready to make your first query.
All¶
Take a look at the following route which will return an array of all the galaxies in the database.
app.get("galaxies") { req in Galaxy.query(on: req.db).all() }
In order to return a Galaxy directly in a route closure, add conformance to
Content.
final class Galaxy: Model, Content { ... }
Galaxy.query is used to create a new query builder for the model.
req.db is a reference to the default database for your application. Finally,
all() returns all of the models stored in the database.
If you compile and run the project and request
GET /galaxies, you should see an empty array returned. Let's add a route for creating a new galaxy.
Create¶
Following RESTful convention, use the
POST /galaxies endpoint for creating a new galaxy. Since models are codable, you can decode a galaxy directly from the request body.
app.post("galaxies") { req -> EventLoopFuture<Galaxy> in let galaxy = try req.content.decode(Galaxy.self) return galaxy.create(on: req.db) .map { galaxy } }
Seealso
See Content → Overview for more information about decoding request bodies.
Once you have an instance of the model, calling
create(on:) saves the model to the database. This returns an
EventLoopFuture<Void> which signals that the save has completed. Once the save completes, return the newly created model using
map.
Build and run the project and send the following request.
POST /galaxies HTTP/1.1 content-length: 21 content-type: application/json { "name": "Milky Way" }
You should get the created model back with an identifier as the response.
{ "id": ..., "name": "Milky Way" }
Now, if you query
GET /galaxies again, you should see the newly created galaxy returned in the array.
Relations¶
What are galaxies without stars! Let's take a quick look at Fluent's powerful relational features by adding a one-to-many relation between
Galaxy and a new
Star model.
final class Star: Model, Content { // Name of the table or collection. static let schema = "stars" // Unique identifier for this Star. @ID(key: .id) var id: UUID? // The Star's name. @Field(key: "name") var name: String // Reference to the Galaxy this Star is in. @Parent(key: "galaxy_id") var galaxy: Galaxy // Creates a new, empty Star. init() { } // Creates a new Star with all properties set. init(id: UUID? = nil, name: String, galaxyID: UUID) { self.id = id self.name = name self.$galaxy.id = galaxyID } }
Parent¶
The new
Star model is very similar to
Galaxy except for a new field type:
@Parent.
@Parent(key: "galaxy_id") var galaxy: Galaxy
The parent property is a field that stores another model's identifier. The model holding the reference is called the "child" and the referenced model is called the "parent". This type of relation is also known as "one-to-many". The
key parameter to the property specifies the field name that should be used to store the parent's key in the database.
In the init method, the parent identifier is set using
$galaxy.
self.$galaxy.id = galaxyID
By prefixing the parent property's name with
$, you access the underlying property wrapper. This is required for getting access to the internal
@Field that stores the actual identifier value.
Seealso
Check out the Swift Evolution proposal for property wrappers for more information: [SE-0258] Property Wrappers
Next, create a migration to prepare the database for handling
Star.
struct CreateStar: Migration { // Prepares the database for storing Star models. func prepare(on database: Database) -> EventLoopFuture<Void> { database.schema("stars") .id() .field("name", .string) .field("galaxy_id", .uuid, .references("galaxies", "id")) .create() } // Optionally reverts the changes made in the prepare method. func revert(on database: Database) -> EventLoopFuture<Void> { database.schema("stars").delete() } }
This is mostly the same as galaxy's migration except for the additional field to store the parent galaxy's identifier.
field("galaxy_id", .uuid, .references("galaxies", "id"))
This field specifies an optional constraint telling the database that the field's value references the field "id" in the "galaxies" schema. This is also known as a foreign key and helps ensure data integrity.
Once the migration is created, add it to
app.migrations after the
CreateGalaxy migration.
app.migrations.add(CreateGalaxy()) app.migrations.add(CreateStar())
Since migrations run in order, and
CreateStar references the galaxies schema, ordering is important. Finally, run the migrations to prepare the database.
Add a route for creating new stars.
app.post("stars") { req -> EventLoopFuture<Star> in let star = try req.content.decode(Star.self) return star.create(on: req.db) .map { star } }
Create a new star referencing the previously created galaxy using the following HTTP request.
POST /stars HTTP/1.1 content-length: 36 content-type: application/json { "name": "Sun", "galaxy": { "id": ... } }
You should see the newly created star returned with a unique identifier.
{ "id": ..., "name": "Sun", "galaxy": { "id": ... } }
Children¶
Now let's take a look at how you can utilize Fluent's eager-loading feature to automatically return a galaxy's stars in the
GET /galaxies route. Add the following property to the
Galaxy model.
// All the Stars in this Galaxy. @Children(for: \.$galaxy) var stars: [Star]
The
@Children property wrapper is the inverse of
@Parent. It takes a key-path to the child's
@Parent field as the
for argument. Its value is an array of children since zero or more child models may exist. No changes to the galaxy's migration are needed since all the information needed for this relation is stored on
Star.
Eager Load¶
Now that the relation is complete, you can use the
with method on the query builder to automatically fetch and serialize the galaxy-star relation.
app.get("galaxies") { req in Galaxy.query(on: req.db).with(\.$stars).all() }
A key-path to the
@Children relation is passed to
with to tell Fluent to automatically load this relation in all of the resulting models. Build and run and send another request to
GET /galaxies. You should now see the stars automatically included in the response.
[ { "id": ..., "name": "Milky Way", "stars": [ { "id": ..., "name": "Sun", "galaxy": { "id": ... } } ] } ]
Next steps¶
Congratulations on creating your first models and migrations and performing basic create and read operations. For more in-depth information on all of these features, check out their respective sections in the Fluent guide. | https://docs.vapor.codes/4.0/fluent/overview/ | 2021-02-25T04:23:10 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.vapor.codes |
complementary packages.
Falcon generally tries to minimize the number of objects that it instantiates. It does this for two reasons: first, to avoid the expense of creating the object, and second to reduce memory usage by reducing the total number of objects required under highly concurrent workloads. Therefore, when adding a route, Falcon requires an instance of your resource class, rather than the class type. That same instance will be used to serve all requests coming in on that route.:
Tip
Falcon will re-raise errors that do not inherit from
HTTPError unless you have registered a custom error handler for that type (see also: falcon.API). flexibility,.
The Falcon framework is, itself, thread-safe. For example, new
Request and
Response objects are created for each incoming HTTP request. However, a single instance of each resource class attached to a route is shared among all requests. Middleware objects and other types of hooks, such as custom error handlers, are likewise shared. Therefore, as long as you implement these classes and callables in a thread-safe manner, and ensure that any third-party libraries used by your app are also thread-safe, your WSGI app as a whole will be thread-safe.
That being said, IO-bound Falcon APIs are usually scaled via multiple processes and green threads (courtesy of the gevent library or similar) which aren’t truly running concurrently, so there may be some edge cases where Falcon is not thread-safe that we aren’t aware of. If you run into any issues, please let us know. the battle-tested gevent library via Gunicorn or uWSGI to scale IO-bound services. meinheld has also been used successfully by the community to power high-throughput, low-latency services. Note that if you use Gunicorn, you can combine gevent and PyPy to achieve an impressive level of performance. (Unfortunately, uWSGI does not yet support using gevent and PyPy together.) functionality it provides at that level..
It is common to carve out a portion of an app and reimplement it in Falcon to boost performance where it is most needed.
If you have access to your load balancer or reverse proxy configuration, we recommend setting up path or subdomain-based rules to split requests between your original implementation and the parts that have been migrated to Falcon (e.g., by adding an additional
location directive to your NGINX config).
If the above approach isn’t an option for your deployment, you can implement a simple WSGI wrapper that does the same thing:
def application(environ, start_response): try: # NOTE(kgriffs): Prefer the host header; the web server # isn't supposed to mess with it, so it should be what # the client actually sent. host = environ['HTTP_HOST'] except KeyError: # NOTE(kgriffs): According to PEP-3333, this header # will always be present. host = environ['SERVER_NAME'] if host.startswith('api.'): return falcon_app(environ, start_response) elif: return webapp2_app(environ, start_response)
See also PEP 3333 for a complete list of the variables that are provided via
environ.
Suppose you have the following routes:
# Resource Collection GET /resources{?marker, limit} POST (see also this section of the tutorial.)
Alternatively, you can use suffixed responders to map both routes to the same resource class:
class MyResource(object): def on_get(self, req, resp, id): pass def on_patch(self, req, resp, id): pass def on_delete(self, req, resp, id): pass def on_get_collection(self, req, resp): pass def on_post_collection(self, req, resp): pass # ... resource = MyResource() api.add_route('/resources/{id}', resource) api.add_route('/resources', resource, suffix='collection')
Let’s say we have the following URL schema:
GET /game/ping GET /game/{game_id} POST /game/{game_id} GET /game/{game_id}/state POST /game/{game_id}/state
We can break this down into three resources:
Ping: GET /game/ping Game: GET /game/{game_id} POST /game/{game_id} GameState: GET /game/{game_id}/state POST /game/{game_id}/state
GameState may be thought of as a sub-resource of Game. It is a distinct logical entity encapsulated within a more general Game concept.
In Falcon, these resources would be implemented with standard classes:
class Ping(object): def on_get(self, req, resp): resp.body = '{"message": "pong"}' class Game(object): def __init__(self, dao): self._dao = dao def on_get(self, req, resp, game_id): pass def on_post(self, req, resp, game_id): pass class GameState(object): def __init__(self, dao): self._dao = dao def on_get(self, req, resp, game_id): pass def on_post(self, req, resp, game_id): pass api = falcon.API() # Game and GameState are closely related, and so it # probably makes sense for them to share an object # in the Data Access Layer. This could just as # easily use a DB object or ORM layer. # # Note how the resources classes provide a layer # of abstraction or indirection which makes your # app more flexible since the data layer can # evolve somewhat independently from the presentation # layer. game_dao = myapp.DAL.Game(myconfig) api.add_route('/game/ping', Ping()) api.add_route('/game/{game_id}', Game(game_dao)) api.add_route('/game/{game_id}/state', GameState(game_dao))
Alternatively, a single resource class could implement suffixed responders in order to handle all three routes:
class Game(object): def __init__(self, dao): self._dao = dao def on_get(self, req, resp, game_id): pass def on_post(self, req, resp, game_id): pass def on_get_state(self, req, resp, game_id): pass def on_post_state(self, req, resp, game_id): pass def on_get_ping(self, req, resp): resp.data = b'{"message": "pong"}' # ... api = falcon.API() game = Game(myapp.DAL.Game(myconfig)) api.add_route('/game/{game_id}', game) api.add_route('/game/{game_id}/state', game, suffix='state') api.add_route('/game/ping', game, suffix='ping').
You can inject extra responder kwargs from a hook by adding them to the params dict passed into the hook. You can also set custom attributes on the
req.context object, as a way of passing contextual information around:
def authorize(req, resp, resource, params): # Check authentication/authorization # ... req.context.role = 'root' req.context.scopes = ('storage', 'things') req.context.uid = 0 # ... @falcon.before(authorize) def on_post(self, req, resp): pass..
This behavior is an unfortunate artifact of the request body mechanics not being fully defined by the WSGI spec (PEP-3333). This is discussed in the reference documentation for
stream, and a workaround is provided in the form of
bounded_stream.
If your app sets
strip_url_path_trailing_slash to
True, Falcon will normalize incoming URI paths to simplify later processing and improve the predictability of application logic. This can be helpful when implementing a REST API schema that does not interpret a trailing slash character as referring to the name of an implicit sub-resource, as traditionally used by websites to reference index pages.
For example, with this option enabled, adding a route for
'/foo/bar' implicitly adds a route for
'/foo/bar/'. In other words, requests coming in for either path will be sent to the same resource.
Note
Starting with version 2.0, the default for the
strip_url_path_trailing_slash request option changed from
True to
False.
+).
By default, Falcon does not consume request bodies. However, setting the
auto_parse_form_urlencoded to
True on an instance of
falcon.API()..req_options.
The default request/response context type has been changed from dict to a bare class in Falcon 2.0. Instead of setting dictionary items, you can now simply set attributes on the object:
# Before Falcon 2.0 req.context['cache_backend'] = MyUltraFastCache.connect() # Falcon 2.0 req.context.cache_backend = MyUltraFastCache.connect()
The new default context type emulates a dict-like mapping interface in a way that context attributes are linked to dict items, i.e. setting an object attribute also sets the corresponding dict item, and vice versa. As a result, existing code will largely work unmodified with Falcon 2.0. Nevertheless, it is recommended to migrate to the new interface as outlined above since the dict-like mapping interface may be removed from the context type in a future release.
Warning
If you need to mix-and-match both approaches under migration, beware that setting attributes such as items or values would obviously shadow the corresponding mapping interface functions.
If an existing project is making extensive use of dictionary contexts, the type can be explicitly overridden back to dict by employing custom request/response types:
class RequestWithDictContext(falcon.Request): context_type = dict class ResponseWithDictContext(falcon.Response): context_type = dict # ... api = falcon.API(request_type=RequestWithDictContext, response_type=ResponseWithDictContext)(). the body isn’t being returned, it’s probably a bug! Let us know so we can help.
By default, Falcon enables the
secure cookie attribute. Therefore, if you are testing your app over HTTP (instead of HTTPS), the client will not send the cookie in subsequent requests.
(See also the cookie documentation.)
In the
on_get() responder method for the resource, you can tell the user agent to download the file by setting the Content-Disposition header. Falcon includes the
downloadable_as property to make this easy:
resp.downloadable_as = 'report.pdf').
Falcon’s testing framework supports both
unittest and
pytest. In fact, the tutorial in the docs provides an excellent introduction to testing Falcon apps with pytest.
© 2019 by Falcon contributors
Licensed under the Apache License, Version 2.0. | https://docs.w3cub.com/falcon~2.0/user/faq | 2021-02-25T05:12:31 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.w3cub.com |
Status¶
The Server Status page has two tabs to summarize the current status of GeoServer. The Status tab provides a summary of server configuration parameters and run-time status. The modules tab provides the status of the various modules installed on the server. This page provides a useful diagnostic tool in a testing environment.
Server Status¶
Module Status¶
The modules tab provides a summary of the status of all installed modules in the running server.
System Status¶
System Status adds some extra information about the system in the GeoServer status page in a tab named
System Status.
Usage¶
The system information will be available in the GeoServer status page in the
System status tab (the following image only shows part of the available system information):
If the
System status tab is not present, it means that the plugin was not installed correctly. The
System status tab content will be refreshed automatically every second.
REST interface¶
It is possible to request the available system information (monitoring data) through GeoServer REST API. The supported formats are XML, JSON and HTML.
The available REST endpoints are:
/geoserver/rest/about/system-status /geoserver/rest/about/system-status.json /geoserver/rest/about/system-status.xml /geoserver/rest/about/system-status.html
The HTML representation of the system data is equal to the
System status }, (...) | https://docs.geoserver.org/latest/en/user/configuration/status.html | 2021-02-25T04:46:10 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['../_images/gui.png', '../_images/gui.png'], dtype=object)] | docs.geoserver.org |
[−][src]Module sequoia_openpgp::
packet:: aed
AEAD encrypted data packets.
An encryption container using Authenticated Encryption with Additional Data.
The AED packet is a new packet specified in Section 5.16 of RFC 4880bis. Its aim is to replace the SEIP packet, whose security has been partially compromised. SEIP's weaknesses includes its use of CFB mode (e.g., EFAIL-style CFB gadgets, see Section 5.3 of the EFAIL paper), its use of SHA-1 for integrity protection, and the ability to downgrade SEIP packets to much weaker SED packets.
Although the decision to use AEAD is uncontroversial, the design specified in RFC 4880bis is. According to RFC 5116, decrypted AEAD data can only be released for processing after its authenticity has been checked:
[The authenticated decryption operation] has only a single output, either a plaintext value P or a special symbol FAIL that indicates that the inputs are not authentic.
The controversy has to do with streaming, which OpenPGP has traditionally supported. Streaming a message means that the amount of data that needs to be buffered when processing a message is independent of the message's length.
At first glance, the AEAD mechanism in RFC 4880bis appears to support this mode of operation: instead of encrypting the whole message using AEAD, which would require buffering all of the plaintext when decrypting the message, the message is chunked, the individual chunks are linked together, and AEAD is used to encrypt and protect each individual chunk. Because the plaintext from an individual chunk can be integrity checked, an implementation only needs to buffer a chunk worth of data.
Unfortunately, RFC 4880bis allows chunk sizes that are, in practice, unbounded. Specifically, a chunk can be up to 4 exbibytes in size. Thus when encountering messages that can't be buffered, an OpenPGP implementation has a choice: it can either release data that has not been integrity checked and violate RFC 5116, or it can fail to process the message. As of 2020, GnuPG and RNP process unauthenticated plaintext. From a user perspective, it then appears that implementations that choose to follow RFC 5116 are impaired: "GnuPG can decrypt it," they think, "why can't Sequoia?" This creates pressure on other implementations to also behave insecurely.
Werner argues that AEAD is not about authenticating the data. That is the purpose of the signature. The reason to introduce AEAD is to get the benefits of more modern cryptography, and to be able to more quickly detect rare transmission errors. Our position is that an integrity check provides real protection: it can detect modified ciphertext. And, if we are going to stream, then this protection is essential as it protects the user from real, demonstrated attacks like EFAIL.
RFC 4880bis has not been finalized. So, it is still possible that the AEAD mechanism will change (which is why the AED packet is marked as experimental). Despite our concerns, because other OpenPGP implementations already emit the AEAD packet, we provide experimental support for it in Sequoia. | https://docs.sequoia-pgp.org/sequoia_openpgp/packet/aed/index.html | 2021-02-25T04:41:31 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.sequoia-pgp.org |
Add paths to linker search and installed rpath.
INSTALL_RPATH_USE_LINK_PATH is a boolean that if set to
True will append to the runtime search path (rpath) of installed binaries any directories outside the project that are in the linker search path or contain linked library files. The directories are appended after the value of the
INSTALL_RPATH target property.
This property is initialized by the value of the variable
CMAKE_INSTALL_RPATH_USE_LINK_PATH if it is set when a target is created.
© 2000–2020 Kitware, Inc. and Contributors
Licensed under the BSD 3-clause License. | https://docs.w3cub.com/cmake~3.19/prop_tgt/install_rpath_use_link_path | 2021-02-25T05:11:57 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.w3cub.com |
Invoice Export
To enable the electronic exchange of transactional documents between you and your business partners, Codejig ERP has implemented the e-invoicing solution. The solution is represented by the Export invoices functionality. It allows you to convert invoices generated in Codejig ERP into e-invoices in the XML or EDI formats that can be sent electronically to your business partners/customers via a forwarding service. To convert Codejig ERP invoices into e-invoices, the Export invoices functionality subjects data from invoice documents to a mapping process to adapt the invoices to the selected e-invoicing or EDI standard. Generated e-invoices can be automatically sent to a forwarding service integrated with Codejig ERP or you can manually upload them to a forwarding service of your choosing.
The service provider Codejig ERP is integrated with is Apix. Apix enables you to send e-invoices to your business partners directly send e-invoices directly from Codejig ERP to it, you have to manually upload them to the service. To be able to use any forwarding services, it is required that you and your business partner sign an agreement on the forwarding service with your service providers. Also, you have to enter information about your service provider in the system through the use of the Service provider directory and select it as your default provider in My company settings, under the Settings tab. Information indicated about your service provider will be automatically used by the system in the process of e-invoice generation to specify the data regarding the Sender and Provider parties of the e-invoicing process. Your service provider, then, maps the data from the uploaded files to the structure required by the receiver/customer and performs validation checks of the data and file structure. Then, the service provider mediates/routes e-invoices to the receiver/customer.
If you use Apix integration, you have to enter your registration information in the system via the Apix settings directory to ensure the seamless automatic data exchange with Apix and select it as your default service provider in My company settings, under the Settings tab. After uploading your e-invoices to Apix, you will receive an acknowledgment from the service stating that data sent is either correct or contains errors. The acknowledgment will be received as a notification displayed in the Response field; also, status assigned to e-invoices in Codejig ERP will be automatically changed. If you use any other service provider that is not custom intergated with Codejig ERP, Export invoices
- On the Codejig ERP Main menu, click the Sales tab.
- Under the Sales tab, click Export invoices.
More information
My Company Settings: Settings Tab
Apix Settings
Invoices to Customer | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427409924 | 2021-07-24T06:51:17 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.codejig.com |
1. Choose your platform
In order to connect Deko to your webstore, you need to subscribe to a platform that manages payments. Deko has developed plugins to support both ZIP and NewPay products.
This includes the following:
- WooCommerce (coming soon)
- Magento (coming soon)
To sign up for an account with a platform, go to the links provided above. The Platform will provide instructions on how to integrate to Deko.
2. Design a web store
In order to connect Deko to your business, you need a webstore either in development or production.
3. Obtain your Deko credentials
Follow the steps outlined in the platform user guides for adding your Deko credentials: ClientId and Client Secret.
4. Install the Deko plug-in
Follow the steps outlined in the platform user guides for adding your Deko plug-in
5. Run tests
Once installed, complete the following steps:
- Check to verify that the integration appears on your Checkout page, and is displayed correctly.
- Run test transactions, to ensure that once retail finance is approved via Deko, the transaction updates your shopping cart Order Management System to accurately reflect the transaction status.
6. Go Live
You are ready to go live once you have completed your tests and confirmed everything is working as expected.
Go-Live Checklist
- Get your production credentials: ClientId and Client Secret
- Add Deko branding and notices (e.g., add logos and FCA disclaimer to add to your page footer)
- Get access to Backoffice and ensure your staff are trained on how to use it.
- When you are ready to go live, please contact your Deko account manager and request switching your account live.
Updated 2 months ago | https://docs.dekopay.com/v5/docs/platform-integration | 2021-07-24T07:04:06 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.dekopay.com |
In order to cool down better my two mac mini servers, I have acquired two HUGE 200mm fans from Thermaltake. I did not necessary need the led lights but it turned out in server rack they look amazing.
It is a short one. Enjoy it and subscribe if you find my things interesting there.
Cheers! ;) | https://docs.gz.ro/thermaltake-pure-20-huge-and-quiet.html | 2021-07-24T07:23:32 | CC-MAIN-2021-31 | 1627046150134.86 | [array(['https://docs.gz.ro/sites/default/files/styles/thumbnail/public/pictures/picture-1-1324065756.jpg?itok=rS4jtWxd',
"root's picture root's picture"], dtype=object) ] | docs.gz.ro |
To execute a conversion with the SnowConvert CLI you have to have an active license. Currently, the licenses for the CLI are different than the UI, but if you already have a license for the UI you should be able to reuse the same license key. In the section below we show how to install a license key.
There are several Command Line Arguments documented below, but the main ones are
-i for the input folder and
-o for the output folder.
To install a license key just execute SnowConvert CLI program with the
-l argument and the license key.
$: snowct -l P-ABCD-12345-EFGHI
To install a license key just execute SnowConvert CLI program with just the
-l and no other arguments.
$: snowct -l
To migrate a folder just execute SnowConvert CLI program with the
-i <INPUT FOLDER> and
-o <OUTPUT FOLDER> arguments.
$: snowct -i ~/Documents/Workspace/Code -o ~/Documents/Workspace/Output
The path to the folder or file containing the input source code.
The path to the output folder where the converted code and reports will be stored.
Flag to indicate whether or not to generate only Assessments files. By default, it's set to FALSE.
The encoding code page number used for parsing the source files. We only accept encodings supported by .NET Core. Here are the ones supported at the moment:
Flag to indicate if user wants to comment nodes that have missing dependencies.
String value specifying the custom schema name to apply. If not specified, either PUBLIC or the original database name will be used. Example: DB1.MyCustomSchema.Table1.
String value specifying the custom database name to apply. Example: MyCustomDB.PUBLIC.Table1.
Integer value for the CHARACTER to Approximate Number transformation (Default:
10).
String value for the Default DATE format (Default:
"YYYY/MM/DD").
String value for the Default TIME format (Default:
"HH:MI:SS") .
String value for the Default TIMESTAMP format (Default:
"YYYY/MM/DD HH:MI:SS").
String value for the Default TIMEZONE format (Default:
"GMT-5").
Flag to indicate whether or not to change string cast to a trimming function when it might truncate data.
Flag to indicate whether or not the user wants to be warned when a string cast should be changed to a trimming function to avoid data truncation.
Flag to indicate whether or not to use the LEFT function in string cast that might truncate data.
Flag to indicate whether Delete All statements must be replaced to Truncate or not.
Shows the license information. If it's followed by a license key, it will attempt to download and install such a license. For example:
Showing license status
snowct -l
Installing a license
snowct -l 12345-ASDFG-67890
Show license terms information.
Display the help information.
Flag to indicate whether EWIs comments (Errors, Warnings, and Issues) will be generated on the converted code. Default is false.
Flag to indicate whether the SQL statements SELECT, INSERT, CREATE, DELETE, UPDATE, DROP, MERGE in Stored Procedures will be tagged on the converted code.!>
Learn more about how you can get access to the SnowConvert for Teradata Command Line Interface tool by filling out the form on our Snowflake Migrations Info page. | https://docs.mobilize.net/snowconvert/for-teradata/cli | 2021-07-24T08:18:01 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.mobilize.net |
Tag represents the area of device data identified by the specific address. Address identifies the data type and the dimension (scalar or array). Tags are organized in a hierarchy, which forms the objects structure of the OPC UA items, where the folder is a group of tags, and a variable is a tag. | https://docs.monokot.io/hc/en-us/articles/360034374552-Overview | 2021-07-24T08:38:13 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.monokot.io |
Powering your NetApp HCI system off or on
Contributors
Download PDF of this page
You can power off or power on your NetApp HCI system if you have a scheduled outage, need to perform hardware maintenance, or need to expand the system. Use the following tasks to power off or power on your NetApp HCI system as required.
You might need to power off your NetApp HCI system under a number of different circumstances, such as:
Scheduled outages
Chassis fan replacements
Firmware upgrades
Storage or compute resource expansion
The following is an overview of the tasks you need to complete to power off a NetApp HCI system:
Power off all virtual machines except the VMware vCenter server (vCSA).
Power off all ESXi servers except the one hosting the vCSA.
Power off the vCSA.
Power off the NetApp HCI storage system.
The following is an overview of the tasks you need to complete to power on a NetApp HCI system:
Power on all physical storage nodes.
Power on all physical compute nodes.
Power on the vCSA.
Verify the system and power on additional virtual machines. | https://docs.netapp.com/us-en/hci18/docs/concept_nde_hci_power_off_on.html | 2021-07-24T06:50:33 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.netapp.com |
DataONE infrastructure environments¶
In addition to the production environment that the end user sees when interacting with DataONE services, DataONE maintains several other independent environments for development and testing. These environments emulate the production environment and allow developing and testing of DataONE components without affecting the production environment.
Each environment is a set of Coordinating Nodes (CNs) along with a set of Member Nodes (MNs) that are registered to those CNs. Each environment maintain sets of content, formats, and DataONE identities independent of each other. By registering a MN to an environment, you enable content to synchronize with the CNs, and be replicated to other nodes in the same environment.
Since there are no connections between the environments, information registered in one environment - certificates, DataONE identities, and formats - cannot be carried over into another environment.
The environments are:
The Production environment is only used by completed MNs that hold production quality data.
If a MN is under development or if it is experimental in nature, for instance, if the purpose is to learn more about the DataONE infrastructure or if the MN will be populated with objects that may not be of production quality, one of the test environments should be used.
To register a Member Node into an environment, follow the steps in Node Registration. | https://dataone-operations.readthedocs.io/en/latest/MN/deployment/environments.html | 2021-07-24T08:48:26 | CC-MAIN-2021-31 | 1627046150134.86 | [] | dataone-operations.readthedocs.io |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.