content: string (lengths 0 to 557k)
url: string (lengths 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9 to 15)
segment: string (lengths 13 to 17)
image_urls: string (lengths 2 to 55.5k)
netloc: string (lengths 7 to 77)
DrawingObject Interface A base interface for drawing objects. Namespace: DevExpress.XtraRichEdit.API.Native Assembly: DevExpress.RichEdit.v20.2.Core.dll Remarks The DrawingObject interface is the base interface for the following objects: Shape - a shape embedded in a document. NestedShape - a shape that belongs to a group or drawing canvas.
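For illustration, a minimal C# sketch of how the drawing objects in a loaded document might be enumerated through this interface. The RichEditDocumentServer host, the file name, and the use of the document's Shapes collection are assumptions based on typical Rich Edit API usage, not statements from this page.

using DevExpress.XtraRichEdit;
using DevExpress.XtraRichEdit.API.Native;

class ShapeLister
{
    static void Main()
    {
        // Hypothetical host object; load a document and walk its shapes.
        using (var wordProcessor = new RichEditDocumentServer())
        {
            wordProcessor.LoadDocument("example.docx");
            Document document = wordProcessor.Document;

            // Each top-level item is a Shape; shapes inside groups or
            // drawing canvases are exposed as NestedShape objects.
            foreach (Shape shape in document.Shapes)
            {
                System.Console.WriteLine(shape.Name);
            }
        }
    }
}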
https://docs.devexpress.com/OfficeFileAPI/DevExpress.XtraRichEdit.API.Native.DrawingObject?v=20.2
2022-05-16T12:45:08
CC-MAIN-2022-21
1652662510117.12
[]
docs.devexpress.com
HTTP Tests - Introduction - Session / Authentication - Testing JSON APIs - Testing File Uploads - Available Assertions Introduction Laravel provides a very fluent API for making HTTP requests to your application and examining the output. For example, take a look at the test defined below: <?php namespace Tests\Feature; use Tests\TestCase; use Illuminate\Foundation\Testing\RefreshDatabase; use Illuminate\Foundation\Testing\WithoutMiddleware; class ExampleTest extends TestCase { /** * A basic test example. * * @return void */ public function testBasicTest() { $response = $this->get('/'); $response->assertStatus(200); } } The get method makes a GET request into the application, while the assertStatus method asserts that the returned response should have the given HTTP status code. In addition to this simple assertion, Laravel also contains a variety of assertions for inspecting the response headers, content, JSON structure, and more. Customizing Request Headers You may use the withHeaders method to customize the request's headers before it is sent to the application. This allows you to add any custom headers you would like to the request: <?php class ExampleTest extends TestCase { /** * A basic functional test example. * * @return void */ public function testBasicExample() { $response = $this->withHeaders([ 'X-Header' => 'Value', ])->json('POST', '/user', ['name' => 'Sally']); $response ->assertStatus(201) ->assertJson([ 'created' => true, ]); } } {tip} The CSRF middleware is automatically disabled when running tests. Session / Authentication Laravel provides several helpers for working with the session during HTTP testing. First, you may set the session data to a given array using the withSession method. This is useful for loading the session with data before issuing a request to your application: <?php class ExampleTest extends TestCase { public function testApplication() { $response = $this->withSession(['foo' => 'bar']) ->get('/'); } } One common use of the session is maintaining state for the authenticated user. The actingAs helper method provides a simple way to authenticate a given user as the current user. For example, you may use a model factory to generate and authenticate a user: <?php use App\User; class ExampleTest extends TestCase { public function testApplication() { $user = factory(User::class)->create(); $response = $this->actingAs($user) ->withSession(['foo' => 'bar']) ->get('/'); } } You may also specify which guard should be used to authenticate the given user by passing the guard name as the second argument to the actingAs method: $this->actingAs($user, 'api') Testing JSON APIs Laravel also provides several helpers for testing JSON APIs and their responses. For example, the json method may be used to make a POST request and assert that the expected data was returned: <?php class ExampleTest extends TestCase { /** * A basic functional test example. * * @return void */ public function testBasicExample() { $response = $this->json('POST', '/user', ['name' => 'Sally']); $response ->assertStatus(201) ->assertJson([ 'created' => true, ]); } } {tip} The assertJson method converts the response to an array and utilizes PHPUnit::assertArraySubset to verify that the given array exists within the JSON response returned by the application. So, if there are other properties in the JSON response, this test will still pass as long as the given fragment is present. Verifying An Exact JSON Match If you would like to verify that the given array is an exact match for the JSON returned by the application, you should use the assertExactJson method: <?php class ExampleTest extends TestCase { /** * A basic functional test example. * * @return void */ public function testBasicExample() { $response = $this->json('POST', '/user', ['name' => 'Sally']); $response ->assertStatus(201) ->assertExactJson([ 'created' => true, ]); } } Testing File Uploads The Illuminate\Http\UploadedFile class provides a fake method which may be used to generate dummy files or images for testing. This, combined with the Storage facade's fake method, greatly simplifies the testing of file uploads. 
For example, you may combine these two features to easily test an avatar upload form: <?php namespace Tests\Feature; use Tests\TestCase; use Illuminate\Http\UploadedFile; use Illuminate\Support\Facades\Storage; use Illuminate\Foundation\Testing\RefreshDatabase; use Illuminate\Foundation\Testing\WithoutMiddleware; class ExampleTest extends TestCase { public function testAvatarUpload() { Storage::fake('avatars'); $file = UploadedFile::fake()->image('avatar.jpg'); $response = $this->json('POST', '/avatar', [ 'avatar' => $file, ]); // Assert the file was stored... Storage::disk('avatars')->assertExists($file->hashName()); // Assert a file does not exist... Storage::disk('avatars')->assertMissing('missing.jpg'); } } Fake File Customization When creating files using the fake method, you may specify the width, height, and size of the image in order to better test your validation rules: UploadedFile::fake()->image('avatar.jpg', $width, $height)->size(100); In addition to creating images, you may create files of any other type using the create method: UploadedFile::fake()->create('document.pdf', $sizeInKilobytes); Available Assertions Response Assertions Laravel provides a variety of custom assertion methods for your PHPUnit tests. These assertions may be accessed on the response that is returned from the json, get, put, and delete test methods: assertCookie assertCookieExpired assertCookieNotExpired assertCookieMissing assertDontSee assertDontSeeText assertExactJson assertForbidden assertHeader assertHeaderMissing assertJson assertJsonCount assertJsonFragment assertJsonMissing assertJsonMissingExact assertJsonStructure assertJsonValidationErrors assertLocation assertNotFound assertOk assertPlainCookie assertRedirect assertSee assertSeeInOrder assertSeeText assertSeeTextInOrder assertSessionHas assertSessionHasAll assertSessionHasErrors assertSessionHasErrorsIn assertSessionHasNoErrors assertSessionMissing assertStatus assertSuccessful assertViewHas assertViewHasAll assertViewIs assertViewMissing assertCookie Assert that the response contains the given cookie: $response->assertCookie($cookieName, $value = null); assertCookieExpired Assert that the response contains the given cookie and it is expired: $response->assertCookieExpired($cookieName); assertCookieNotExpired Assert that the response contains the given cookie and it is not expired: $response->assertCookieNotExpired($cookieName); assertCookieMissing Assert that the response does not contain the given cookie: $response->assertCookieMissing($cookieName); assertDontSee Assert that the given string is not contained within the response: $response->assertDontSee($value); assertDontSeeText Assert that the given string is not contained within the response text: $response->assertDontSeeText($value); assertExactJson Assert that the response contains an exact match of the given JSON data: $response->assertExactJson(array $data); assertForbidden Assert that the response has a forbidden status code: $response->assertForbidden(); assertHeader Assert that the given header is present on the response: $response->assertHeader($headerName, $value = null); assertHeaderMissing Assert that the given header is not present on the response: $response->assertHeaderMissing($headerName); assertJson Assert that the response contains the given JSON data: $response->assertJson(array $data); assertJsonCount Assert that the response JSON has an array with the expected number of items at the given key: $response->assertJsonCount($count, $key = null); assertJsonFragment 
Assert that the response contains the given JSON fragment: $response->assertJsonFragment(array $data); assertJsonMissing Assert that the response does not contain the given JSON fragment: $response->assertJsonMissing(array $data); assertJsonMissingExact Assert that the response does not contain the exact JSON fragment: $response->assertJsonMissingExact(array $data); assertJsonStructure Assert that the response has a given JSON structure: $response->assertJsonStructure(array $structure); assertJsonValidationErrors Assert that the response has the given JSON validation errors for the given keys: $response->assertJsonValidationErrors($keys); assertLocation Assert that the response has the given URI value in the Location header: $response->assertLocation($uri); assertNotFound Assert that the response has a not found status code: $response->assertNotFound(); assertOk Assert that the response has a 200 status code: $response->assertOk(); assertPlainCookie Assert that the response contains the given cookie (unencrypted): $response->assertPlainCookie($cookieName, $value = null); assertRedirect Assert that the response is a redirect to a given URI: $response->assertRedirect($uri); assertSee Assert that the given string is contained within the response: $response->assertSee($value); assertSeeInOrder Assert that the given strings are contained in order within the response: $response->assertSeeInOrder(array $values); assertSeeText Assert that the given string is contained within the response text: $response->assertSeeText($value); assertSeeTextInOrder Assert that the given strings are contained in order within the response text: $response->assertSeeTextInOrder(array $values); assertSessionHas Assert that the session contains the given piece of data: $response->assertSessionHas($key, $value = null); assertSessionHasAll Assert that the session has a given list of values: $response->assertSessionHasAll(array $data); assertSessionHasErrors Assert that the session contains an error for the given field: $response->assertSessionHasErrors(array $keys, $format = null, $errorBag = 'default'); assertSessionHasErrorsIn Assert that the session has the given errors: $response->assertSessionHasErrorsIn($errorBag, $keys = [], $format = null); assertSessionHasNoErrors Assert that the session has no errors: $response->assertSessionHasNoErrors(); assertSessionMissing Assert that the session does not contain the given key: $response->assertSessionMissing($key); assertStatus Assert that the response has a given code: $response->assertStatus($code); assertSuccessful Assert that the response has a successful status code: $response->assertSuccessful(); assertViewHas Assert that the response view was given a piece of data: $response->assertViewHas($key, $value = null); assertViewHasAll Assert that the response view has a given list of data: $response->assertViewHasAll(array $data); assertViewIs Assert that the given view was returned by the route: $response->assertViewIs($value); assertViewMissing Assert that the response view is missing a piece of bound data: $response->assertViewMissing($key); Authentication Assertions Laravel also provides a variety of authentication related assertions for your PHPUnit tests:
https://docs.laravel-dojo.com/laravel/5.6/http-tests
2022-05-16T11:50:06
CC-MAIN-2022-21
1652662510117.12
[]
docs.laravel-dojo.com
Overview of Azure Cloud Services (classic) Important Cloud Services (classic) is now deprecated for new customers and will be retired on August 31st, 2024 for all customers. New deployments should use the new Azure Resource Manager based deployment model Azure Cloud Services (extended support). For example, a simple application might use just a single web role, serving a website. A more complex application might use a web role to handle incoming requests from users, and then pass those requests on to a worker role for processing. (This communication might use Azure Service Bus or Azure Queue storage.) As the preceding figure suggests, all the VMs in a single application run in the same cloud service. Users access the application through a single public IP address, with requests automatically load balanced across the application's VMs. The platform scales and deploys the VMs in an Azure Cloud Services application in a way that avoids a single point of hardware failure. Even though applications run in VMs, it's important to understand that Azure Cloud Services provides PaaS, not infrastructure as a service (IaaS). Here's one way to think about it. With IaaS, such as Azure Virtual Machines, you first create and configure the environment your application runs in. Then you deploy your application into this environment. You're responsible for managing much of this environment. With Azure Cloud Services, by contrast, you don't create virtual machines. Instead, you provide a configuration file that tells Azure how many of each you'd like, such as "three web role instances" and "two worker role instances." The platform then creates them for you. You still choose what size those backing VMs should be, but you don't explicitly create them yourself. If your application needs to handle a greater load, you can ask for more VMs, and Azure creates those instances. If the load decreases, you can shut down those instances and stop paying for them. An Azure Cloud Services application is typically made available to users via a two-step process. A developer first uploads the application to the platform's staging area. When the developer is ready to make the application live, they use the Azure portal to swap staging with production. This switch between staging and production can be done with no downtime, which lets a running application be upgraded to a new version without disturbing its users. Monitoring Azure Cloud Services also provides monitoring. Like Virtual Machines, it detects a failed physical server and restarts the VMs that were running on that server on a new machine. But Azure Cloud Services also detects failed VMs and applications, not just hardware failures. Unlike Virtual Machines, it has an agent inside each web and worker role, and so it's able to start new VMs and application instances when failures occur. The PaaS nature of Azure Cloud Services has other implications, too. One of the most important is that applications built on this technology should be written to run correctly when any web or worker role instance fails. To achieve this, an Azure Cloud Services application shouldn't maintain state in the file system of its own VMs. Unlike VMs created with Virtual Machines, writes made to Azure Cloud Services VMs aren't persistent. There's nothing like a Virtual Machines data disk. Instead, an Azure Cloud Services application should explicitly write all state to Azure SQL Database, blobs, tables, or some other external storage. 
Building applications this way makes them easier to scale and more resistant to failure, which are both important goals of Azure Cloud Services.
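To make the role-count idea above concrete, here is a minimal sketch of the kind of service configuration (.cscfg) file the platform consumes. The service and role names are placeholders, and a real file typically carries additional settings such as connection strings; treat this as an illustration of "three web role instances" and "two worker role instances" rather than a complete configuration.

<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical ServiceConfiguration.cscfg: Azure creates the role VMs
     based on the Instances count declared for each role. -->
<ServiceConfiguration serviceName="ContosoApp" osFamily="6" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="3" />  <!-- three web role instances -->
  </Role>
  <Role name="WorkerRole1">
    <Instances count="2" />  <!-- two worker role instances -->
  </Role>
</ServiceConfiguration>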
https://docs.microsoft.com/en-in/azure/cloud-services/cloud-services-choose-me
2022-05-16T13:43:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.microsoft.com
A combination of hint values that specify high-level intended usage patterns for the VisualElement. This property can only be set when the VisualElement is not yet part of a Panel. Once part of a Panel, this property becomes effectively read-only, and attempts to change it will throw an exception. The specification of proper UsageHints drives the system to make better decisions on how to process or accelerate certain operations based on the anticipated usage pattern. Note that those hints do not affect behavioral or visual results, but only affect the overall performance of the panel and the elements within. Generally, it is advised to always consider specifying the proper UsageHints, but keep in mind that some UsageHints may be internally ignored under certain conditions (e.g. due to hardware limitations on the target platform).
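As a concrete illustration, a short C# sketch of setting usage hints before the element joins a panel. The DynamicTransform hint and the UIDocument wiring here are illustrative assumptions, not recommendations from this page.

using UnityEngine;
using UnityEngine.UIElements;

public class UsageHintsExample : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;

        var marker = new VisualElement();

        // Set the hints while the element is not yet part of a panel;
        // once it is attached, the property is effectively read-only.
        marker.usageHints = UsageHints.DynamicTransform;

        root.Add(marker);
    }
}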
https://docs.unity3d.com/ja/2021.1/ScriptReference/UIElements.VisualElement-usageHints.html
2022-05-16T13:13:44
CC-MAIN-2022-21
1652662510117.12
[]
docs.unity3d.com
New Relic offers an integration for reporting your Amazon Athena data. Default polling information for the Athena integration: - New Relic polling interval: 5 minutes - Amazon CloudWatch data interval: 1 minute Find and use data To find your integration data, go to one.newrelic.com > Infrastructure > AWS and select an integration. You can query and explore your data using this event type: For more on how to use your data, see Understand and use integration data. Metric data This integration collects Amazon Athena data for WorkGroup.
https://docs.newrelic.com/kr/docs/infrastructure/amazon-integrations/aws-integrations-list/aws-athena-monitoring-integration/
2022-05-16T12:16:07
CC-MAIN-2022-21
1652662510117.12
[]
docs.newrelic.com
The SuppressFormsAuthenticationRedirectModule module prevents the ASP.NET built-in FormsAuthenticationModule from hijacking 401 requests and redirecting to a login page. Normally, this is the desired behavior if you are using a web browser and access an unauthorized page, but in the case of an API, we do not want that. This module uses a hack to get this done. It temporarily replaces the 401 error with a 402 to trick the FormsAuthenticationModule and then puts the 401 back before the request is finished. It only does this on the path for your API; the rest of the website will behave as normal. Note that there is a non-hack way to do this now, built into .NET 4.5, and I have commented the code as to what that is. When appropriate, a .NET 4.5 package could be released containing this updated code. To use this, first register the module in your web.config: <system.web> <httpModules> <add name="FormsAuthenticationDisposition" type="ServiceStack.SuppressFormsAuthenticationRedirectModule, ServiceStack" /> </httpModules> </system.web> <!-- Required for IIS 7.0 (and above?) --> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules> <add name="FormsAuthenticationDisposition" type="ServiceStack.SuppressFormsAuthenticationRedirectModule, ServiceStack" /> </modules> </system.webServer> Next, configure the module with where your API lives - defaults to /api, so in your AppHost Configure: public override void Configure(Funq.Container container) { SetConfig(new HostConfig { HandlerFactoryPath = "/yourapipath", }); //this is the configuration for Hijacking prevention SuppressFormsAuthenticationRedirectModule.PathToSupress = Config.HandlerFactoryPath; }
https://docs.servicestack.net/form-hijacking-prevention
2022-05-16T12:50:51
CC-MAIN-2022-21
1652662510117.12
[]
docs.servicestack.net
BucketId that you must create in your Thinger.io console. The thing.sleep(seconds) function will sleep the processor and all WiFi transmissions. After the sleep, the device will start again like a normal reboot. ⚠ DEEPSLEEP CONSIDERATIONS: - To allow the processor to wake up automatically, it is mandatory to solder the WKUP connection on the bottom of the board as shown in the "Other considerations" section. - During deepSleep mode, it is not possible to flash code. To change the program you will need to force a flash-mode boot as described in the Uploading firmware section. - Note that when the processor makes a hard reset, all dynamic variables will lose their values.
https://docs.thinger.io/others/hardware/climastick-devices
2022-05-16T12:08:43
CC-MAIN-2022-21
1652662510117.12
[]
docs.thinger.io
For IT and Ops As an Admin, you have a vital role in ThoughtSpot Cloud: managing user and group access, local authentication, and integration with SAML. These are the Admin tasks that you have to complete to make ThoughtSpot Cloud available for everyone in your organization. We take care of the rest.
https://docs.thoughtspot.com/cloud/latest/it-ops
2022-05-16T12:01:41
CC-MAIN-2022-21
1652662510117.12
[]
docs.thoughtspot.com
Create your First Bot yellow.ai provides its users who sign up with a business email a bot to explore the platform on a free-trial basis, with no strings attached! Interested? Follow this guide to create your first bot on the yellow.ai platform (conditions apply). Building a chatbot is fun. You start with a basic bot which is as simple as greeting Hello World! Then you move on to building a complex bot which can converse with users, answer FAQs, and generate leads for you. - Register yourself using your email ID or sign up using Google/GitHub/Microsoft. - If you sign up using an email ID, once signup is done you should receive an email from yellow.ai. In that email, click on - Post signup, go to the left sidebar and go to Projects. Now click on Create new project. You should see a popup now, click on - Fill in bot-related details such as the name of your bot, industry, and bot description. If you don't have them with you right now, worry not! You can always change them later. - Select the channel where you want to publish your bot. For now, let's just select Website. - On the configure page, submit How do you greet your customer?. This is the very first message (welcome message) your customers will see when they land on the bot, so try to come up with something interesting ;) Below that, add Top questions your customers ask you?. Post bot creation, you can find these questions under the FAQs section. - Now your first bot on the yellow.ai platform is ready! Click on Go to dashboard, then under the Try your bot section click on the Start button. - Here, you will receive a welcome message from the bot! Congratulations on starting your bot building journey on the yellow.ai platform. In case you get stuck or have any questions, don't hesitate to ask your queries. Not able to create a bot? If you're facing issues in creating a new bot on our platform, there can be several reasons for the same. You already have a bot: since all new accounts are allowed only one bot, you will not be able to create another bot until you upgrade your account. Reach out to [email protected] in this case. You are in the wrong subscription: if you had an account with us earlier, then you need to create a new subscription and can create a bot in that subscription. There is a technical glitch, and we request you to set up a call with us to resolve the same. Please feel free to schedule a Call with experts using this link: Just schedule a call as per your availability and we'd be happy to answer any of your bot-building related questions.
https://docs.yellow.ai/docs/cookbooks/getting_started/
2022-05-16T11:51:24
CC-MAIN-2022-21
1652662510117.12
[]
docs.yellow.ai
DevOps All administrative tasks are managed by running make tasks using the top-level Makefile in the project folder. Documentation Documentation is based on Sphinx and created and hosted by ReadTheDocs using the belbio user id. Dependabot We use Dependabot to keep the Python module requirements up to date. It uses the belbio user id. Code Quality We are using Code Climate for code quality assessments. We are using CodeCov for code test coverage assessments. Contributor Licensing Agreements All pull requests require signing the CLA Assistant Contributor's License Agreement.
https://bel-resources.readthedocs.io/en/latest/devops.html
2022-05-16T12:08:33
CC-MAIN-2022-21
1652662510117.12
[]
bel-resources.readthedocs.io
Equal (eq, =) Description You can apply this operation either as a Filter or Create column operation: This operation is case sensitive. Use the Equal - case insensitive (eqic) operation if you need to apply this operation ignoring case. How does it work in the search window? Select Filter / Create column in the search window toolbar, then select the Equal operation. You need to specify two arguments: If you use the Create column operation, the data type of the values in the new column is boolean (true or false). Example In the demo.ecommerce.data table, we want to detect events with status code 200. We will use the Create column operation to add a new Boolean column that shows true when our events have status code 200. We will enter status_code_200 as the column name. The arguments needed are: - Value - statusCode column - is equal to - Click the pencil icon and enter 200 Click Create column and you will see the following result: Click Filter and follow the same steps to filter events with status code 200. How does it work in LINQ? Use the operator where... to apply the Filter operation and as... to apply the Create column operation. These are the valid formats of the Equal operation: field1 = field2 eq(field1, field2) Examples You can copy the following LINQ scripts and try the above example on the demo.ecommerce.data table: from demo.ecommerce.data where statusCode = 200 or from demo.ecommerce.data where eq(statusCode, 200) And this is the same example using the Create column operation: from demo.ecommerce.data select statusCode = 200 as status_code_200 or from demo.ecommerce.data select eq(statusCode, 200) as status_code_200
https://docs.devo.com/confluence/ndt/v7.8.0/searching-data/building-a-query/operations-reference/order-group/equal-eq
2022-05-16T12:07:38
CC-MAIN-2022-21
1652662510117.12
[]
docs.devo.com
Operations log enhancements Important This content is archived and is not being updated. For the latest documentation, go to What's new and planned for Dynamics 365 Business Central. For the latest release plans, go to Dynamics 365 and Microsoft Power Platform release plans. Business value Better traceability of the admin operations. Feature details The Operations log in the Business Central admin center provides an overview of the admin operations in the relevant Business Central online environments, such as restoring, renaming, installing, and uninstalling apps, exporting databases, and moving environments between different Azure Active Directory (Azure AD) tenants. In this release wave, we're adding more operation logs to this view. Administrators will be able to see the following new operations: - Environment was created - Environment was deleted - Environment was copied - Environment properties were updated The administrators will also be able to truncate the log to remove old operations that are no longer relevant.
https://docs.microsoft.com/en-us/dynamics365-release-plan/2021wave2/smb/dynamics365-business-central/operations-log-enhancements
2022-05-16T13:32:04
CC-MAIN-2022-21
1652662510117.12
[]
docs.microsoft.com
Defines the rendering style of control points for PlanarFigure objects. #include <mitkPlanarFigureControlPointStyleProperty.h> Used by PlanarFigureMapper2D to determine which of several control point shapes to use. Currently this is basically the choice between squares and circles. If more options are implemented, this class should be enhanced. After construction, the default shape is a square. Definition at line 34 of file mitkPlanarFigureControlPointStyleProperty.h. Constructor. Sets the decoration type to the given value. If it is not valid, the representation is set to none. This function is overridden as protected so that the user may not add additional invalid types. Reimplemented from mitk::EnumerationProperty. Adds the standard enumeration types with corresponding strings.
https://docs.mitk.org/nightly/classmitk_1_1PlanarFigureControlPointStyleProperty.html
2022-05-16T12:01:33
CC-MAIN-2022-21
1652662510117.12
[]
docs.mitk.org
Select date ranges Choosing the date range to view or download data. To select the date range for the data you want to view or download, simply use the calendar function found in the upper-right corner of many pages in NS8. Select the date range you want by using one of the preset buttons or selecting dates on the calendar, and select Apply to complete the process.
https://docs.ns8.com/docs/selecting-date-ranges
2022-05-16T12:30:07
CC-MAIN-2022-21
1652662510117.12
[]
docs.ns8.com
Hosted Payment Page The Hosted Payment Page (HPP) gives your customers an easy way to submit a payment to your business by displaying a payment form as an iFrame on your checkout page or as a separate payment page. The HPP gives you a way to add secure payment features to your website while limiting your exposure to PCI regulations. The HPP documentation includes all of the information you will need to configure the HPP and integrate it as part of your website: - Implementation Overview - A high-level description of how the HPP works as part of your application. - Prerequisites - A list of requirements that you must have in place before implementing the HPP. - Configuration - A step-by-step guide detailing how to use the vPortal to configure the appearance and features of the HPP. - Integration - A detailed guide for adding the HPP to your web application. Features Vesta’s HPP includes the following features: - Customizable Payment Forms and Email Receipts - Ensure that the appearance of the HPP matches your branding by setting the logo and colors that the payment form displays. - Vesta’s Fraud Protection and Risk Management - Apply all of Vesta’s fraud protection features and receive the same Zero-Fraud Guarantee for every accepted transaction without any additional API requests. - Automated Secure Customer Authentication and Identity Challenges - Perform Secure Customer Authentication by automatically sending the transaction to 3DSecure authentication when required. If a transaction requires an identity challenge, the HPP automatically walks the customer through obtaining and entering a one-time passcode. - Apple Pay Support - Provide your customers with an additional way to pay for purchases by accepting Apple Pay without any additional integration steps. - Billing Address Collection and Automated Tax Retrieval - Support guest checkout by collecting billing address information directly in the payment form. If the customer changes the billing address during checkout, the HPP retrieves the new tax amount from a webhook that you configure in the Vesta Portal. - Installment Payments - Offer 3, 6, 9, or 12 month installment payment plans without any additional coding on your end. Vesta handles the recurring payment solution for you. - Deferred Payment Confirmations - Wait to charge your customer’s card until after the order has shipped. This can improve customer satisfaction, support backorders, and give you time to ensure that the transaction was not fraudulent before filling the order. For deferred confirmations, the transaction will remain open for up to 5 days. You must confirm the transaction using the Disposition endpoint of the Enterprise Acquiring REST API, or on the transactions lookup screen in the Vesta portal. - Localization - Set the display language of the payment page using the LocaleCode field in the body of your request to the OrderCreate endpoint. Implementation Overview The steps below describe the checkout process using HPP: When your customer completes shopping and is ready to check out, display a web form to collect any additional information that is required by the OrderCreate endpoint. When your customer is ready to pay, send a POST request to the OrderCreate endpoint of the HPP REST API with the customer details in the body of the request. The OrderCreate response includes a URL for the payment form. Redirect your customer to the payment form URL or display the URL in an iFrame on your checkout page. 
Note: In order to offer Apple Pay as a payment option, you must redirect your customer to the URL. The Apple Pay option cannot be offered when you display the payment page in an iFrame. Your customer enters payment details, and updates the billing address, if needed. Vesta assesses the transaction for risk and submits the payment for processing. If the customer edited the billing address, Vesta retrieves the updated tax amount from the webhook URL that you set and updates the tax amount to be charged. When Vesta finishes processing the transaction, Vesta POSTs the results to the Order Status webhook URL that you specify during setup, and, if needed, redirects your customer to a confirmation page that you specify. Display a page informing your customer of the results of the transaction and handle order fulfillment as normal. See the Configuration and Integration pages for details about how to incorporate the HPP into your application. Prerequisites Contact your Integration Specialist to set up a walkthrough of the Hosted Payment Page and to enable it for your account. In order to use HPP in your application, you must ensure that the following items are in place before proceeding with the integration: - Enterprise Acquiring Account - HPP requires an enterprise acquiring account so that Vesta can submit transactions for processing on your behalf. - API Password - You must send your API password with every request that you make to the Vesta APIs. Obtain your API password by navigating to the Settings page of the Vesta Portal and selecting API Keys from the Developer Settings pane.
https://docs.vesta.io/enterprise-acquiring/hpp/
2022-05-16T11:01:56
CC-MAIN-2022-21
1652662510117.12
[]
docs.vesta.io
Product feature activations can be done automatically at the site collection level when the product is installed. There are several reasons why you may need to manually activate (or deactivate) Bamboo product features: - You opted out of automatic feature activation during installation. - You add a new site collection and want to add Bamboo features to it. - You want to remove Bamboo product features from an existing site collection. To manually activate (or deactivate) Bamboo product features, follow the instructions below. From the Site Actions menu, select Site Settings. The Site Settings page opens with a variety of links. Click on the link labeled "Site Collection Features". NOTE: Only Site Collection Administrators can see the Site Collection Administration section in Site Settings. Site Collection Administrators are assigned in SharePoint Central Administration on the SharePoint server. If you are not a site collection administrator, but you have Full Control permission on your site, you will see Site Features on the Site Settings page. But that should not be confused with "Site Collection Features", where a site collection administrator has more control over your site collection as a whole. The page opens showing active and inactive products; if you have many Bamboo products, you can go through each and click the Activate button to activate the product for the entire site collection. In the example below, one of our product features has not been activated, and once the site admin clicks the Activate button, the product feature will be available within the site collection. You only have to do this once; unless another site collection administrator changes the feature setting to Deactivated, your products remain activated from now on. Once the feature is activated, the button appears to the right of the feature description; your button will reflect the color scheme of your site.
https://docs.bamboosolutions.com/document/manually_activating_bamboo_web_parts/
2022-05-16T11:25:20
CC-MAIN-2022-21
1652662510117.12
[]
docs.bamboosolutions.com
Ensure that Cakey Bot has the Manage Roles permission and that the Cakey Bot's role is above the roles it is trying to assign. Persistent roles are assigned with the !persrole <user> <role> command. If a user is muted (for example via !mute <user> <reason>), they will NOT have the mute role added as a persistent role. You will have to run the persistent role command separately OR you can enable "Persistent Mutes" on the moderation section of the web dashboard to have mutes added as persistent roles automatically.
https://docs.cakeybot.app/tools-and-utilities/persistent-roles
2022-05-16T11:39:52
CC-MAIN-2022-21
1652662510117.12
[]
docs.cakeybot.app
XForms (Legacy) This topic links to Optimizely Forms (CMS 12) and the (deprecated) XForms (CMS 11). Note XForms applies to CMS versions 10 and 11. XForms are deprecated. Use Optimizely Forms instead. While existing solutions will continue to work, you should not build new solutions on this API. It will be phased out in the future.
https://docs.developers.optimizely.com/content-cloud/v12.0.0-content-cloud/docs/xforms-legacy-functionality
2022-05-16T12:09:15
CC-MAIN-2022-21
1652662510117.12
[]
docs.developers.optimizely.com
A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. You installed the OpenShift CLI (oc). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Create a project called hello-openshift by running the following command: $ oc new-project hello-openshift Create a pod in the project by running the following command: $ oc create -f Create a service called hello-openshift by running the following command: $ oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: $ oc expose svc hello-openshift If you examine the resulting Route resource, it should look similar to the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> (1) port: targetPort: 8080 to: kind: Service name: hello-openshift Sometimes applications deployed through OKD require sticky sessions. OKD provides sticky sessions, which enable stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. You can set a specified cookie name by annotating the route: $ oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie: $ oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: $ ROUTE_NAME=$(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. Save the cookie, and then access the route: $ curl $ROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the previous command when connecting to the route: $ curl $ROUTE_NAME -k -b /tmp/cookie_jar Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. However, this depends on the router implementation. An unsecured route with a path: apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: path: "/test" (1) to: kind: Service name: service-name The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days). The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d). 
apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms (1) ... metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / (1) ... Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The rewriting behavior depends on the combination of spec.path, the request path, and the rewrite target. Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. ... Some ecosystem components have an integration with Ingress resources but not with Route resources. To cover this case, OKD automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted. Define an Ingress object in the OKD console or by entering the oc create command: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" (1) spec: rules: - host: (2) paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - secretName: example-com-tls-certificate $ oc apply -f ingress.yaml List your routes: $ oc get routes The result includes an autogenerated route whose name starts with frontend-: NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq frontend 443 reencrypt/Redirect None If you inspect this route, it looks like this: apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: path: / port: targetPort: tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend
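The timeout and IP whitelist annotations shown above can also be applied to an existing route from the command line, in the same way as the cookie-name annotation earlier on this page; the route name and the values below are placeholders.

$ oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=2s
$ oc annotate route <route_name> --overwrite haproxy.router.openshift.io/ip_whitelist="192.168.1.10"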
https://docs.okd.io/4.10/networking/routes/route-configuration.html
2022-05-16T11:13:42
CC-MAIN-2022-21
1652662510117.12
[]
docs.okd.io
Key features of DataFlow Native connectors Support for dozens of the most common databases, data warehouses, file sources, and applications. Point-and-Click Visual guided experience. No SQL or coding skills required. Granular selection Select only the table columns you want to load. No need to move entire data sets. Load incremental data Option to add only new data to existing tables. Or you can overwrite existing tables completely. Data mapping Flexibly map columns from the data source to columns in ThoughtSpot’s in-memory data store. Sync scheduling Define granular data sync schedules: monthly, weekly, daily, down to hourly intervals. TQL interface Run custom commands in an embedded TQL editor. Create database objects, specify conditions, and validate data. Alerts and monitoring Monitor data sync history, view execution logs, and get alerts when problems must be addressed.
https://docs.thoughtspot.com/software/6.2/dataflow-key-features
2022-05-16T11:04:20
CC-MAIN-2022-21
1652662510117.12
[]
docs.thoughtspot.com
Asynchronous publisher example The following example implements a publisher that will respond to RPC commands sent from RabbitMQ and uses delivery confirmations. It will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ closes the channel. While it may look intimidating, each method is very short and represents an individual action that a publisher can do. Asynchronous Publisher Example
https://pika.readthedocs.io/en/latest/examples/asynchronous_publisher_example.html
2022-05-16T10:52:04
CC-MAIN-2022-21
1652662510117.12
[]
pika.readthedocs.io
Every Datadog Agent collection reports a heartbeat called datadog.agent.up with a status UP. You can monitor this heartbeat across one or more hosts. Select your host by name or tag(s). Providing a tag monitors every host that has that tag or tag combination. Choose between a Check Alert or a Cluster Alert. Check Alert: An alert is triggered when a host stops reporting. Select the no-data timeframe: If the heartbeat stops reporting for more than the number of minutes you have selected, you are notified. Cluster Alert: An alert is triggered when a percentage of hosts stop reporting. Decide whether or not to cluster your hosts according to a tag. Select the alerting/warning threshold percentages for your host monitor. Select the no-data timeframe: If the heartbeat stops reporting for more than the number of minutes you have selected for the chosen percentage of hosts in the selected cluster, you are notified. Configure your notification options: Refer to the Notifications dedicated documentation page for a detailed walkthrough of the common notification options.
https://docs.datadoghq.com/monitors/monitor_types/host/
2019-06-16T00:58:25
CC-MAIN-2019-26
1560627997508.21
[]
docs.datadoghq.com
UpdateBatchPrediction Updates the BatchPredictionName of a BatchPrediction. You can use the GetBatchPrediction operation to view the contents of the updated data element. Request Syntax { "BatchPredictionId": " string", "BatchPredictionName": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - BatchPredictionId The ID assigned to the BatchPrediction during creation. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9_.-]+ Required: Yes - BatchPredictionName A new user-supplied name or description of the BatchPrediction. Type: String Length Constraints: Maximum length of 1024. Pattern: .*\S.*|^$ Required: Yes Response Syntax { "BatchPredictionId": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - BatchPredictionId The ID assigned to the BatchPrediction during creation. This value should be identical to the value of the BatchPredictionId in the request. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9_.-]+ Errors For information about the errors that are common to all actions, see Common Errors. - InternalServerException An error on the server occurred when trying to process a request. HTTP Status Code: 500 - InvalidInputException An error on the client occurred. Typically, the cause is an invalid input value. HTTP Status Code: 400 - ResourceNotFoundException A specified resource cannot be located. HTTP Status Code: 400 Example The following is a sample request and response of the UpdateBatchPrediction operation. Sample Request { "BatchPredictionId": "bp-exampleBatchPredictionId", "BatchPredictionName": "bp-exampleBatchPredictionName" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: <RequestId> Content-Type: application/x-amz-json-1.1 Content-Length: <PayloadSizeBytes> Date: <Date> {"BatchPredictionId": "bp-exampleBatchPredictionId"} See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
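As a usage sketch, the same rename issued from the AWS CLI; the parameter names below mirror the request fields, but verify them against your CLI version before relying on them.

# Hypothetical invocation: rename an existing batch prediction.
aws machinelearning update-batch-prediction \
    --batch-prediction-id "bp-exampleBatchPredictionId" \
    --batch-prediction-name "bp-exampleBatchPredictionName"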
https://docs.aws.amazon.com/machine-learning/latest/APIReference/API_UpdateBatchPrediction.html
2019-06-16T01:21:43
CC-MAIN-2019-26
1560627997508.21
[]
docs.aws.amazon.com
This theme uses the PhotoBlocks free gallery plugin to allow you to display your travel images on the homepage of your website. Here is the demo of the gallery you can display on the homepage. You need to create an image gallery before configuring it on the homepage of your website.
https://docs.blossomthemes.com/docs/blossom-travel-pro/homepage-settings/how-to-configure-gallery-section/
2019-06-16T01:50:03
CC-MAIN-2019-26
1560627997508.21
[array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Gallery-section-demo-blossom-travel-pro.png', 'Gallery section demo blossom travel pro'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Add-Images.png', 'Add Images'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Click-on-Publish-to-add-images.png', 'Click on Publish to add images'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Copy-gallery-shortcode.png', 'Copy gallery short code'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Select-custom-html-wwdget.png', 'Select custom html widget'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Configure-gallery-section-blossom-travel-pro.png', None], dtype=object) ]
docs.blossomthemes.com
Introduction The Data Transfer Node (DTN) at Rice facilitates the movement of data into and out of the "/work" shared filesystem available on DAVinCI, NOTS, and Blue Biou, and the scratch filesystems available on DAVinCI "/dascratch" and Blue Biou "/gpfs-biou". DTN2 facilitates the movement of data into and out of Blue Gene/Q "/bgqscratch" and PowerOmics "/poscratch". Both DTN and DTN2 provide massively parallel file transfers to facilitate moving large data sets efficiently. You will be able to move data between participating institutions available via Globus, including between DTN and DTN2. Additionally, if you install and configure the Globus Connect software, you will be able to transfer files using your local computer as an Endpoint. Prerequisites for Using This System You must have a cluster account. Your Rice netID must be registered with Globus.org. You must read the relevant quick start guides on Using Globus to transfer files. You will utilize the Endpoints named: rice#dtn for "/work", "/dascratch", and "/gpfs-biou"; rice#dtn2 for "/bgqscratch" and "/poscratch". Your Path destination or source will be in either the "/work" shared filesystem available only on DAVinCI and Blue Biou or the "scratch" shared filesystems available on DAVinCI, Blue Gene/Q, Blue Biou, and PowerOmics. Minimally, to receive or send data you will need to pick a subdirectory to which you have read and write access. For "/work" this will be a subdirectory you create under your sponsor's netID directory within "/work". When using the respective "scratch" directory, use a subdirectory you have created. Use "/" for your Path to see all of the filesystems currently available.
https://docs.rice.edu/confluence/pages/viewpage.action?pageId=53149842
2019-06-16T01:54:44
CC-MAIN-2019-26
1560627997508.21
[]
docs.rice.edu
Uses a 3D grid of interpolated light probes. A Light Probe Proxy Volume component, which may reside on the same game object or on another game object, is required. In order to use a Light Probe Proxy Volume component which resides on another game object, you must use the Proxy Volume Override property, where you can specify the source game object. Surface shaders use the information associated with the proxy volume automatically. To use the proxy volume information in your custom shaders, you can use the ShadeSHPerPixel function in your pixel shader. See Also: Light Probes, Light Probe Proxy Volume.
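A short C# sketch of how a renderer might be pointed at a proxy volume. The property names follow the standard Renderer API, while the specific object wiring is an illustrative assumption.

using UnityEngine;
using UnityEngine.Rendering;

public class ProxyVolumeSetup : MonoBehaviour
{
    // Game object that carries the Light Probe Proxy Volume component.
    public GameObject proxyVolumeObject;

    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();

        // Sample lighting from a 3D grid of interpolated light probes.
        meshRenderer.lightProbeUsage = LightProbeUsage.UseProxyVolume;

        // Proxy Volume Override: use the component on another game object.
        meshRenderer.lightProbeProxyVolumeOverride = proxyVolumeObject;
    }
}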
https://docs.unity3d.com/kr/current/ScriptReference/Rendering.LightProbeUsage.UseProxyVolume.html
2019-06-16T00:43:21
CC-MAIN-2019-26
1560627997508.21
[]
docs.unity3d.com
FasterPay Shopify plugin FasterPay partners with Shopify to allow merchants anywhere in the world to start taking payments. Prerequisites - In order to use the FasterPay Shopify Plugin, you need to have an active Shopify account. If you do not have one, we recommend you create one now. - You’ll also need a FasterPay account. You can register here: FasterPay SignUp. Step 1: Set up your shop You can start setting up your shop using the instructions from Shopify’s getting started tutorial. Step 2: Install the FasterPay Shopify plugin - After you log in to your shop, you will be prompted to install the plugin. Step 3: Configure the plugin - After the plugin is installed, you should be redirected to the following page. - To start testing instantly on your Shopify store, enable Test Mode in your FasterPay Business Area. How to enable Test Mode. - Go to the Integration tab on the left of your account and here are your Test integration keys. Enter the Test Private and Public Keys in the respective fields in the above screen and tick the “Use test mode” check. - Click on Save to confirm the settings and you are now ready to test your integration for FasterPay. Step 4: Configure pingbacks in FasterPay Once you have configured the FasterPay plugin, you have to update the pingback URL in your Business Model Settings. For new accounts, Business Model settings can be enabled by disabling Test Mode. The following URLs should be used in order to receive the pingbacks in the test and live environments. Live mode: Test mode: Step Overview Page. Enable your Store with FasterPay Live keys To set your store live for accepting payments, simply go to your FasterPay Business Area, find the Test Mode switcher in the top right corner, and disable it. Grab the keys from the Integration > API Configuration page in the Business Area and update the keys in your Shopify store as indicated in Step 3. Make sure you uncheck the “Use Test Mode” check. Once these steps are complete, you can now start processing live payments on your Shopify store. Payment Flow Summary Viewing Cart Details Add Shipping Method Select Payment Method FasterPay Checkout Page FasterPay Transaction Successful
https://docs.fasterpay.com/integration/plugins/shopify
2019-06-16T01:39:06
CC-MAIN-2019-26
1560627997508.21
[array(['/textures/pic/plugins/shopify/shopify-login.png', 'Shopify Login Screen'], dtype=object) array(['/textures/pic/plugins/shopify/install.png', 'FasterPay Shopify Payment Gateway install screen'], dtype=object) array(['/textures/pic/plugins/shopify/shopify-config.png', 'FasterPay Shopify Payment Gateway Configuration screen'], dtype=object) array(['/textures/pic/plugins/shopify/fp-businessmodel-config.png', 'FasterPay Shopify Payment Gateway Configuration screen'], dtype=object) array(['/textures/pic/plugins/fp-account-active.png', 'FasterPay Account Active screen'], dtype=object) array(['/textures/pic/plugins/shopify/step1.png', 'FasterPay Shopify Viewing Cart Details'], dtype=object) array(['/textures/pic/plugins/shopify/step2.png', 'FasterPay Shopify Checkout'], dtype=object) array(['/textures/pic/plugins/shopify/step3.png', 'FasterPay Shopify Add Shipping method'], dtype=object) array(['/textures/pic/plugins/shopify/step4.png', 'FasterPay Shopify Select Payment Method'], dtype=object) array(['/textures/pic/plugins/shopify/step5.png', 'FasterPay Checkout Page'], dtype=object) array(['/textures/pic/plugins/shopify/step6.png', 'FasterPay Transaction Successful'], dtype=object)]
docs.fasterpay.com
This section documents the Server tab, which has the following subtabs: - Scheduled Jobs - Active Jobs - Importing Jobs - Deduping Jobs - Recent Jobs - Storage - Reports - Advanced Whenever you make changes, remember to save them. (See Activating Changes.) Scheduled Jobs subtab Active Jobs subtab Importing Jobs subtab Deduping Jobs subtab Recent Jobs subtab Storage subtab The Storage subtab displays information about your UCAR (unique content-addressable repository) garbage collection system. For clients using deduplication, the UCAR system runs a garbage collection process every day to find and purge any files that are no longer referenced. Some ways that data can become unreferenced "garbage" are when clients are deleted without their jobs being purged or when old jobs were not removed completely. It is recommended to run garbage collection after deleting jobs to ensure the data is cleared completely. This is similar to jobs with unreferenced data. (See Unreferenced Data.) The garbage collection will be deferred for up to 12 hours before it terminates the process. If the process times out, it will be retried at its next regular time. There is one exception: in the event the system is running low on space, the garbage collection will proceed whether there are jobs deduplicating or not. The Storage screen has the following settings: Also, by scrolling down the same page you may find the Block Deduplication Statistics (raw) section. It will look similar to the one that you see below: Reports subtab The Reports subtab displays daily and weekly summary reports. The Daily Report Preview section is a simple summary of the results for the jobs run for the day. The Weekly Report Preview section is a simple summary of the results for the jobs run for the week. You can change the day and time that each report is sent. Advanced subtab
https://docs.infrascale.com/cfa-management-console-server-tab.html
2019-06-16T01:24:57
CC-MAIN-2019-26
1560627997508.21
[array(['images/cfa-management-console/server-tab-imgs/image7.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image7.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image2.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image2.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image5.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image5.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image6.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image6.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image1.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image1.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image8.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image8.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image9.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image9.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image10.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image10.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image11.png', None], dtype=object) array(['images/cfa-management-console/server-tab-imgs/image11.png', None], dtype=object) ]
docs.infrascale.com
This section describes the tasks that need to be completed to publish the documentation for an OpenStack release. It is intended to be used by the PTL and release managers. The current release manager for Documentation is listed on the Cross Project Liaisons wiki page. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/doc-contrib-guide/release.html
2019-06-16T01:31:57
CC-MAIN-2019-26
1560627997508.21
[]
docs.openstack.org
Define server classes A server class defines a deployment configuration shared by a group of deployment clients. It defines both the criteria for being a member of the class and the set of content to deploy to members of the class. This content (encapsulated as "deployment apps") can consist of Splunk apps,. In addition to defining attributes and content for specific server classes, you can also define attributes that pertain just to a single app within a server class. A deployment client has its own configuration, defined in deploymentclient.conf. The information in deploymentclient.conf tells the deployment client where to go to get the content that the server class it belongs to says it should have. The next section provides a reference for the server class configuration settings. You might want to read it while referring to the set of simple example configurations presented later in this topic. In addition, there are several longer and more complete examples presented later in this manual, including "Deploy several forwarders". What you can configure for a server class You can specify settings for a global server class, as well as for individual server classes or apps within server classes. There are three levels of stanzas to enable this: Important: When defining app names, you should be aware of the rules of configuration file precedence, as described in the topic "Configuration file precedence" in the Admin manual. In particular, note that app directories are evaluated by ASCII sort order. For example, if you set an attribute/value pair whatever=1 in the file x.conf in an app directory named "A", the setting in app A overrides the setting whatever=0 in x.conf in an app named "B", etc. For details, see the subtopic "How app names affect precedence". Attributes in more specific stanzas override less specific stanzas. Therefore, an attribute defined in a [serverClass:<serverClassName>] stanza will override the same attribute defined in [global]. The attributes are definable for each stanza level, unless otherwise indicated. Here are the most common ones: Note: The most accurate and up-to-date. Examples]!
https://docs.splunk.com/Documentation/Splunk/4.3.7/Deploy/Definedeploymentclasses
2019-06-16T01:02:23
CC-MAIN-2019-26
1560627997508.21
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Since Sanic handlers are simple Python functions, you can apply decorators to them in a similar manner to Flask. A typical use case is when you want some code to run before a handler’s code is executed. Let’s say you want to check that a user is authorized to access a particular endpoint. You can create a decorator that wraps a handler function, checks a request if the client is authorized to access a resource, and sends the appropriate response. from functools import wraps from sanic.response import json def authorized(): def decorator(f): @wraps(f) async def decorated_function(request, *args, **kwargs): # run some method that checks the request # for the client's authorization status is_authorized = check_request_for_authorization_status(request) if is_authorized: # the user is authorized. # run the handler method and return the response response = await f(request, *args, **kwargs) return response else: # the user is not authorized. return json({'status': 'not_authorized'}, 403) return decorated_function return decorator @app.route("/") @authorized() async def test(request): return json({'status': 'authorized'})
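The same factory pattern extends to decorators that take their own arguments. The sketch below is illustrative only — require_header, the header name, and the /secure route are made-up names for this example, not part of Sanic — but it follows the exact structure shown above:

from functools import wraps
from sanic.response import json

def require_header(header_name):
    # Build a decorator that rejects requests missing the given header.
    def decorator(f):
        @wraps(f)
        async def decorated_function(request, *args, **kwargs):
            if header_name not in request.headers:
                return json({'status': 'missing_header', 'header': header_name}, 400)
            # header present: run the wrapped handler as usual
            return await f(request, *args, **kwargs)
        return decorated_function
    return decorator

@app.route("/secure")
@require_header("X-API-Key")
async def secure(request):
    return json({'status': 'ok'})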
https://sanic.readthedocs.io/en/latest/sanic/decorators.html
2019-06-16T00:28:22
CC-MAIN-2019-26
1560627997508.21
[]
sanic.readthedocs.io
Developing a MVC Component/Introduction (Redirected from Talk:Developing a MVC Component/3.1/Introduction)
https://docs.joomla.org/Talk:Developing_a_MVC_Component/3.1/Introduction
2017-03-23T04:16:59
CC-MAIN-2017-13
1490218186774.43
[]
docs.joomla.org
Application Icon Creating an Icon for a Xamarin.Mac application last updated: 2017-03 This article covers creating the images required for a Xamarin.Mac application's Icon, bundling the images into a `.icns` file, and including the Icon in the Xamarin.Mac Project. Contents This article will cover the following topics in detail: - Application Icon - Designing the Icon - Required Image Sizes and Filenames - Packaging the Icon Resources - Using the Icon Overview When working with C# and .NET in a Xamarin.Mac application, a developer has access to the same Image and Icon tools that a developer working in Objective-C and Xcode does. A great Icon should convey the main purpose of a Xamarin.Mac app and hint at the experience the user should expect when using the app. This article covers all of the steps necessary to create the Image Assets required for an Icon, packaging those assets into an AppIcons.appiconset file and consuming that file in a Xamarin.Mac app. Application Icon A great Icon should convey the main purpose of a Xamarin.Mac app and hint at the experience the user should expect when using the app. Every macOS app must include several sizes of its Icon for display in the Finder, Dock, Launchpad, and other locations throughout the computer. Designing the Icon Apple suggests the following tips when designing an application's icon: - Consider giving the icon a realistic and unique shape. - If the macOS app has an iOS counterpart, don't reuse the iOS app's icon. - Use universal imagery that people can easily recognize. - Strive for simplicity. - Use color and shadow sparingly to help the icon tell the app's story. - Avoid mixing actual text with greeked text or lines to suggest text. - Create an idealized version of the icon's subject rather than using an actual photo. - Avoid using macOS UI elements in the icons. - Don't use replicas of Apple icons in the icons. Please read the App Icon Gallery and Designing App Icons sections of Apple's OS X Human Interface Guidelines before designing a Xamarin.Mac app's icon. Required Image Sizes and Filenames Like any other Image Resource that the developer is going to use in a Xamarin.Mac app, the app Icon needs to be provided in both Standard and Retina Resolution versions. Again, like any other image, use the @2x format when naming the Icon files: - Standard-Resolution - ImageName.filename-extension (Example: icon_512x512.png) - High-Resolution - [email protected] (Example: [email protected]) For example, to supply the 512 x 512 version of the app's icon, the files would be named icon_512x512.png and [email protected]. To ensure that the icon looks great in all the places that users see it, provide resources in the sizes listed below: For more information, see Apple's Provide High-Resolution Versions of All App Graphics Resources documentation. Packaging the Icon Resources With the icon designed and saved out to the required file sizes and names, Xamarin Studio makes it easy to assign them to the image assets for use in Xamarin.Mac. Do the following: - In the Solution Explorer, open Assets.xcassets > AppIcons.appiconset: - For each icon size required, click the icon and select the corresponding image file that was created above: - Save your changes. Using the Icon Once the AppIcons.appiconset file has been built, it will need to be assigned to the Xamarin.Mac Project in Xamarin Studio.
Do the following: - Double-click the Info.plist in the Solution Explorer to open the Project Options. - In the Mac OS X Application Target section, click App Icons and select the AppIcons.appiconset file: - Save the changes. When the app is run, the new icon will be displayed in the dock: Summary This article has taken a detailed look at working with the Images required to create a macOS app Icon, packaging an Icon, and including an Icon in a Xamarin.Mac app.
https://docs.mono-android.net/guides/mac/deployment,_testing,_and_metrics/app-icon/
2017-03-23T04:18:02
CC-MAIN-2017-13
1490218186774.43
[]
docs.mono-android.net
New in version 1.2. procedure Run (Suite : in out Framework.Test_Suite'Class); Run the suite and write the results to a file in XML format. procedure Run (Suite : Framework.Test_Suite_Access); Run the suite and write the results to a file. The routine is identical to the Run (Suite : in out Framework.Test_Suite’Class) procedure, but takes an access parameter to a test suite.
http://docs.ahven-framework.com/2.6/api-ahven-xml_runner.html
2017-03-23T04:14:43
CC-MAIN-2017-13
1490218186774.43
[]
docs.ahven-framework.com
Part 3 - Purchasing Consumable Products - PDF for offline use - Let us know how you feel about this 0/250 Consumable products are the simplest to implement, since there is no ‘restore’ requirement. They are useful for products like in-game currency or a single-use piece of functionality. Users can re-purchase consumable products over-and-over again. Built-In Product Delivery The sample code accompanying this document demonstrates built-in products – the Product IDs are hardcoded into the application because they are tightly coupled to the code that ‘unlocks’ the feature after payment. The purchasing process can be visualized like this: The basic workflow is:. The application enables the product (by updating NSUserDefaultsor some other mechanism), and then calls StoreKit’s FinishTransaction. There is another type of workflow – Server-Delivered Products – that is discussed later in the document (see the section Receipt Verification and Server-Delivered Products). Consumable Products Sample The InAppPurchaseSample code contains a project called Consumables that implements a basic ‘in-game currency’ (called “monkey credits”). The sample shows how to implement two in-app purchase products to allow the user to buy as many “monkey credits” as they wish – in a real application there would also be some way of spending them! The application is shown in these screenshots – each purchase adds more “monkey credits” to the user’s balance: The interactions between custom classes, StoreKit and the App Store look like this: ViewController Methods In addition to the properties and methods required to retrieve product information, the view controller requires additional notification observers to listen for purchase-related notifications. These are just NSObjects that will be registered and removed in ViewWillAppear and ViewWillDisappear respectively. NSObject succeededObserver, failedObserver; The constructor will also create the SKProductsRequestDelegate subclass ( InAppPurchaseManager) that in turn creates and registers the SKPaymentTransactionObserver ( CustomPaymentObserver). The first part of processing an in-app purchase transaction is to handle the button press when the user wishes to buy something, as shown in the following code from the sample application: buy5Button.TouchUpInside += (sender, e) => { iap.PurchaseProduct (Buy5ProductId); }; buy10Button.TouchUpInside += (sender, e) => { iap.PurchaseProduct (Buy10ProductId); }; The second part of the user interface is handling the notification that the transaction succeeded, in this case by updating the displayed balance: priceObserver = NSNotificationCenter.DefaultCenter.AddObserver (InAppPurchaseManager.InAppPurchaseManagerTransactionSucceededNotification, (notification) => { balanceLabel.Text = CreditManager.Balance() + " monkey credits"; }); The final part of the user interface is displaying a message if a transaction is cancelled for some reason. In the example code a message is simply written to the output window: failedObserver = NSNotificationCenter.DefaultCenter.AddObserver (InAppPurchaseManager.InAppPurchaseManagerTransactionFailedNotification, (notification) => { Console.WriteLine ("Transaction Failed"); }); In addition to these methods on the view controller, a consumable product purchase transaction also requires code on the SKProductsRequestDelegate and the SKPaymentTransactionObserver. 
InAppPurchaseManager Methods The sample code implements a number of purchase related methods on the InAppPurchaseManager class, including the PurchaseProduct method that creates an SKPayment instance and adds it to the queue for processing: public void PurchaseProduct(string appStoreProductId) { SKPayment payment = SKPayment.PaymentWithProduct (appStoreProductId); SKPaymentQueue.DefaultQueue.AddPayment (payment); } Adding the payment to the queue is an asynchronous operation. The application regains control while StoreKit processes the transaction and sends it to Apple’s servers. It is at this point that iOS will verify the user is logged in to the App Store and prompt her for an Apple ID and password if required. Assuming the user successfully authenticates with the App Store and agrees to the transaction, the SKPaymentTransactionObserver will receive StoreKit’s response and call the following method to fulfill the transaction and finalize it. public void CompleteTransaction (SKPaymentTransaction transaction) { var productId = transaction.Payment.ProductIdentifier; // Register the purchase, so it is remembered for next time PhotoFilterManager.Purchase(productId); FinishTransaction(transaction, true); } The last step is to ensure that you notify StoreKit that you have successfully fulfilled the transaction, by calling FinishTransaction: public void FinishTransaction(SKPaymentTransaction transaction, bool wasSuccessful) { // remove the transaction from the payment queue. SKPaymentQueue.DefaultQueue.FinishTransaction(transaction); // THIS IS IMPORTANT - LET'S APPLE KNOW WE'RE DONE !!!! using (var pool = new NSAutoreleasePool()) { NSDictionary userInfo = NSDictionary.FromObjectsAndKeys(new NSObject[] {transaction},new NSObject[] {new NSString("transaction")}); if (wasSuccessful) { // send out a notification that we've finished the transaction NSNotificationCenter.DefaultCenter.PostNotificationName (InAppPurchaseManagerTransactionSucceededNotification, this, userInfo); } else { // send out a notification for the failed transaction NSNotificationCenter.DefaultCenter.PostNotificationName (InAppPurchaseManagerTransactionFailedNotification, this, userInfo); } } } Once the product is delivered, SKPaymentQueue.DefaultQueue.FinishTransaction must be called to remove the transaction from the payment queue. SKPaymentTransactionObserver (CustomPaymentObserver) Methods StoreKit calls the UpdatedTransactions method when it receives a response from Apple’s servers, and passes an array of SKPaymentTransaction objects for your code to inspect. The method loops through each transaction and performs a different function based on the transaction state (as shown here): CompleteTransaction method was covered earlier in this section – it saves the purchase details to NSUserDefaults, finalizes the transaction with StoreKit and finally notifies the UI to update. 
Purchasing Multiple Products If it makes sense in your application to purchase multiple products, use the SKMutablePayment class and set the Quantity field: public void PurchaseProduct(string appStoreProductId) { SKMutablePayment payment = SKMutablePayment.PaymentWithProduct (appStoreProductId); payment.Quantity = 4; // hardcoded as an example SKPaymentQueue.DefaultQueue.AddPayment (payment); } The code handling the completed transaction must also query the Quantity property to correctly fulfill the purchase: public void CompleteTransaction (SKPaymentTransaction transaction) { var productId = transaction.Payment.ProductIdentifier; var qty = transaction.Payment.Quantity; if (productId == ConsumableViewController.Buy5ProductId) CreditManager.Add(5 * qty); else if (productId == ConsumableViewController.Buy10ProductId) CreditManager.Add(10 * qty); else Console.WriteLine ("Shouldn't happen, there are only two products"); FinishTransaction(transaction, true); } When the user purchases multiple quantities, the StoreKit confirmation alert will reflect the quantity, the unit price and the total price they’ll be charged, as shown in the following screenshot: [ ](Images/image30.png) Handling Network Outages In-app purchases require a working network connection for StoreKit to communicate with Apple’s servers. If a network connection is not available, then in-app purchasing will be unavailable. Product Requests If the network is unavailable while making an SKProductRequest, the RequestFailed method of the SKProductsRequestDelegate subclass ( InAppPurchaseManager) will be called, as shown below: public override void RequestFailed (SKRequest request, NSError error) { using (var pool = new NSAutoreleasePool()) { NSDictionary userInfo = NSDictionary.FromObjectsAndKeys(new NSObject[] {error},new NSObject[] {new NSString("error")}); // send out a notification for the failed transaction NSNotificationCenter.DefaultCenter.PostNotificationName (InAppPurchaseManagerRequestFailedNotification, this, userInfo); } } The ViewController then listens for the notification and displays a message in the purchase buttons: requestObserver = NSNotificationCenter.DefaultCenter.AddObserver (InAppPurchaseManager.InAppPurchaseManagerRequestFailedNotification, (notification) => { Console.WriteLine ("Request Failed"); buy5Button.SetTitle ("Network down?", UIControlState.Disabled); buy10Button.SetTitle ("Network down?", UIControlState.Disabled); }); Because a network connection can be transient on mobile devices, applications may wish to monitor network status using the SystemConfiguration framework, and re-try when a network connection is available. Refer to Apple’s or the that uses it. Purchase Transactions The StoreKit payment queue will store and forward purchase requests if possible, so the effect of a network outage will vary depending on when the network failed during the purchase process. If an error does occur during a transaction, the SKPaymentTransactionObserver subclass ( CustomPaymentObserver) will have the UpdatedTransactions method called and the SKPaymentTransaction class will be in the Failed state. FailedTransaction method detects whether the error was due to user-cancellation, as shown here: public void FailedTransaction (SKPaymentTransaction transaction) { //SKErrorPaymentCancelled == 2 if (transaction.Error.Code == 2) // user cancelled Console.WriteLine("User CANCELLED FailedTransaction Code=" + transaction.Error.Code + " " + transaction.Error.LocalizedDescription); else // error! 
Console.WriteLine("FailedTransaction Code=" + transaction.Error.Code + " " + transaction.Error.LocalizedDescription); FinishTransaction(transaction,false); } Even if a transaction fails, the FinishTransaction method must be called to remove the transaction from the payment queue: SKPaymentQueue.DefaultQueue.FinishTransaction(transaction); The example code then sends a notification so that the ViewController can display a message. Applications should not show an additional message if the user canceled the transaction. Other error codes that might occur include: FailedTransaction Code=0 Cannot connect to iTunes Store FailedTransaction Code=5002 An unknown error has occurred FailedTransaction Code=5020 Forget Your Password? Applications may detect and respond to specific error codes, or handle them in the same way. Handling Restrictions The Settings > General > Restrictions feature of iOS allows users to lock certain features of their device. You can query whether the user is allowed to make in-app purchases via the SKPaymentQueue.CanMakePayments method. If this returns false then the user cannot access in-app purchasing. StoreKit will automatically display an error message to the user if a purchase is attempted. By checking this value your application can instead hide the purchase buttons or take some other action to help the user. In the InAppPurchaseManager.cs file the CanMakePayments method wraps the StoreKit function like this: public bool CanMakePayments() { return SKPaymentQueue.CanMakePayments; } To test this method, use the Restrictions feature of iOS to disable In-App Purchases: This example code from ConsumableViewController reacts to CanMakePayments returning false by displaying AppStore Disabled text on the disabled buttons. // only if we can make payments, request the prices if (iap.CanMakePayments()) { // now go get prices, if we don't have them already if (!pricesLoaded) iap.RequestProductData(products); // async request via StoreKit -> App Store } else { // can't make payments (purchases turned off in Settings?) // the buttons are disabled by default, and only enabled when prices are retrieved buy5Button.SetTitle ("AppStore disabled", UIControlState.Disabled); buy10Button.SetTitle ("AppStore disabled", UIControlState.Disabled); } The application looks like this when the In-App Purchases feature is restricted – the purchase buttons are disabled. Product information can still be requested when CanMakePayments is false, so the app can still retrieve and display prices. This means if we removed the CanMakePayments check from the code the purchase buttons would still be active, however when a purchase is attempted the user will see a message that In-app purchases are not allowed (generated by StoreKit when the payment queue is accessed): Real-world applications may take a different approach to handling the restriction, such as hiding the buttons altogether and perhaps offering a more detailed message than the alert that StoreKit shows automatically. Part 4 - Purchasing Non-Consumable.
https://docs.mono-android.net/guides/ios/application_fundamentals/in-app_purchasing/part_3_-_purchasing_consumable_products/
2017-03-23T04:24:05
CC-MAIN-2017-13
1490218186774.43
[array(['Images/image30.png', None], dtype=object)]
docs.mono-android.net
24. Visualizing Probabilistic Power Spectral Densities¶ The following code example shows how to use the PPSD class defined in obspy.signal. The routine is useful for interpretation of e.g. noise measurements for site quality control checks. For more information on the topic see [McNamara2004]. >>> from obspy import read >>> from obspy.io.xseed import Parser >>> from obspy.signal import PPSD Read data and select a trace with the desired station/channel combination: >>> st = read("") >>> tr = st.select(id="BW.KW1..EHZ")[0] Metadata can be provided as an Inventory (e.g. from a StationXML file or from a request to a FDSN web service), a Parser (e.g. from a dataless SEED file), a filename of a local RESP file (or a legacy poles and zeros dictionary). Then we initialize a new PPSD instance. The ppsd object will then make sure that only appropriate data go into the probabilistic psd statistics. >>> parser = Parser("") >>> ppsd = PPSD(tr.stats, metadata=parser) Now we can add data (either trace or stream objects) to the ppsd estimate. This step may take a while. The return value True indicates that the data was successfully added to the ppsd estimate. >>> ppsd.add(st) True We can check what time ranges are represented in the ppsd estimate. ppsd.times contains a sorted list of start times of the one hour long slices that the psds are computed from (here only the first two are printed). >>> print(ppsd.times[:2]) [UTCDateTime(2011, 2, 6, 0, 0, 0, 935000), UTCDateTime(2011, 2, 6, 0, 30, 0, 935000)] >>> print("number of psd segments:", len(ppsd.times)) number of psd segments: 47 Adding the same stream again will do nothing (return value False), the ppsd object makes sure that no overlapping data segments go into the ppsd estimate. >>> ppsd.add(st) False >>> print("number of psd segments:", len(ppsd.times)) number of psd segments: 47 Additional information from other files/sources can be added step by step. >>> st = read("") >>> ppsd.add(st) True The graphical representation of the ppsd can be displayed in a matplotlib window.. >>> ppsd.plot() ..or saved to an image file: >>> ppsd.plot("/tmp/ppsd.png") >>> ppsd.plot("/tmp/ppsd.pdf") (Source code, png, hires.png) A (for each frequency bin) cumulative version of the histogram can also be visualized: >>> ppsd.plot(cumulative=True) (Source code, png, hires.png) To use the colormap used by PQLX / [McNamara2004] you can import and use that colormap from obspy.imaging.cm: >>> from obspy.imaging.cm import pqlx >>> ppsd.plot(cmap=pqlx) (Source code, png, hires.png) Below the actual PPSD (for a detailed discussion see [McNamara2004]) is a visualization of the data basis for the PPSD (can also be switched off during plotting). The top row shows data fed into the PPSD, green patches represent available data, red patches represent gaps in streams that were added to the PPSD. The bottom row in blue shows the single psd measurements that go into the histogram. The default processing method fills gaps with zeros, these data segments then show up as single outlying psd lines.
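Beyond plotting, the accumulated histogram can be saved to disk and reloaded later, and summary curves can be extracted. This is a minimal sketch — the save_npz / load_npz / get_percentile helpers are assumed to be available in the installed ObsPy version and are not covered by the tutorial above:

>>> ppsd.save_npz("/tmp/ppsd.npz")
>>> ppsd_reloaded = PPSD.load_npz("/tmp/ppsd.npz", metadata=parser)
>>> periods, noise_db = ppsd_reloaded.get_percentile(percentile=50)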
https://docs.obspy.org/tutorial/code_snippets/probabilistic_power_spectral_density.html
2017-03-23T04:12:46
CC-MAIN-2017-13
1490218186774.43
[array(['../../_images/probabilistic_power_spectral_density1.png', '../../_images/probabilistic_power_spectral_density1.png'], dtype=object) array(['../../_images/probabilistic_power_spectral_density31.png', '../../_images/probabilistic_power_spectral_density31.png'], dtype=object) array(['../../_images/probabilistic_power_spectral_density2.png', '../../_images/probabilistic_power_spectral_density2.png'], dtype=object) ]
docs.obspy.org
__init__(DRAWSEGMENT self, BOARD_ITEM aParent=None, KICAD_T idtype=PCB_LINE_T) -> DRAWSEGMENT __init__(DRAWSEGMENT self, BOARD_ITEM aParent=None) -> DRAWSEGMENT __init__(DRAWSEGMENT self) -> DRAWSEGMENT DRAWSEGMENT::DRAWSEGMENT(BOARD_ITEM *aParent=NULL, KICAD_T idtype=PCB_LINE_T) Definition at line 25437 of file pcbnew.py. Clone(DRAWSEGMENT self) -> EDA_ITEM EDA_ITEM * DRAWSEGMENT: 26082 of file pcbnew.py. Draw(DRAWSEGMENT self, EDA_DRAW_PANEL * panel, wxDC * DC, GR_DRAWMODE aDrawMode, wxPoint aOffset) Draw(DRAWSEGMENT self, EDA_DRAW_PANEL * panel, wxDC * DC, GR_DRAWMODE aDrawMode) void DRAWSEGMENT::Draw(EDA_DRAW_PANEL *panel, wxDC *DC, GR_DRAWMODE aDrawMode, const wxPoint &aOffset=ZeroOffset) override Function Draw BOARD_ITEMs have their own color information. Definition at line 25840 of file pcbnew.py. GetBoundingBox(DRAWSEGMENT self) -> EDA_RECT const EDA_RECT DRAWSEGMENT: 25876 of file pcbnew.py. GetMenuImage(DRAWSEGMENT self) -> BITMAP_DEF BITMAP_DEF DRAWSEGMENT::GetMenuImage() const override Function GetMenuImage returns a pointer to an image to be used in menus. The default version returns the right arrow image. Override this function to provide object specific menu images. The menu image associated with the item. Definition at line 26065 of file pcbnew.py. GetMsgPanelInfo(DRAWSEGMENT self, std::vector< MSG_PANEL_ITEM,std::allocator< MSG_PANEL_ITEM > > & aList) void DRAWSEGMENT: 25854 of file pcbnew.py. GetSelectMenuText(DRAWSEGMENT self) -> wxString wxString DRAWSEGMENT: 26045 of file pcbnew.py. HitTest(DRAWSEGMENT self, wxPoint aPosition) -> bool HitTest(DRAWSEGMENT self, EDA_RECT aRect, bool aContained=True, int aAccuracy=0) -> bool HitTest(DRAWSEGMENT self, EDA_RECT aRect, bool aContained=True) -> bool HitTest(DRAWSEGMENT self, EDA_RECT aRect) -> bool bool DRAWSEGMENT: 25894 of file pcbnew.py. Rotate(DRAWSEGMENT self, wxPoint aRotCentre, double aAngle) void DRAWSEGMENT::Rotate(const wxPoint &aRotCentre, double aAngle) override Function Rotate Rotate this object. Parameters: ----------- aRotCentre: - the rotation point. aAngle: - the rotation angle in 0.1 degree. Definition at line 25966 of file pcbnew.py. SetAngle(DRAWSEGMENT self, double aAngle) void DRAWSEGMENT::SetAngle(double aAngle) Function SetAngle sets the angle for arcs, and normalizes it within the range 0 - 360 degrees. Parameters: ----------- aAngle: is tenths of degrees, but will soon be degrees. Definition at line 25482 of file pcbnew.py. TransformShapeWithClearanceToPolygon(DRAWSEGMENT self, SHAPE_POLY_SET & aCornerBuffer, int aClearanceValue, int aCircleToSegmentsCount, double aCorrectionFactor) void DRAWSEGMENT:iamted by segment bigger or equal to the real clearance value (usually near from 1.0) Definition at line 26004 of file pcbnew.py.
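For orientation, here is a small scripting-console sketch that exercises the constructor and Rotate() documented above. It assumes it is run inside pcbnew's Python console, and SetStart(), SetEnd(), SetWidth() and FromMM() are other pcbnew bindings not shown in this listing:

import pcbnew

board = pcbnew.GetBoard()  # the board currently open in pcbnew

seg = pcbnew.DRAWSEGMENT(board)  # constructor documented above
seg.SetStart(pcbnew.wxPoint(pcbnew.FromMM(10), pcbnew.FromMM(10)))
seg.SetEnd(pcbnew.wxPoint(pcbnew.FromMM(30), pcbnew.FromMM(10)))
seg.SetWidth(pcbnew.FromMM(0.2))
board.Add(seg)

# Rotate() takes the rotation centre and an angle in 0.1-degree units,
# so 900 rotates the segment by 90 degrees around its start point.
seg.Rotate(pcbnew.wxPoint(pcbnew.FromMM(10), pcbnew.FromMM(10)), 900)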
http://docs.kicad-pcb.org/doxygen-python/classpcbnew_1_1DRAWSEGMENT.html
2017-03-23T04:11:26
CC-MAIN-2017-13
1490218186774.43
[]
docs.kicad-pcb.org
Writing Job Results into Facebook Custom Audience (BETA) This article explains how to write job results directly to your Facebook Custom Audience. This feature is currently in beta; any feedback would be appreciated. Table of Contents Prerequisites - Basic knowledge of Treasure Data, including the toolbelt. - A Facebook Ad Account. - Authorized Treasure Data Facebook app access to your own Facebook Ad Account Step 1: Create a new connection Please visit Treasure Data Connections, then search for and select Facebook Custom Audience. The dialog below will open. Please select an existing OAuth connection for Facebook, or click the link under OAuth connection to create a new one. Create a new OAuth connection Please log in to your Facebook account in the popup window: And grant access to the Treasure Data app. You will be redirected back to Treasure Data Connections. Please repeat the first step (Create a new connection) and choose your new OAuth connection. Step 2: Configure to output results to the Facebook Custom Audience connection Check Output results at the top of your query editor and select your Facebook Custom Audience connection as follows: There are several parameters to fill out: - Ad Account ID (required): This is your Ad Account ID without the act_ prefix. - API Version (optional, default v2.8): Facebook API Version; it's best to keep the default value v2.8 (v2.7 will be deprecated soon). - Custom Audience Name (required): Name of the Custom Audience to create/update; if none exists, one will be created. - Important note: If you have many Custom Audiences with the same name as this input, the latest one will be used. We recommend giving your Custom Audience a unique name. - Custom Audience Description (optional): Description of the Custom Audience. - Initial intervals in milliseconds between retries (optional, default 60000): Interval to retry if a recoverable error happens (in milliseconds). - Retry limit (optional, default 5): Number of retries before it gives up. Here is a sample configuration: Step 3: Write the Query to Populate a user list Here is an example Audiences list before outputting a query result: Then, back on Treasure Data, run the following query with Output results into your Facebook Custom Audience connection. If no supported column was found in the query result, an error will be thrown. You can use aliases in your query to rename the columns of your query result, for example: SELECT an_email_column AS EMAIL, another_phone_column AS PHONE FROM your_table; - Note: column names are case-insensitive, i.e. you can use either EMAIL or email. Appendix B: Data Normalization Keep in mind that our result output normalizes your values automatically to follow Facebook's normalizing rules, see here. All values uploaded to Facebook for matching need to be normalized according to Facebook's normalizing rules. Values will simply miss chances to match if they are not normalized. If you need to normalize values yourself, apply your own normalization beforehand. The conversion below is actually applied per type in our result output: - ST (State): please use the 2-character ANSI abbreviation code; our platform will not cut off the input string (into 2 characters), as it needs to support states outside the US. - ZIP (Zip code): trim leading and trailing whitespace, convert all characters to lowercase, and remove any non-alphanumeric, non-whitespace characters from the result. - Note: If your value is a US zip code, please use exactly 5 digits; our platform will not cut off the input string (into 5 characters), as it needs to support the UK zip code format.
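If you do pre-normalize values yourself before writing them to Treasure Data, a small helper along the lines of the rules described above can be enough. This is only a rough sketch of the ZIP rule plus a common trim/lowercase convention for EMAIL (the EMAIL convention is an assumption here, not quoted from the rules above), not Treasure Data or Facebook code:

import re

def normalize_zip(value):
    # Trim whitespace, lowercase, then drop anything that is neither
    # alphanumeric nor whitespace -- mirroring the ZIP rule above.
    value = value.strip().lower()
    return re.sub(r"[^a-z0-9\s]", "", value)

def normalize_email(value):
    # Assumed convention: trim surrounding whitespace and lowercase.
    return value.strip().lower()

row = {"EMAIL": "  User@Example.COM ", "ZIP": " 94103-1234 "}
print({"EMAIL": normalize_email(row["EMAIL"]), "ZIP": normalize_zip(row["ZIP"])})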
https://docs.treasuredata.com/articles/result-into-facebook-custom-audience
2017-03-23T04:24:54
CC-MAIN-2017-13
1490218186774.43
[array(['/images/result-into-facebook-custom-audience-create-connection-0.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-create-connection-1.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-create-connection-2.png', None], dtype=object) array(['/images/data-connector-facebook-login.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-create-connection-3.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-create-connection-4.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience.gif', None], dtype=object) array(['/images/result-into-facebook-custom-audience-configs.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-before.png', None], dtype=object) array(['/images/result-into-facebook-custom-audience-after.png', None], dtype=object) ]
docs.treasuredata.com
Vertical Percent Stacked Stick Chart Overview A Vertical Percent Stacked Stick Chart is a multi-series Stick Chart that displays the trend of the percentage each value contributes over time or categories. The categories of this chart are spread along the vertical axis. The concept of stacking in AnyChart is described in this article: Stacked (Overview). Quick Start To build a Vertical Percent Stacked Stick Chart, create a multi-series Vertical Stick Chart and set the stacking mode to "percent" with the stackMode() method: // create a chart var chart = anychart.bar(); // enable the percent stacking mode chart.yScale().stackMode("percent"); // create stick series var series1 = chart.stick(seriesData_1); var series2 = chart.stick(seriesData_2); Adjusting The Vertical Percent Stacked Stick series' settings are mostly the same as other series' ones. The majority of information about adjusting series in AnyChart is given in the General Settings article.
https://docs.anychart.com/Basic_Charts/Stacked/Percent/Vertical_Stick_Chart
2017-03-23T04:19:04
CC-MAIN-2017-13
1490218186774.43
[]
docs.anychart.com
Reflection Permission Flag Enum Definition Specifies the permitted use of the System.Reflection and System.Reflection.Emit namespaces. This enumeration has a FlagsAttribute attribute that allows a bitwise combination of its member values. public enum class ReflectionPermissionFlag [System.Flags] [System.Runtime.InteropServices.ComVisible(true)] [System.Serializable] public enum ReflectionPermissionFlag type ReflectionPermissionFlag = Public Enum ReflectionPermissionFlag - Inheritance - - Attributes - Fields Examples. ReflectionPermission restrictedMemberAccessPerm = new ReflectionPermission(ReflectionPermissionFlag.RestrictedMemberAccess); Dim restrictedMemberAccessPerm As New ReflectionPermission(ReflectionPermissionFlag.RestrictedMemberAccess) Remarks. Caution Because ReflectionPermission can provide access to private class members, we recommend that you grant ReflectionPermission to Internet code only with the RestrictedMemberAccess flag, and not with any other flags. The RestrictedMemberAccess flag is introduced in the .NET Framework 2.0 SP1. To use this flag, your application should target the .NET Framework 3.5 or later. Important AllFlags does not include the RestrictedMemberAccess flag. To get a mask that includes all flags in this enumeration, you must use the combination of AllFlags with RestrictedMemberAccess..
https://docs.microsoft.com/en-gb/dotnet/api/system.security.permissions.reflectionpermissionflag?view=netframework-3.5
2020-01-17T23:28:15
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
TechNet Flash - Volume 20, Issue 25: December 5, 2018 Top News Worldwide Download Azure DevOps Server 2019 RC1 Reduce your potential attack surface using Lateral Movement Paths See what’s new in Windows Defender Advanced Threat Protection Free training: Introduction to Azure Top Downloads Windows Server 2016 Essentials System Center Configuration Manager and Endpoint Protection Create an Azure free account Azure Stack Development Kit Microsoft Intune 30-day trial Microsoft Advanced Threat Analytics Your Featured Content Introducing the new SQL Server VM resource provider Learn about the new resource provider, three new resource types, and how these improve the management of SQL Server on Azure Virtual Machines. Understand the health of your systems with Azure Monitor for virtual machines Find out how to use Azure Monitor for virtual machines to understand the health of your systems and discover how to use the APIs for more advanced monitoring capabilities. Now available: Azure Machine Learning service Build, train, and deploy machine learning models faster with Azure Machine Learning service. Try the Azure Machine Learning service to see how it provides automated machine learning and seamless deployment to the cloud. Take the Microsoft Security Assessment Find out where your organization is vulnerable and get personalized recommendations to help improve your environment's security posture. Best practices for naming your Microsoft Azure resources Get advice for naming and managing virtual machines, servers, storage, and all your other cloud resources. Seven tips for creating better technical content Get essential tips for creating technical content, such as documentation, training materials, or a presentation. Capture and keep your audience’s interest without sacrificing useful technical guidance. Free resources: Get started with Azure SQL Data Warehouse Learn how to use Azure SQL Data Warehouse, which combines SQL relational databases with massively parallel processing. Access free tutorials and documentation that show you how to design, load, manage, and analyze data. Events Live webinar: Deploying and updating Microsoft Office 365 ProPlus at Microsoft December 11, 2018, online Learn how Microsoft automated the upgrade to Office 365 ProPlus using System Center Configuration Manager. You’ll also hear how Microsoft used update channels in Office 365 to specify the cadence for delivering Office updates to users. Live webinar: How Microsoft is modernizing device management December 13, 2018, online Discover how Microsoft uses the Microsoft Enterprise Mobility + Security platform to manage data safety while enabling its employees to work from virtually anywhere. Microsoft Ignite: The Tour Through May 22, 2019, worldwide Attend a free technical training event for tech professionals in a city near you. Explore the latest cloud technologies, interact with other IT professionals, gain practical insights, and learn best practices.
https://docs.microsoft.com/en-us/archive/newsletter/technet/2018/technet-flash-volume-20-issue-25-december-5-2018
2020-01-17T22:25:12
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
CachedDataset2¶ Provides CachedDataset2. - class CachedDataset2. CachedDataset2(**kwargs)[source]¶ Somewhat like CachedDataset, but different. Simpler in some sense. And more generic. Caching might be worse. If you derive from this class: - you must override _collect_single_seq - you must set num_inputs (dense-dim of “data” key) and num_outputs (dict key -> dim, ndim-1) - you should set labels - handle seq ordering by overriding init_seq_order - you can set _estimated_num_seqs - you can set _num_seqs or _num_timesteps if you know them in advance init_seq_order(self, epoch=None, seq_list=None)[source]¶ :returns whether the order changed (True is always safe to return) This is called when we start a new epoch, or at initialization. Call this when you reset the seq list. get_target_list(self)[source]¶ Target data keys are usually not available during inference. Overwrite this if your dataset is more custom. - class CachedDataset2. SingleStreamPipeDataset(dim, ndim, sparse=False, dtype='float32')[source]¶ Producer: Gets data from somewhere / an external source, running in some thread. Consumer: The thread / code which calls load_seqs and get_data here.
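To make the contract above concrete, here is a minimal sketch of a custom dataset. The import paths assume the flat module layout this page documents (CachedDataset2 / Dataset); the exact DatasetSeq signature and the toy data are illustrative assumptions, not part of the API reference above:

import numpy
from CachedDataset2 import CachedDataset2
from Dataset import DatasetSeq

class RandomToyDataset(CachedDataset2):
    def __init__(self, num_seqs=100, seq_len=20, **kwargs):
        super(RandomToyDataset, self).__init__(**kwargs)
        self.num_inputs = 3                     # dense dim of the "data" key
        self.num_outputs = {"classes": (5, 1)}  # target key -> (dim, ndim); sparse class indices here
        self._seq_len = seq_len
        self._num_seqs = num_seqs               # known in advance, so it can be set here

    def _collect_single_seq(self, seq_idx):
        # A real dataset would load this sequence from disk or an external source.
        features = numpy.random.rand(self._seq_len, self.num_inputs).astype("float32")
        targets = numpy.random.randint(0, 5, size=(self._seq_len,)).astype("int32")
        return DatasetSeq(seq_idx=seq_idx, features=features,
                          targets={"classes": targets})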
https://returnn.readthedocs.io/en/latest/api/CachedDataset2.html
2020-01-17T22:02:58
CC-MAIN-2020-05
1579250591234.15
[]
returnn.readthedocs.io
debops.debops_legacy¶ The debops.debops_legacy Ansible role can be used to clean up legacy files, directories, APT packages or dpkg-divert diversions created by DebOps but no longer used. The role is not included in the main DebOps playbook to not cause data destruction by mistake. You are advised to use it with caution - it will destroy data on your DebOps hosts. To check the changes that will be done before implementing them, you can run the role against DebOps hosts with: debops service/debops_legacy -l <host> --diff --check Any changes that the role will create on the hosts can be overridden via the Ansible inventory if needed. debops.debops_legacy - Clean up legacy.
https://docs.debops.org/en/stable-1.0/ansible/roles/debops.debops_legacy/index.html
2020-01-17T21:11:33
CC-MAIN-2020-05
1579250591234.15
[]
docs.debops.org
Move all databases to a different server (Project Server 2010) Applies to: Project Server 2010 Topic Last Modified: 2010-08-25 If you want to move the databases associated with a Microsoft Project Web App (PWA) site to a different instance of Microsoft SQL Server, you can do so by using the procedures in this article. Moving the PWA databases is a major maintenance procedure and should be done at a time of minimal system activity. Users cannot access the PWA site while you are performing these procedures. Note This article describes how to move the Microsoft Project Server 2010 databases to a new instance of SQL Server while keeping the same PWA site. For information about how to move your Project Server databases to a different PWA site, see Move all databases (Project Server 2010). The basic steps involved in moving a database are as follows: Unprovision the PWA site Detach the databases Copy the databases to the new instance of SQL Server Attach the databases to the new instance of SQL Server Reprovision the PWA site Unprovision the PWA site When moving the PWA databases, you must unprovision the PWA site as a first step. This removes references to the site from Microsoft SharePoint Server 2010 without making any changes to the site itself. Once you do this, you can reconfigure PWA by moving the reporting database and then reprovision the PWA site. To unprovision the PWA site In the SharePoint Central Administration Web site, under Application Management, click Manage service applications. On the Manage Service Applications page, click the Project Server service application. On the Manage Project Web App Sites page, point to the PWA site where the databases that you want to move reside, click the arrow that appears, and then click Delete. On the Delete Project Web App Site page: Clear the Delete site collection from SharePoint check box. Important The Delete site collection from SharePoint check box must be cleared or the PWA site will be deleted from the content database. Click Delete. Deprovisioning the site may take several minutes. Once the PWA site is no longer listed on the Manage Project Web App Sites page, it is deprovisioned and you can continue with moving the databases. Move the databases Any combination of the following databases can be moved: The Draft, Published, and Archive databases The Reporting database The SharePoint Server content database where the PWA site and project workspaces reside You can move any or all of these databases to different instances of SQL Server. The Draft, Published, and Archive databases must reside on the same instance of SQL Server. The Reporting database and SharePoint Server content database can reside on different instances of SQL Server if you want. If you plan to move the SharePoint Server content database, you must follow the recommended procedures published for SharePoint Server 2010. For more information, see Move content databases (SharePoint Server 2010). Important If you are going to move the content database, you must do so while the PWA site is deprovisioned. Follow the published SharePoint Server 2010 procedures for moving a content database and then complete the rest of the procedures shown here. Each Project Server database can be moved by detaching it from the current instance of SQL Server and attaching it to the new instance of SQL Server. We recommend that you have your database administrator follow these steps if you are unfamiliar with moving databases in SQL Server. 
Perform the following procedure for each database that you want to move. To move a database Start SQL Server Management Studio. Connect to the instance of SQL Server where the database is located. In Object Explorer, expand Databases. Right-click the database that you want to move, click Tasks, and then click Detach. Copy the database files (.mdf and .ldf files) to the new instance of SQL Server. In SQL Server Management Studio, in Object Explorer, click Connect, and then click Database Engine. Connect to the instance of SQL Server where you copied the database. Right-click Databases and then click Attach. Click Add. Select the database file (.mdf file) and then click OK. Click OK. Once you have finished moving the databases, you can reprovision the PWA site. To reprovision the PWA site In the SharePoint Central Administration Web site, under Application Management, click Manage service applications. Click the Project Server service application. On the Manage Project Web App Sites page, click Create Project Web App site. On the Create Project Web App Site page, check all settings to make sure that they match the PWA site that you deprovisioned. Important Database names must exactly match the databases in SQL Server for this PWA site, and database server names must correspond to the instances of SQL Server where you reattached the databases. Click OK. Once the PWA site is reprovisioned (when the status is Provisioned), users can return to using PWA as usual.
https://docs.microsoft.com/en-us/previous-versions/office/project-server-2010/ff961887(v=office.14)?redirectedfrom=MSDN
2020-01-17T22:46:29
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Note You must have Internet access on each of the three servers for Windows EBS to complete this task. To verify Internet access, see Access the Internet from Windows Essential Business Server. To activate each server for Windows Essential Business Server.
https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc540513(v=ws.10)?redirectedfrom=MSDN
2020-01-17T21:48:20
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
role Metamodel::C3MRO Metaobject that supports the C3 method resolution order Metamodel role for the C3 method resolution order (MRO). Note: this method, along with almost the whole metamodel, is part of the Rakudo implementation. The method resolution order for a type is a flat list of types including the type itself, and (recursively) all super classes. It determines in which order the types will be visited for determining which method to call with a given name, or for finding the next method in a chain with nextsame, callsame, nextwith or callwith. ; # implicitly inherits from Anyis CommonAncestoris CommonAncestoris Child2is Child1 is GrandChild2 ;say Weird.^mro; # OUTPUT: «(Weird) (Child1) (GrandChild2) (Child2) (CommonAncestor) (Any) (Mu)␤» C3 is the default resolution order for classes and grammars in Perl 6. Note that roles generally do not appear in the method resolution order (unless they are punned into a class, from which another type inherits), because methods are copied into classes at role application time. Methods method compute_mro method compute_mro() Computes the method resolution order. method mro method mro() Returns a list of types in the method resolution order, even those that are marked is hidden. say Int.^mro; # OUTPUT: «((Int) (Cool) (Any) (Mu))␤» method mro_unhidden method mro_unhidden() Returns a list of types in method resolution order, excluding those that are marked with is hidden. Type Graph Metamodel::C3MRO
https://docs.perl6.org/type/Metamodel::C3MRO
2020-01-17T22:18:30
CC-MAIN-2020-05
1579250591234.15
[]
docs.perl6.org
Neighbor lists¶ Overview¶ Neighbor lists accelerate the search for pairs of atoms that are within a certain cutoff radius of each other. They are most commonly used in hoomd.md.pair to accelerate the calculation of pair forces between atoms. This significantly reduces the number of pairwise distances that are evaluated, which is \(O(N^2)\) if all possible pairs are checked. A small buffer radius (skin layer) r_buff is typically added to the cutoff radius so that the neighbor list can be computed less frequently. The neighbor list must only be rebuilt any time a particle diffuses r_buff/2. However, increasing r_buff also increases the number of particles that are included in the neighbor list, which slows down the pair force evaluation. A balance can be obtained between the two by optimizing r_buff. A simple neighbor list is built by checking all possible pairs of atoms periodically, which makes the overall algorithm \(O(N^2)\). The neighbor list can be computed more efficiently using an acceleration structure which further reduces the complexity of the problem. There are three accelerators implemented in HOOMD-blue: More details for each can be found below and in M.P. Howard et al. 2016. Each neighbor list style has its own advantages and disadvantages that the user should consider on a case-by-case basis. Cell list¶ The cell-list neighbor list ( hoomd.md.nlist.cell) spatially sorts particles into bins that are sized by the largest cutoff radius of all pair potentials attached to the neighbor list. For example, in the figure below, there are small A particles and large B particles. The bin size is based on the cutoff radius of the largest particles \(r_{\rm BB}\). To find neighbors, each particle searches the 27 cells that are adjacent to its cell, which are shaded around each particle. Binning particles is O(N), and so neighbor search from the cell list is also O(N). This method is very efficient for systems with nearly monodisperse cutoffs, but performance degrades for large cutoff radius asymmetries due to the significantly increased number of particles per cell and increased search volume. For example, the small A particles, who have a majority of neighbors who are also A particles within cutoff \(r_{\rm AA}\) must now search through the full volume defined by \(r_{\rm BB}\). In practice, we have found that this neighbor list style is the best option for most users when the asymmetry between the largest and smallest cutoff radius is less than 2:1. Note Users may find that the cell-list neighbor list consumes a significant amount of memory, especially on CUDA devices. One cause of this can be non-uniform density distributions because the memory allocated for the cell list is proportional the maximum number of particles in any cell. Another common cause is large system volumes combined with small cutoffs, which results in a very large number of cells in the system. In these cases, consider using hoomd.md.nlist.stencil or hoomd.md.nlist.tree. Stenciled cell list¶ Performance of the simple cell-list can be improved in the size asymmetric case by basing the bin size of the cell list on the smallest cutoff radius of all pair potentials attached to the neighbor list (P.J. in’t Veld et al. 2008). From the previous example, the bin size is now based on \(r_{\rm AA}\). A stencil is then constructed on a per-pair basis that defines the bins to search. Some particles can now be excluded without distance check if they lie in bins outside the stencil. 
The small A particles only need to distance check other A particles in the dark blue cells (dashed outline). This reduces both the number of distances evaluations and the amount of particle data that is read. We have found that the stenciled cell list ( hoomd.md.nlist.stencil) performs well for size asymmetric systems that have comparable concentrations of both small and large particles. Performance may degrade when the fraction of large particles is low (< 20%). The memory consumed by the stenciled cell list is typically much lower than that used for a comparable simple cell list because of the way the stencils constructed to query the cell list. However, this comes at the expense of higher register usage on CUDA devices, which may lead to reduced performance compared to the simple cell list in some cases depending on your CUDA device’s architecture. Note Users may still find that the stenciled cell list consumes a significant amount of memory for systems with large volumes and small cutoffs. In this case, the bin size should be made larger (possibly at the expense of performance), or hoomd.md.nlist.tree should be used instead. LBVH tree¶ Linear bounding volume hierarchies (LBVHs) are an entirely different approach to accelerating the neighbor search. LBVHs are binary tree structures that partition the system based on objects rather than space (see schematic below). This means that the memory they require scales with the number of particles in the system rather than the system volume, which may be particularly advantageous for large, sparse systems. Because of their lightweight memory footprint, LBVHs can also be constructed per-type, and this makes searching the trees very efficient in size asymmetric systems. The LBVH algorithm is O(N log N) to search the tree. We have found that LBVHs ( hoomd.md.nlist.tree) are very useful for systems with size asymmetry greater than 2:1 between the largest and smallest cutoffs, and when the fraction of large particles is dilute (< 20%). These conditions are typical of many colloidal systems. Additionally, LBVHs can be used advantageously in sparse systems or systems with large volumes, where they have less overhead and memory demands than cell lists. Multiple neighbor lists¶ Multiple neighbor lists can be created to accelerate simulations where there is significant disparity in the pairwise cutoffs between pair potentials. If one pair force has a maximum cutoff radius much smaller than another pair force, the pair force calculation for the short cutoff will be slowed down considerably because many particles in the neighbor list will have to be read and skipped because they lie outside the shorter cutoff. Attaching each potential to a different neighbor list may improve performance of the pair force calculation at the expense of duplicate computation of the neighbor list. When using multiple neighbor lists, it may be advantageous to adopt two different neighbor list styles. For example, in a colloidal suspension of a small number of large colloids dispersed in many solvent particles, a modest performance gain may be achieved by computing the solvent-solvent neighbors using hoomd.md.nlist.cell, but the solvent-colloid and colloid-colloid interactions using hoomd.md.nlist.tree. Particles can be excluded from neighbor lists by setting their cutoff radius to False or a negative value.
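As a concrete illustration of choosing and mixing list types, the following sketch uses the HOOMD-blue 2.x Python API that this page refers to (hoomd.md.nlist.*). The lattice, cutoffs, and coefficients are arbitrary example values, and attaching two LJ potentials to one type pair is done purely to show two neighbor lists side by side:

import hoomd
import hoomd.md

hoomd.context.initialize("")
system = hoomd.init.create_lattice(unitcell=hoomd.lattice.sc(a=1.5), n=10)

# Short-cutoff interactions from a cell list with a modest buffer radius ...
nl_cell = hoomd.md.nlist.cell(r_buff=0.4)
lj_short = hoomd.md.pair.lj(r_cut=1.5, nlist=nl_cell)
lj_short.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

# ... and a separate LBVH tree list for the long-cutoff pairs.
nl_tree = hoomd.md.nlist.tree(r_buff=0.4)
lj_long = hoomd.md.pair.lj(r_cut=4.0, nlist=nl_tree)
lj_long.pair_coeff.set('A', 'A', epsilon=0.1, sigma=2.0)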
https://hoomd-blue.readthedocs.io/en/stable/nlist.html
2020-01-17T23:03:15
CC-MAIN-2020-05
1579250591234.15
[array(['_images/cell_list.png', 'Cell list schematic'], dtype=object) array(['_images/stencil_schematic.png', 'Stenciled cell list schematic'], dtype=object) array(['_images/tree_schematic.png', 'LBVH tree schematic'], dtype=object)]
hoomd-blue.readthedocs.io
Overview Channelize.io allows you to embed real-time messaging, video & voice calling into your applications with a seamless development process. It also allows you to build your standalone chat application using Platform APIs, JavaScript SDK, and Mobile SDKs. Our JavaScript SDK provides you with various methods to initialize, configure, and build a chat with the capabilities below: - 1-to-1 Conversation - Group Conversation - Text Messages, Media Files, Location, Stickers, GIFs and Meta-messages/Information Messages support - Quote/Reply Messages - @mentions - Forward Messages - Broadcast Messages - Online Presence Indicator - Delivery & Read Receipt Indicator - Typing Indicator - Delete messages for everyone - Text Messages Formatting - Rich URL Preview - Video & Voice Calling - External Chatbots integration support - Block/Unblock User - User Relationships, i.e. Friend & Follow - Users & Groups Search - Smart Push Notifications (FCM) - Webhooks - Multi-lingual support - Real-time events for real-time communication.
https://docs.channelize.io/javascript-sdk-introduction-overview/
2020-01-17T22:41:10
CC-MAIN-2020-05
1579250591234.15
[]
docs.channelize.io
Deployment–Using existing SQL instances with PDT During.
https://docs.microsoft.com/en-us/archive/blogs/privatecloud/deploymentusing-existing-sql-instances-with-pdt
2020-01-17T23:11:22
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
AutoResolvedWinner Property Returns a Boolean that determines if the item is a winner of an automatic conflict resolution. Read-only. Note A value of False does not necessarily indicate that the item is a loser of an automatic conflict resolution. The item should be in conflict with another item. expression.AutoResolvedWinner expression Required. An expression that returns one of the objects in the Applies To list. Remarks If an item has its Conflicts.Count property greater than zero and if its AutoResolvedWinner property is True, it is a winner of an automatic conflict resolution. On the other hand, if the item is in conflict and has its AutoResolvedWinner property as False, it is a loser in an automatic conflict resolution. Example The following Microsoft Visual Basic for Applications (VBA) example uses the AutoResolvedWinner property to determine if an item is a winner or loser in an automatic conflict resolution. To run this example, make sure an e-mail item is open in the active window. Sub ConflictStatus() Dim myOlApp As New Outlook.Application Dim mail As Outlook.MailItem Set mail = myOlApp.ActiveInspector.CurrentItem If mail.Conflicts.Count > 0 Then If mail.AutoResolvedWinner = True Then MsgBox "This item is a winner in an automatic conflict resolution." Else MsgBox "This item is a loser in an automatic conflict resolution." End If Else MsgBox "This item is not in conflict with any item." End If End Sub
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa211414%28v%3Doffice.11%29
2020-01-17T21:50:08
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
https://docs.uipath.com/studio/docs/chrome-extension
2020-01-17T22:53:13
CC-MAIN-2020-05
1579250591234.15
[]
docs.uipath.com
INSERT Statement Impala supports inserting into tables and partitions that you create with the Impala CREATE TABLE statement, or pre-defined tables and partitions created through Hive. Syntax: [with_clause] INSERT [hint_clause] { INTO | OVERWRITE } [TABLE] table_name [(column_list)] [ PARTITION (partition_clause)] { [hint_clause] select_statement | VALUES (value [, value ...]) [, (value [, value ...]) ...] } partition_clause ::= col_name [= constant] [, col_name [= constant] ...] hint_clause ::= hint (CDH 5.. - An optional hint clause immediately before the SELECT keyword or after the INSERT keyword, to fine-tune the behavior when doing an INSERT ... SELECT operation into partitioned Parquet tables. The hint clause cannot be specified in multiple places. Amazon S3 considerations: ADLS considerations: In CDH 5.12 / Impala 2.9 and higher, the Impala DML statements (INSERT, LOAD DATA, and CREATE TABLE AS SELECT) can write data into a table or partition that resides in the Azure Data Lake Store (ADLS). ADLS Gen2 is supported in CDH 6.1 and higher. In the CREATE TABLE or ALTER TABLE statements, specify the ADLS location for tables and partitions with the adl:// prefix for ADLS Gen1 and abfs:// or abfss:// for ADLS Gen2 in the LOCATION attribute. If you bring data into ADLS using the normal ADLS transfer mechanisms instead of Impala DML statements, issue a REFRESH statement for the table before using Impala to query the ADLS data. See Using Impala with the Azure Data Lake Store (ADLS) for details about reading and writing ADLS data with Impala. ‑‑insert_inherit_permissions startup option for the impalad daemon. Inserting Into Partitioned Tables with PARTITION Clause For a partitioned table, the optional PARTITION clause identifies which partition or partitions the values are inserted into. All examples in this section will use the table declared as below: CREATE TABLE t1 (w INT) PARTITIONED BY (x INT, y STRING); Static Partition Inserts In a static partition insert where a partition key column is given a constant value, such as PARTITION (year=2012, month=2), the rows are inserted with the same values specified for those partition key columns. The number of columns in the SELECT list must equal the number of columns in the column permutation. The PARTITION clause must be used for static partitioning inserts. Example: INSERT INTO t1 PARTITION (x=10, y='a') SELECT c1 FROM some_other_table; Dynamic Partition Inserts In a dynamic partition insert where a partition key column is in the INSERT statement but not assigned a value, such as in PARTITION (year, region) (both columns unassigned) or PARTITION (year, region='CA') (year column unassigned), the unassigned columns are filled in with the final columns of the SELECT or VALUES clause. In this case, the number of columns in the SELECT list must equal the number of columns in the column permutation plus the number of partition key columns not assigned a constant value. See Static and Dynamic Partitioning Clauses for examples and performance characteristics of static and dynamic partitioned inserts. The following rules apply to dynamic partition inserts. The columns are bound in the order they appear in the INSERT statement. The table below shows the values inserted with the INSERT statements of different column orders.
- When a partition clause is specified but the non-partition columns are not specified in the INSERT statement, as in the first example below, the non-partition columns are treated as though they had been specified before the PARTITION clause in the SQL. Example: These three statements are equivalent, inserting 1 to w, 2 to x, and 'c' to y columns. INSERT INTO t1 PARTITION (x,y) VALUES (1, 2, 'c'); INSERT INTO t1 (w) PARTITION (x, y) VALUES (1, 2, 'c'); INSERT INTO t1 PARTITION (x, y='c') VALUES (1, 2); - The PARTITION clause is not required for dynamic partition, but all the partition columns must be explicitly present in the INSERT statement in the column list or in the PARTITION clause. The partition columns cannot be defaulted to NULL. Example: The following statements are valid because the partition columns, x and y, are present in the INSERT statements, either in the PARTITION clause or in the column list. INSERT INTO t1 PARTITION (x,y) VALUES (1, 2, 'c'); INSERT INTO t1 (w, x) PARTITION (y) VALUES (1, 2, 'c'); The following statement is not valid for the partitioned table as defined above because the partition columns, x and y, are not present in the INSERT statement. INSERT INTO t1 VALUES (1, 2, 'c'); - If partition columns do not exist in the source table, you can specify a specific value for that column in the PARTITION clause. Example: The source table only contains the columns w and y. The value, 20, specified in the PARTITION clause, is inserted into the x column. INSERT INTO t1 PARTITION (x=20, y) SELECT * FROM source;
https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/impala_insert.html
2020-01-17T22:07:38
CC-MAIN-2020-05
1579250591234.15
[]
docs.cloudera.com
There are many advantages to setting up a private network of devices that need to be managed remotely. Much like a virtual private network (VPN), it resides. - Diagnose LoopEdge issues without requiring on-premise intervention. Using the network's unique Node ID as a secure token, an end-to-end handshake between devices ensures a direct, secure connection. Refer to the following sections on this page to configure remote access to a LoopEdge gateway: In LoopCloud Create a LoopEdge model by selecting the MQTT plain TCP connection. See Create a Device Model. Note that the model's JSON schema includes a remote network parameter. Configure Cloud Connectivity icon:
https://docs.litmusautomation.com/pages/diffpagesbyversion.action?pageId=9994362&originalVersion=3&revisedVersion=65
2020-01-17T21:11:32
CC-MAIN-2020-05
1579250591234.15
[]
docs.litmusautomation.com
for Visual Studio (or the Visual Studio Command Prompt in Windows 7). For more information, see Command Prompts. Two versions of Mage.exe and MageUI.exe are included as a component of Visual Studio. To see version information, run MageUI.exe, select Help, and select About. This documentation describes version 4.0.x.x of Mage.exe and MageUI.exe. Note MageUI.exe does not support the compatibleFrameworks element when saving an application manifest that has already been signed with a certificate using MageUI.exe. Instead, you must use Mage.exe. UIElement List The following table lists the menu and toolbar items that are available. Preferences Dialog Box The Preferences dialog box contains the following elements.. Tab and Panel Descriptions When you open a document with MageUI.exe, it appears within its own tab page. Each tab contains a set of property panels. The panels contain grouped subsets of the document's data. Application Manifest Tab The Application Manifest tab displays the contents of an application manifest. The application manifest describes all files included with the deployment, and the permissions required for the application to run on the client. The Application Manifest tab contains the following tabs. Name Tab The Name tab is displayed when you first create or open an application manifest. It uniquely identifies the deployment, and optionally specifies a valid target platform. Description Tab This information is usually provided within the deployment manifest. These fields can only be modified if the Use Application Manifest Trust Information check box is selected on the Application Options tab. Application Options Tab Files Tab Permissions Required Tab Use the Permissions Required tab if you need to grant your application more access to the local computer than is granted by default. For more information, see Securing ClickOnce Applications. Deployment Manifest Tab The Deployment Manifest tab contains the following tabs. Name Tab The Name tab is displayed when you first create or open a deployment manifest. It uniquely identifies the deployment, and optionally specifies a valid target platform. Description Tab Deployment Options Tab Update Options Tab The Update Options tab only contains options mentioned here when the Application Type selection box on the Name tab is set to Install Locally. Application Reference Tab The Application Reference tab contains the same fields as the Name tab described earlier in this topic. The one exception is the following field. See also Feedback
https://docs.microsoft.com/en-us/dotnet/framework/tools/mageui-exe-manifest-generation-and-editing-tool-graphical-client
2020-01-17T22:24:43
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
appendChild Method Appends a new child node as the last child of the node. JScript Syntax var objXMLDOMNode = oXMLDOMNode.appendChild(newChild); Parameters newChild An object. Address of the new child node to be appended at the end of the list of children belonging to this node. Return Value An object. Returns the new child node successfully appended to the list. Example Resource File The JScript example uses the following file. XML File: appendChild.xml <?xml version='1.0'?> <root> <firstChild/> </root> Output The JScript example outputs the following in a message box. Before appendChild: <root> <firstChild/> </root> After appendChild: <root> <firstChild/> <newChild/></root> C/C++ Syntax HRESULT appendChild( IXMLDOMNode *newChild, IXMLDOMNode **outNewChild); Parameters newChild[in] The address of the new child node to be appended to the end of the list of children of this node. outNewChild[out, retval] The new child node successfully appended to the list. If Null, no object is created. Return Values S_OK The value returned if successful. E_INVALIDARG The value returned if the newChild parameter is Null. E_FAIL The value returned if an error occurs. Remarks MSXML 6.0 validates additions to the DOM only when the user explicitly calls the validate method. This means that you do not have to add nodes to the tree in the same order as they are defined in the schema. The implication is that there may be intermediate states between calls to validate where the DOM is invalid against the schema. Versioning Implemented in: MSXML 3.0 and MSXML 6.0 Applies to See Also save Method (DOMDocument) xml Property ownerDocument Property
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms766535%28v%3Dvs.85%29
2020-01-17T22:28:46
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Table of Contents Product Index.
http://docs.daz3d.com/doku.php/public/read_me/index/59903/start
2020-01-17T22:17:50
CC-MAIN-2020-05
1579250591234.15
[]
docs.daz3d.com
Lomb-Scargle periodogram¶ The Lomb-Scargle periodogram ([Barning1963], [Vanicek1969], [Scargle1982], [Lomb1976]) is one of the best known and most popular period finding algorithms used in astronomy. If you would like to learn more about least-squares methods for periodic signals, see the review article by [VanderPlas2017]. The LS periodogram is a least-squares estimator for the model \(\hat{y}(t|\omega, \theta) = \theta_1\cos{\omega t} + \theta_2\sin{\omega t}\), and it is equivalent to the Discrete Fourier Transform in the regularly-sampled limit. For irregularly sampled data, LS is a maximum likelihood estimator for the parameters \(\theta\) in the case where the noise is Gaussian. The periodogram has many normalizations in the literature, but cuvarbase adopts \(P(\omega) = \left(\chi^2_0 - \chi^2(\omega)\right)/\chi^2_0\), where \(\chi^2(\omega) = \sum_i w_i\left(y_i - \hat{y}(t_i|\omega, \theta)\right)^2\) is the goodness-of-fit statistic for the optimal parameters \(\theta\), \(\chi^2_0 = \sum_i w_i\left(y_i - \bar{y}\right)^2\) is the goodness-of-fit statistic for a constant fit, and \(\bar{y}\) is the weighted mean, where \(w_i \propto 1/\sigma_i^2\) and \(\sum_iw_i = 1\). The closed form of the periodogram is given by Where For the original formulation of the Lomb-Scargle periodogram without the constant offset term. Adding a constant offset¶ Lomb-Scargle can be extended in many ways, most commonly to include a constant offset [ZK2009]. This protects against cases where the mean of the data does not correspond with the mean of the underlying signal, as is usually the case with sparsely sampled data or for signals with large amplitudes that become too bright or dim to be observed during part of the signal phase. With the constant offset term, the closed-form solution to \(P(\omega)\) is the same, but the terms are slightly different. Derivations of this are in [ZK2009]. Getting \(\mathcal{O}(N\log N)\) performance¶ The secret to Lomb-Scargle's speed lies in the fact that computing it requires evaluating sums that, for regularly-spaced data, can be evaluated with the fast Fourier transform (FFT), which scales as \(\mathcal{O}(N_f\log N_f)\) where \(N_f\) is the number of frequencies. For irregularly spaced data, however, we can employ tricks to get to this scaling. - We can “extirpolate” the data with Legendre polynomials to a regular grid and then perform the FFT [PressRybicki1989], or, - We can use the non-equispaced fast Fourier transform (NFFT) [DuttRokhlin1993], which is tailor made for this exact problem. The latter was shown by [Leroy2012] to give roughly an order-of-magnitude speed improvement over the [PressRybicki1989] method, with the added benefit that the NFFT is a rigorous extension of the FFT and has proven error bounds. It's worth mentioning the [Townsend2010] CUDA implementation of Lomb-Scargle; however, this uses the \(\mathcal{O}(N_{\rm obs}N_f)\) “naive” implementation of LS without any FFTs. Estimating significance¶ See [Baluev2008] for more information (TODO.) Example: Basic¶ import skcuda.fft import cuvarbase.lombscargle as gls import numpy as np import matplotlib.pyplot as plt t = np.sort(np.random.rand(300)) y = 1 + np.cos(2 * np.pi * 100 * t - 0.1) dy = 0.1 * np.ones_like(y) y += dy * np.random.randn(len(t)) # Set up LombScargleAsyncProcess (compilation, etc.) proc = gls.LombScargleAsyncProcess() # Run on single lightcurve result = proc.run([(t, y, dy)]) # Synchronize all cuda streams proc.finish() # Read result!
freqs, ls_power = result[0] ############ # Plotting # ############ f, ax = plt.subplots() ax.set_xscale('log') ax.plot(freqs, ls_power) ax.set_xlabel('Frequency') ax.set_ylabel('Lomb-Scargle') plt.show() Example: Batches of lightcurves¶ import skcuda.fft import cuvarbase.lombscargle as gls import numpy as np import matplotlib.pyplot as plt nlcs = 9 def lightcurve(freq=100, ndata=300): t = np.sort(np.random.rand(ndata)) y = 1 + np.cos(2 * np.pi * freq * t - 0.1) dy = 0.1 * np.ones_like(y) y += dy * np.random.randn(len(t)) return t, y, dy freqs = 200 * np.random.rand(nlcs) data = [lightcurve(freq=freq) for freq in freqs] # Set up LombScargleAsyncProcess (compilation, etc.) proc = gls.LombScargleAsyncProcess() # Run on batch of lightcurves results = proc.batched_run_const_nfreq(data) # Synchronize all cuda streams proc.finish() ############ # Plotting # ############ max_n_cols = 4 ncols = max([1, min([int(np.sqrt(nlcs)), max_n_cols])]) nrows = int(np.ceil(float(nlcs) / ncols)) f, axes = plt.subplots(nrows, ncols, figsize=(3 * ncols, 3 * nrows)) for (frqs, ls_power), ax, freq in zip(results, np.ravel(axes), freqs): ax.set_xscale('log') ax.plot(frqs, ls_power) ax.axvline(freq, ls=':', color='r') f.text(0.05, 0.5, "Lomb-Scargle", rotation=90, va='center', ha='right', fontsize=20) f.text(0.5, 0.05, "Frequency", va='top', ha='center', fontsize=20) for i, ax in enumerate(np.ravel(axes)): if i >= nlcs: ax.axis('off') f.tight_layout() f.subplots_adjust(left=0.1, bottom=0.1) plt.show()
https://cuvarbase.readthedocs.io/en/stable/lomb.html
2020-01-17T22:52:31
CC-MAIN-2020-05
1579250591234.15
[]
cuvarbase.readthedocs.io
PlayerChannels This object exposes properties and methods to do with players that the local player is speaking to. Count : int The number of players which the local player is speaking to. Contains(PlayerChannel) : bool Returns a boolean value indicating if the local player is speaking to the given channel. Open(string, [bool], [ChannelPriority], [float]) : PlayerChannel Opens a channel to begin speaking to the given player and returns a PlayerChannel object. Close(PlayerChannel) : bool Closes the given channel and returns a boolean indicating if the channel was open in the first place.
https://dissonance.readthedocs.io/en/latest/Reference/Other/PlayerChannels/index.html
2020-01-17T21:09:53
CC-MAIN-2020-05
1579250591234.15
[]
dissonance.readthedocs.io
Hacking on cider-nrepl Hacking on cider-nrepl requires nothing but a bit of knowledge of Clojure and nREPL. In this section we'll take a look at some practical tips to make you more productive while working on the project. Makefile cider-nrepl has some pretty complicated Lein profiles, as it has to deal with multiple versions of Clojure and ClojureScript, plus dependency inlining with Mr. Anderson. That's why we've added a good old Makefile to save you the trouble of having to think about the profiles and just focus on the tasks at hand. Obviously you can still work with Lein directly, but you won't have to do this most of the time. Testing your changes You've got several options for doing this: Installing a snapshot of your work locally and doing tests against it (e.g. with make install). Relying solely on the unit tests. You better write good unit tests, though! Spinning new versions of nREPL from the REPL, and connecting some client to them to test your changes. If you're already using a client that depends on cider-nrepl (e.g. CIDER), making changes to the cider-nrepl code will normally result in those changes becoming immediately available to your client. Running the tests Just do: $ make test That's going to handle the dependency inlining behind the scenes. By default the tests are going to be run against the most recent Clojure version that's supported. Linting cider-nrepl uses eastwood and cljfmt. Make sure your changes conform to the project's baseline by doing: $ make eastwood $ make cljfmt
https://docs.cider.mx/cider-nrepl/hacking.html
2020-01-17T22:08:47
CC-MAIN-2020-05
1579250591234.15
[]
docs.cider.mx
public interface TypeCodec<JavaTypeT> Type codec implementations should handle null values and empty byte buffers (i.e. Buffer.remaining() == 0) in a reasonable way; usually, NULL CQL values should map to null references, but exceptions exist; e.g. for varchar types, a NULL CQL value maps to a null reference, whereas an empty buffer maps to an empty String. For collection types, it is also admitted that NULL CQL values map to empty Java collections instead of null references. In any case, the codec's behavior with respect to null values and empty ByteBuffers should be clearly documented. Codecs for primitive CQL types should implement the corresponding primitive interface, e.g. PrimitiveBooleanCodec for boolean. This allows the driver to avoid the overhead of boxing when using primitive accessors such as GettableByIndex.getBoolean(int). Codecs should not consume ByteBuffer instances by performing relative read operations that modify their current position; codecs should instead prefer absolute read methods or, if necessary, duplicate their byte buffers prior to reading them. @NonNull GenericType<JavaTypeT> getJavaType() @NonNull DataType getCqlType() default boolean accepts(@NonNull GenericType<?> javaType) The default implementation is invariant with respect to the passed argument (through the usage of GenericType.equals(Object)) and it's strongly recommended not to modify this behavior. This means that a codec will only ever accept the exact Java type that it has been created for. If the argument represents a Java primitive type, its wrapper type is considered instead. default boolean accepts(@NonNull Class<?> javaClass) This implementation simply compares the given class (or its wrapper type if it is a primitive type) against this codec's runtime (raw) class; it is invariant with respect to the passed argument (through the usage of Object.equals(Object)) and it's strongly recommended not to modify this behavior. This means that a codec will only ever return true for the exact runtime (raw) Java class that it has been created for. Implementors are encouraged to override this method if there is a more efficient way. In particular, if the codec targets a final class, the check can be done with a simple ==. default boolean accepts(@NonNull Object value) The object's Java type is inferred from its runtime (raw) type, contrary to accepts(GenericType) which is capable of handling generic types. Contrary to other accept methods, this method's default implementation is covariant with respect to the passed argument (through the usage of Class.isAssignableFrom(Class)) and it's strongly recommended not to modify this behavior. This means that, by default, a codec will accept any subtype of the Java type that it has been created for. This is so because codec lookups by arbitrary Java objects only make sense when attempting to encode, never when attempting to decode, and indeed the encode method is covariant with JavaTypeT. It can only handle non-parameterized types; codecs handling parameterized types, such as collection types, must override this method and perform some sort of "manual" inspection of the actual type parameters. Similarly, codecs that only accept a partial subset of all possible values must override this method and manually inspect the object to check if it complies or not with the codec's limitations. Finally, if the codec targets a non-generic Java class, it might be possible to implement this method with a simple instanceof check.
default boolean accepts(@NonNull DataType cqlType) @Nullable ByteBuffer encode(@Nullable JavaTypeT value, @NonNull ProtocolVersion protocolVersion) Collection codecs may treat a null input as the equivalent of an empty collection. @Nullable JavaTypeT decode(@Nullable ByteBuffer bytes, @NonNull ProtocolVersion protocolVersion) @NonNull String format(@Nullable JavaTypeT value) Implementors should take care of quoting and escaping the resulting CQL literal where applicable. Null values should be accepted; in most cases, implementations should return the CQL keyword "NULL" for null inputs. Implementing this method is not strictly mandatory. It is used: AggregateMetadata.describe(boolean); toString() representation of some driver objects (such as UdtValue and TupleValue), which is only used in driver logs; QueryBuilder#literal(Object, CodecRegistry) and QueryBuilder#literal(Object, TypeCodec). @Nullable JavaTypeT parse(@Nullable String value). If you choose not to implement this method, don't throw an exception but instead return null.
https://docs.datastax.com/en/drivers/java/4.1/com/datastax/oss/driver/api/core/type/codec/TypeCodec.html
2020-01-17T22:43:43
CC-MAIN-2020-05
1579250591234.15
[]
docs.datastax.com
View source for First Things to Know about TouchDesigner You do not have permission to edit this page, for the following reason: The action you have requested is limited to users in the group: administators. You can view and copy the source of this page. Return to First Things to Know about TouchDesigner.
https://docs.derivative.ca/index.php?title=First_Things_to_Know_about_TouchDesigner&action=edit&oldid=9807
2020-01-17T22:58:01
CC-MAIN-2020-05
1579250591234.15
[]
docs.derivative.ca
January 2001 XML in .NET: .NET Framework XML Classes and C# Offer Simple, Scalable Data Manipulation Digital Dashboards: Web Parts Integrate with Internet Explorer and Outlook to Build Personal Portals Windows CE: eMbedded Visual Tools 3.0 Provide a Flexible and Robust Development Environment Pocket PC: Migrating a GPS App from the Desktop to eMbedded Visual Basic 3.0. Dave Grundgeiger and Patrick Escarcega Editor's Note: More Blasts from the Past New Stuff: Resources for Your Developer Toolbox Theresa W. Carey Web Q&A: Printing from a Web Page, Screen Scraping, Origin of an HTTP Request, and More Robert Hess Serving the Web: Stored Procedure Wizard in Visual Basic Boosts Productivity Ken Spencer Cutting Edge: Binary Behaviors in Internet Explorer 5.5 Dino Esposito Visual Programmer: Advanced ASP.NET Server-side Controls George Shepherd C++ Q&A: Browser Detection in the Registry, Changing Cursors in Windows, Avoiding Resource ID Collision Paul DiLascia MSDN Update: News this Month from MSDN
https://docs.microsoft.com/en-us/archive/msdn-magazine/2001/january/msdn-magazine-january-2001
2020-01-17T23:12:17
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Internet Explorer version 11.0.9600.17031 can cause some problems with Orchestrator's user interface if the Font Download feature is not enabled. To enable the Font Download feature, follow the steps below: - In Internet Explorer, click Tools > Internet Options. The Internet Options window is displayed. - In the Security tab, click Custom level. The Security Settings - Internet Zone window is displayed. - Look for the Font download feature, and set it to Enable. - Click OK. Your settings are saved. Another issue can be encountered while logging in to Orchestrator after you set up Auto Login. The user is not redirected to the login page if he/she closes the Windows Authentication form. This happens if the Allow META REFRESH option is disabled. Learn more about Setting Up Auto Login for Users Under an Active Directory Group.
https://docs.uipath.com/orchestrator/docs/ie-110960017031-issues
2020-01-17T23:02:58
CC-MAIN-2020-05
1579250591234.15
[]
docs.uipath.com
Robot tray. From a Windows Service to User Mode - Open an elevated Command Prompt instance. - Use the Net Stop UiRobotSvc command. This stops the Robot Windows Service. - Use the cd %ProgramFiles(x86)%\UiPath\Studio command to navigate to the UiPath installation folder. - Use the UiPath.Service.Host.exe uninstall command to remove the UiPath Robot Service. Your Robot now runs in User Mode. Find out more about Converting Robot Deployment Types.
https://docs.uipath.com/robot/docs/using-mapped-network-drives
2020-01-17T23:11:03
CC-MAIN-2020-05
1579250591234.15
[]
docs.uipath.com
Attributes Learn what Attributes are and how to use them! Basics With User.com you can gather detailed data about everyone who visited your website. This information is stored as attributes. You can create attributes not only for users but for deals and companies as well. Standard and custom attributes You can divide attributes into two sections: Standard and Custom. Standard attributes are the most common and simple solution. Attributes can be information gathered as a user enters your website (Country, browser, language) or after submitting a form (name, surname, phone number). You can create Custom Attributes yourself. Here you can learn more about the process. Attribute types: String: - "Name of the purchased product" - "URL of the purchased product" Integer: - Price of the product (USD) - Amount of products in a cart - Number of Agents created Date&time: - Scheduled delivery date - Scheduled subscription renewal Boolean True/False: - Opted for express delivery - User used a coupon Fixed choice attribute: - Name - Value - Multiple choice Floating-point number - A number whose decimal point can float List of standard attributes: Can be modified: - Name (String) - Last Name (String) - Phone Number (String) - Gender (Logical value - 1-undefined, 2-female, 3-male) - Last Seen (date&time) - Last message received (date&time) - Country (String) - Region (String) - City (String) - URL (String) - IP Address (String) - Referrer (String) - Timezone (String) - Device (Logical value) - Browser (String) - Browser language (String) - Browser version (String) - OS (String) - Hostname (String) - Resolution (String) - Company (String) - Status (Logical value; visitor - 1, user - 2) - Assigned to (Logical value) - Restricted to (String) - Unsubscribed (Logical value) - Facebook (String) - LinkedIn (String) - Twitter (String) - Google+ (String) - User ID (String) - Enable notifications (Logical value) Cannot be modified: - Page visits (Integer) - First seen (date&time) - Last seen (date&time) - Created (date&time) - Updated on (date&time) - Key (String) - WebPush enabled (Logical value) List of custom attributes To see a full list of custom attributes go to Settings --> App Settings --> User attributes. Where can I find user attributes: You can find user attributes by going into the user's profile. You can look the desired user up or just click on it in the People section. Attributes are listed on the right. The second place you can see user attributes is the conversations section (chat). It makes it easy to instantly recognise a user. Creating/Updating attributes: To learn about updating attributes visit this page. What can I use attributes for? Filters - For a start, you can filter your users using various attributes in the database. Click on the filter icon to see all the attributes you can filter with. - Depending on the attribute type you can use various filters: Automations - There are several action modules that involve attributes: - Client's attribute change - automation starts when someone changes the attribute. - Filters - i.e. you can trigger an action when an attribute has a desired value. - A/B test (link) - i.e. you can take action when the value of one attribute is bigger than another. - Attribute updated (link) - you can update an attribute. - Changing custom attribute (link) - you can add to or subtract from the value of an integer attribute. Conversations - You can use attributes to personalize messages. Using snippet tags will give you a quicker way to read attributes from the user's profile.
- You will find a full list of snippet tags in Tools --> Snippet tags. Note: On all user attributes you are allowed to send empty values, in which case the attribute's value is set to empty. There is one exception, and it is user_id, which we do not allow to be set to empty. We use user_id as an identifier, and one rather minor mistake could break your database, which would be extra troublesome to fix. All in all, when it comes to using user_id in our JavaScript integrations, you can set it or change it to another value, but you cannot remove the value - if you want to create another user (according to this instruction) you need to use UE.resetAuth(data). If you try to set it to an empty value, the result will be a 400 error and no pagehit will be added to the user's timeline.
https://docs.user.com/attributes/
2020-01-17T23:09:34
CC-MAIN-2020-05
1579250591234.15
[array(['https://app.userengage.com/media/uploads/149/2018-07-24-14-18-00-window.png', None], dtype=object) array(['https://app.userengage.com/media/uploads/149/2018-07-24-11-05-06-window-1.png', None], dtype=object) ]
docs.user.com
Multilingual Extension¶ For translating CKAN’s web interface see Translating CKAN. In addition to user interface internationalization, a CKAN administrator can also enter translations into CKAN’s database for terms that may appear in the contents of datasets, groups or tags created by users. When a user is viewing the CKAN site, if the translation terms database contains a translation in the user’s language for the name or description of a dataset or resource, the name of a tag or group, etc. then the translated term will be shown to the user in place of the original. Setup and Configuration¶ By default term translations are disabled. To enable them, you have to specify the multilingual plugins using the ckan.plugins setting in your CKAN configuration file, for example: # List the names of CKAN extensions to activate. ckan.plugins = multilingual_dataset multilingual_group multilingual_tag Of course, you won’t see any terms getting translated until you load some term translations into the database. You can do this using the term_translation_update and term_translation_update_many actions of the CKAN API, See API guide for more details. If you want to quickly test the term translation feature without having to provide your own translations, you can load CKAN’s test translations into the database by running this command from your shell: paster --plugin=ckan create-test-data translations See Command Line Interface for more details. Testing The Multilingual Extension¶ If you have a source installation of CKAN you can test the multilingual extension by running the tests located in ckanext/multilingual/tests. You must first install the packages needed for running CKAN tests into your virtual environment, and then run this command from your shell: nosetests --ckan ckanext/multilingual/tests See Testing CKAN for more information.
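As a rough illustration of loading your own translations, the sketch below calls the term_translation_update_many action through CKAN's action API from Python with the requests library. The site URL, API key, and translated terms are placeholder assumptions; check your CKAN version's API guide for the exact payload it expects.

import requests

# Placeholder values -- substitute your own site URL and a sysadmin API key
CKAN_URL = "https://demo.ckan.example.org"
API_KEY = "my-api-key"

payload = {
    "data": [
        {"term": "Parks", "term_translation": "Parques", "lang_code": "es"},
        {"term": "Parks", "term_translation": "Parcs", "lang_code": "fr"},
    ]
}

response = requests.post(
    f"{CKAN_URL}/api/3/action/term_translation_update_many",
    json=payload,
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json()["result"])

Once such translations are in the database, the multilingual plugins listed above will substitute them for the original dataset, group, and tag terms when a user browses the site in a matching language.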
http://docs.ckan.org/en/ckan-2.4.4/maintaining/multilingual.html
2017-12-11T07:52:15
CC-MAIN-2017-51
1512948512584.10
[]
docs.ckan.org
Table of Contents Product Index.
http://docs.daz3d.com/doku.php/public/read_me/index/1569/start
2017-12-11T07:30:53
CC-MAIN-2017-51
1512948512584.10
[]
docs.daz3d.com
Below is a list of the top-level property groups, as displayed in the Property Group View, when NVIDIA Iray is set as the active Render Engine. Each of the pages linked below provides descriptions of the property groups and properties found within these top-level groups. The Tone Mapping and Environment property groups are displayed here because the settings are (currently) unique to this render engine. In future versions, as these settings become more generalized in the application, these property groups will be moved to the Environment (WIP) pane.
http://docs.daz3d.com/doku.php/public/software/dazstudio/4/referenceguide/interface/panes/render_settings/engine/nvidia_iray/start
2017-12-11T07:31:58
CC-MAIN-2017-51
1512948512584.10
[]
docs.daz3d.com
To upgrade Ontolica modules you will need to run the Setup.exe program from the main folder of the installation package on one of the Web front-end servers in your SharePoint farm, preferably where the SharePoint Central Administration site is hosted. If the version that is being installed is later than the existing one, the Ontolica installer automatically detects it and provides the Upgrade option, which will be selected by default. The remaining two options are Keep and Remove. The installer does not make any changes if the Keep option is selected for the selected module, and it deletes the current version of the module completely if the Remove option is selected. After upgrading you will need to deactivate and then re-activate the Ontolica features first at the SharePoint farm level (Central Administration > System Settings > Manage farm features) and then at the site collection level (site collection root site > Site Actions > Site Settings > Site collection features). The upgrade process automatically sets the free evaluation license, which comes with the installer, as active. You will need to go to General Application Settings > Manage Ontolica Licenses and set a production or development license as active. Please remember that with most upgrades you will need a new license file, which can be requested at support.surfray.com.
http://docs.surfray.com/ontolica-search-preview/1/en/topic/upgrading-ontolica-modules
2017-12-11T07:19:30
CC-MAIN-2017-51
1512948512584.10
[]
docs.surfray.com
Update a scroll bar window xcall U_UPDATESB(scroll_id, maximum, current) Arguments scroll_id The ID of the scroll bar window. (n) maximum The new maximum value. (n) current The new current value. (n) Discussion U_UPDATESB updates a scroll bar window in UNIX and OpenVMS environments. (This routine does work on Windows, but the results are non‑standard.) A scroll bar’s progress/regress indicator shows the relationship of what is currently displayed versus the total possible. (For example, you may only be viewing 10 out of a possible 100 entries in a list.) This relationship is referred to as the “ratio of current to maximum.” The ratio of current to maximum is used to determine the placement of the progress/regress indicator within the scroll bar window. There are a few special cases where this ratio is superseded: The endpoint indicators, if used, are displayed according to the following rules: See also Examples The following example would place the indicator at position 1. This could indicate that the user is viewing the first part of the list. If present, the endpoint indicators would be showing “no more at beginning” and “more at end.” xcall u_updatesb(scrlid, 50, 1) The second example updates the indicator which could reflect that the user is viewing approximately the middle of the list. If present, the endpoint indicators would be showing “more at beginning” and “more at end.” xcall u_updatesb(scrlid, 50, 25)
http://docs.synergyde.com/tk/tkChap16UUPDATESB.htm
2017-12-11T07:37:23
CC-MAIN-2017-51
1512948512584.10
[]
docs.synergyde.com
Activity State Participants of an activity may want to share state information with other participants. This state information could help raise awareness of what the local user is doing in the activity and help coordinate actions. State is shared in a unidirectional fashion. Each session has its own state in the activity. That state can be published to other sessions in the system, but a session cannot modify the state of another session. Note: Multiple sessions always have distinct state within an activity even if those sessions belong to the same user. Activity state is simply a key-value pair where the keys are strings and the values are any plain JavaScript object. It might be visualized as such: { "openFiles": ["file1", "file2"], "available": true, "activeFile": "file1" } Publishing State State is set and removed via the setState() and removeState() methods on an Activity object. Publishing can mean adding new key-value pairs, or it can mean overwriting an existing key's value. Removing is preferred over setting states to null to represent the absence of state, unless null has a specific meaning in an application. Some examples are shown below: // Adds a single key / value activity.setState("key", "value"); // Adds multiple keys/values at once activity.setState({"key1": "value 1", "key2": "value2"}); // Removes a single key. activity.removeState("key"); // Removes multiple keys at once activity.removeState(["key1", "key2"]); Local Activity State You can access the currently published state of the local session via the state() method. The state is a JavaScript Map so you can easily get an individual state element via the Map's get(key) method. // Get all state for the local session const allState = activity.state(); // Get only the viewport key const viewport = activity.state().get("viewport"); Participants' Activity State The participants() method returns a set of ActivityParticipant objects. The state for each participant can be obtained using the ActivityParticipant.state() method. The state is returned as a JavaScript Map object. Modifying this map will have no effect on the participant's stored state. const participant = activity.participant("someSessionId"); // Get all state for a participant const stateMap = participant.state; // Get just the viewport key for a participant const participantViewport = participant.state.get("viewport"); // Get all viewports by sessionId const allViewports = activity.participants().map((p) => { return {sessionId: p.sessionId, viewport: p.state.get("viewport")}; }); Events The Activity object emits two events that are useful for determining how state is changing over time. Examples activity.on("state_set", (e) => { console.log(e.sessionId, e.state); }); activity.on("state_cleared", (e) => { console.log(e.sessionId, e.keys); });
https://docs.convergence.io/guide/activities/state.html
2017-12-11T07:40:57
CC-MAIN-2017-51
1512948512584.10
[]
docs.convergence.io
UDN Search public documentation: ScaleformWork Workflow Scaleform GFx Workflow Planning One of the most difficult aspects of development is estimating how much time a particular task will take. Scaleform introduces a new toolset into the mix - Flash - which can make this even more difficult. Keep in mind that unless your UI artist(s) already know or are at least familiar with Flash and its interface, there may be some growing pains in the beginning of the UI creation process. For instance, with Gears of War3, scenes might take anywhere up to two weeks to complete on the Flash side. however, nearer to the end of the development process, the artists were cranking out scenes in closer to two days. This is because they had to learn Flash, and also there was a learning process of how to best set up scenes for use with Scaleform and UE3. Once this learning process was out of the way, things flowed much more smoothly and quickly. The scripting side of creating Scaleform UIs is fairly trivial in general. There was no great learning curve, but depending on how quickly the artists can pump out scenes on the Flash side, scripting can quickly become the bottleneck in the pipeline. Design Have a solid design in mind before starting! One way to waste time is to finish a UI only to have to scrap it and start over or make sweeping changes because there was no clear direction or design from the beginning. Small iterations near the end will not cost nearly as much in terms of man-hours lost or headaches gained as a complete redesign. Rough In A good way to get started for the first few scenes is to actually make an ugly version of the scene yourself. Make the roughed in classes, add the clips to the movie with the right bagging, add simple roughed out animations, etc. This way, you can make sure it is laid out the way it needs to be for the script to work. Even if you only do this for test scenes, it is a great exercise. Debugging Flash is black magic that can only be learned from doing. Seeing how even simple things can fail is the best way to do this. Flesh out Once the rough scene is created and functional, the artist can go in and put his touches on the scene to make it look good. Having a scene already set up to start from should give the artist a better understanding of the proper scene hierarchy needed to make things work and be able to use that to create their own scenes from scratch later on; saving time for the scripters as they become the bottleneck in the development process. During this process, the artist will no doubt break things - it will undoubtedly be necessary in some cases to create the look and experience the design calls for. This generally only means small fixes on the scripting side and, over time, the artist will learn what breaks things and how to avoid them. At the same time, scripters will learn what changes cause things to break in what ways and be able to use that to diagnose problems down the line. Iterate Requirements change, designs change, systems change; it is a necessary evil in game development. Iteration is going to happen and you should be prepared for it. Hopefully, with a solid design agreed on at the beginning, the changes will only be minimal. This isn't always the case, though. There will be complete redesigns and entire scenes scrapped, but the goal is that it results in a better experience for the player so it should be worth it. Polish After things are working and looking pretty, you may want to do a polish pass to make sure the animations, the graphics, etc. 
are just right. Be careful! It is easy to break something without even trying. Is the benefit of the change you are making really worth any potential problems? If yes, then go for it. If not, leave it alone! Test, Test, Test Test, and test often. Test in the GFx player, but also test everything in the game. The GFx player gives a good representation of how the scene will look and behave, but there is no substitute for the real thing. The littlest changes can potentially break things that seem completely unrelated and the longer you go between testing, the harder it is to track down which change caused which problem. When in doubt, test.
https://docs.unrealengine.com/udk/Three/ScaleformWorkflow.html
2017-12-11T07:32:16
CC-MAIN-2017-51
1512948512584.10
[array(['rsrc/Three/ScaleformWorkflow/workflow.jpg', 'workflow.jpg'], dtype=object) ]
docs.unrealengine.com
See Also: StatusStrip Members System.Windows.Forms.StatusStrip replaces the System.Windows.Forms.StatusBar control. Special features of System.Windows.Forms.StatusStrip include a custom table layout, support for the form's sizing and moving grips, and support for the ToolStripStatusLabel.Spring property, which allows a System.Windows.Forms.ToolStripStatusLabel to fill available space automatically. The following items are specifically designed to work seamlessly with both System.Windows.Forms.ToolStripSystemRenderer and System.Windows.Forms.ToolStripProfessionalRenderer in all orientations. They are available by default at design time for the System.Windows.Forms.StatusStrip control: System.Windows.Forms.ToolStripStatusLabel System.Windows.Forms.ToolStripDropDownButton System.Windows.Forms.ToolStripSplitButton System.Windows.Forms.ToolStripProgressBar A System.Windows.Forms.StatusStrip control displays information about an object being viewed on a System.Windows.Forms.Form, the object's components, or contextual information that relates to that object's operation within your application. Typically, a System.Windows.Forms.StatusStrip control consists of System.Windows.Forms.ToolStripStatusLabel objects, each of which displays text, an icon, or both. The System.Windows.Forms.StatusStrip can also contain System.Windows.Forms.ToolStripDropDownButton, System.Windows.Forms.ToolStripSplitButton, and System.Windows.Forms.ToolStripProgressBar controls. The default System.Windows.Forms.StatusStrip has no panels. To add panels to a System.Windows.Forms.StatusStrip, use the ToolStripItemCollection.AddRange(ToolStripItem[]) method, or use the StatusStrip Items Collection Editor at design time to add, remove, or reorder items and modify properties. Use the StatusStrip Tasks Dialog Box at design time to run common commands. Although System.Windows.Forms.StatusStrip replaces and extends the System.Windows.Forms.StatusBar control of previous versions, System.Windows.Forms.StatusBar is retained for both backward compatibility and future use if you choose.
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Windows.Forms.StatusStrip
2017-12-11T07:16:57
CC-MAIN-2017-51
1512948512584.10
[]
docs.go-mono.com
All Testing will take place on our Sandbox environment. Sandbox will either directly reflect or be slightly ahead of Prod. Our plan for how to manage API versions is currently under consideration. In order to be able to test on Sandbox, you will need: - An existing API agreement with Blueprint Title (please ask your primary Blueprint contact about this if you don't have one) - To be set up as a Client with at least one Client Admin User on Sandbox. This is a manual process that our Engineering team can get going for you. - To log in to our Sandbox environment, set up a Bearer Token, and enter your testing Callback URL(s) Once these steps are complete, you should be able to test the API using the hostname sandbox.blueprinttitle.com in place of app.blueprinttitle.com. Please note that many events (such as certain Callback events being triggered) will require Manual intervention from a Blueprint team member. We are still in the process of defining how this will normally work, so please expect a bit of tinkering in this respect. We're here to help.
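As a purely illustrative sketch of what a sandbox test call might look like once your Bearer Token is set up, the snippet below sends an authenticated request to the sandbox hostname with Python's requests library. The /v1/orders path and the response handling are hypothetical placeholders, not endpoints documented on this page; only the sandbox.blueprinttitle.com hostname and the Bearer Token scheme come from the text above.

import requests

SANDBOX_HOST = "https://sandbox.blueprinttitle.com"
BEARER_TOKEN = "your-sandbox-bearer-token"  # placeholder token from your Client Admin setup

# Hypothetical endpoint, used only to show the hostname swap and the auth header
response = requests.get(
    f"{SANDBOX_HOST}/v1/orders",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
)
print(response.status_code, response.json())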
https://api-docs.blueprinttitle.com/reference/testing
2022-06-25T07:03:43
CC-MAIN-2022-27
1656103034877.9
[]
api-docs.blueprinttitle.com
CreateTopic Creates a topic to which notifications can be published. Users can create at most 100,000 standard topics (at most 1,000 FIFO topics). For more information, see Creating an Amazon SNS topic in the Amazon SNS Developer Guide. FifoTopic – Set to true to create a FIFO topic. Policy – The policy that defines who can access your topic. By default, only the topic owner can publish or subscribe to the topic. The following attribute applies only to server-side encryption: The following attributes apply only to FIFO topics: FifoTopic – When this is set to true, a FIFO topic is created. ContentBasedDeduplication – Enables content-based deduplication for FIFO topics. By default, ContentBasedDeduplication is set to false. If you create a FIFO topic and this attribute is false, you must specify a value for the MessageDeduplicationId parameter for the Publish action. When you set ContentBasedDeduplication to true, Amazon SNS uses a SHA-256 hash to generate the MessageDeduplicationId using the body of the message (but not the attributes of the message). (Optional) To override the generated value, you can specify a value for the MessageDeduplicationId parameter for the Publish action. For a FIFO (first-in-first-out) topic, the name must end with the .fifo suffix. Type: String Required: Yes - Tags.member.N The list of tags to add to a new topic. Note To be able to tag a topic on creation, you must have the sns:CreateTopic and sns:TagResource permissions. Type: Array of Tag objects Required: No - ConcurrentAccess Can't perform multiple operations on a tag simultaneously. Perform the operations sequentially. HTTP Status Code: 400 - Examples The structure of AUTHPARAMS depends on the signature of the API request. For more information, see Examples of the complete Signature Version 4 signing process (Python) in the Amazon General Reference. Example This example illustrates one usage of CreateTopic. For more information about using this API in one of the language-specific Amazon SDKs, see the following:
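For illustration, here is a minimal sketch of calling CreateTopic from Python with boto3 to create a FIFO topic with content-based deduplication. The topic name, region, and tag values are placeholders, and boto3 itself is not part of this API reference.

import boto3

# Placeholder region and names, for illustration only
sns = boto3.client("sns", region_name="cn-north-1")

response = sns.create_topic(
    Name="orders-topic.fifo",  # FIFO topic names must end with the .fifo suffix
    Attributes={
        "FifoTopic": "true",
        "ContentBasedDeduplication": "true",
    },
    Tags=[{"Key": "team", "Value": "payments"}],
)
print(response["TopicArn"])

With ContentBasedDeduplication enabled as above, Publish calls do not need to supply a MessageDeduplicationId explicitly.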
https://docs.amazonaws.cn/sns/latest/api/API_CreateTopic.html
2022-06-25T08:15:15
CC-MAIN-2022-27
1656103034877.9
[]
docs.amazonaws.cn
Features imported from CDX and CDXML files The cdx format of CambridgeSoft's ChemDraw is imported and exported by Marvin. The import of cdxml format is supported, but export isn't. Atom properties: The following properties are supported: charge, isotope, substituents, free sites, unsaturation, reaction stereo, enhanced stereochemistry, radical, ring bond count. Element: The 'Element' type node is read as a single atom. For the supported properties see the supported Atom properties list. Element List: Element Lists are read as Atom List (or not list). For the supported properties see the supported node properties list. Label: Labels are converted to an S-group and imported in its contracted state. Nickname: Nicknames are converted to an S-group and imported in its contracted state. Bond types: Single bonds, double bonds, query bonds, topology, reaction centers are imported. See details. Fragment Generic Label: Generic Labels are read as Generic query atoms. Attachment Points: Attachment Points are imported. Reaction arrows: All types of arrows are imported, but only one arrow per file. Bracketed groups: All groups are imported. Alternative Group: Alternative Groups are read as R-groups, up to two connections per R-group. Anonymous Alternative Group: An Anonymous Alternative Group is imported as an R-group and is assigned the R-group number n+1 where n is the largest R-group number in the file. R-logic Link Node: Link Node is read as Link Node. Variable Attachment and Multi-Center Attachment: These are imported as Multicenter S-groups. Text Box: Marvin reads the position and the formatted text. Basic Graphic Objects: Ellipses, rectangles and some of the symbols are read. See details. Graphic Objects: Tables and TLC plate drawings are imported as graphics. See details. Formatted text in atom labels: supported since 6.2.0. Code: cdx, cdxml ElementListNickname ElectronFlow arrow Complex graphical objects (e.g. laboratory equipment drawings, biological drawings) Only ASCII characters are imported. R-logic import is supported, export not yet. Chemical structures Formatted text R-groups: R-logic is not supported. Atom and bond query properties Rectangles, rounded rectangles and ellipses
https://docs.chemaxon.com/display/lts-europium/chemdraw-sketch-file-cdx-cdxml.md
2022-06-25T08:07:34
CC-MAIN-2022-27
1656103034877.9
[]
docs.chemaxon.com
Low-poly game character. 9 Skins Video Preview: Polycount of Character: Rigged: (Yes) Rigged to Epic skeleton: (Yes) If rigged to the Epic skeleton, IK bones are included: (Yes) Animated: (No) Number of Animations: Retarget Vertex counts of characters: 17 195 Number of characters: 1 Number of Materials: 45 Number of Textures: 90 Texture Resolutions: (4 096px x 4 096px)
https://docs.unrealengine.com/marketplace/ko/product/sci-fi-player-woman
2022-06-25T07:22:05
CC-MAIN-2022-27
1656103034877.9
[]
docs.unrealengine.com
While a process instance is running, it can encounter several different types of errors. Depending on the type of error, the effect on the runtime behavior of the process will be different. The three types of process errors are: This page describes each type of process error that you can encounter and where to view the errors. An unattended node is a node that uses system logic to perform a task. If an error occurs on an unattended node: num_problem_tasks process metric in process reports. An attended node is a node that requires a user to perform a task. See Automatic Error Handling for more information. Task errors and process errors will generate an alert. The following task errors will, by default, create an alert: The following process errors will, by default, create an alert: Process errors are visible from: When errors occur, they are initially marked as unresolved. They become resolved when the following events happen: By default, in Appian Designer and the Process Modeler, only unresolved errors are shown for the developer's convenience. However, you can view all errors by selecting the Show resolved errors checkbox in the All Process Errors and Process Details dialogs in Appian Designer and the Process Modeler, respectively. Resolved errors will also display the resolution date, time, and the user who resolved the error.
https://docs.appian.com/suite/help/22.2/Process_Errors.html
2022-06-25T08:24:59
CC-MAIN-2022-27
1656103034877.9
[]
docs.appian.com
.NET SDK v3.0 Release Notes - November 2nd, 2018

We have released version 3.0 of the Remote Pay SDK for .NET.

IMPORTANT: Deprecation of Windows 7 Support. Microsoft's extended support for Windows 7 is ending in January 2020. Clover will no longer support Windows 7 in future versions of the .NET Remote Pay SDK.

Source code and more information: You can find detailed information about and source code for this release in our GitHub repo for .NET. For questions and feedback, use community.clover.com. To receive third-party developer communications from Clover, sign up here.
https://docs.clover.com/docs/net-sdk-v30-release-notes
2022-06-25T07:14:52
CC-MAIN-2022-27
1656103034877.9
[]
docs.clover.com
@NotThreadSafe public class AlluxioProperties extends Object

The Configuration class is supposed to handle the type conversion on top of the source of truth of the properties. For a given property key, the order of preference of its value is (from highest to lowest): (1) runtime config, (2) system properties, (3) properties in the specified file (site-properties), (4) default property values.

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public AlluxioProperties()

public AlluxioProperties(AlluxioProperties alluxioProperties)
alluxioProperties - properties to copy

@Nullable public String get(PropertyKey key)
key - the key to query

public void clear()

public void put(PropertyKey key, String value, Source source)
key - key to put
value - value to put
source - the source of this value for the key

public void set(PropertyKey key, String value)
key - key to put
value - value to put

public void merge(Map<?,?> properties, Source source)
properties - the source Properties to be merged
source - the source of the properties (e.g., system property, default, etc.)

public void remove(PropertyKey key)
key - key to remove

public boolean isSet(PropertyKey key)
key - the key to check

public boolean isSetByUser(PropertyKey key)
key - the key to check

public Set<Map.Entry<PropertyKey,String>> entrySet()

public Set<PropertyKey> keySet()

public Set<PropertyKey> userKeySet()

public void forEach(java.util.function.BiConsumer<? super PropertyKey,? super String> action)
action - the operation to perform on each key value pair

public AlluxioProperties copy()

public void setSource(PropertyKey key, Source source)
key - property key
source - the source

public Source getSource(PropertyKey key)
key - property key

public String hash()
https://docs.alluxio.io/os/javadoc/2.3/alluxio/conf/AlluxioProperties.html
2022-06-25T08:55:58
CC-MAIN-2022-27
1656103034877.9
[]
docs.alluxio.io
Adopting CFEngine What does adoption involve? CFEngine is a framework and a methodology with far reaching implications for the way you do IT management. The CFEngine approach asks you to think in terms of promises and cooperation between parts; it automates repair and maintenance processes and provides simple integrated Knowledge Management. To use CFEngine effectively, you should spend a little time learning about the approach to management, as this will save you a lot of time and effort in the long run. The Mission Plan At CFEngine, we refer to the management of your datacentre as The Mission. The diagram below shows the main steps in preparing mission control. Some training is recommended, and as much planning as you can manage in advance. Once a mission is underway, you should expect to work by making small corrections to the mission plan, rather than large risky changes. Planning does not mean sitting around a table, or in front of a whiteboard. Successful planning is a dialogue between theory and practice. It should include test pilots and proof-of-concept implementations. Commercial or Free? The first decision you should make is whether you will choose a route of commercial assistance or manage entirely on your own. You can choose different levels of assistance, from just training, to consulting, to commercial versions of the software that simplify certain processes and offer extended features. At the very minimum, we recommend that you take a training course on CFEngine. Users who don't train often end up using only a fraction of the software's potential, and in a sub-optimal way. Think of this as an investment in your future. The advantages of the commercial products include greatly simplified set up procedures, continuous monitoring and automatic knowledge integration. See the CFEngine Nova Supplement for more information. Installation or Pilot You are free to download Community Editions of CFEngine at any time to test the software. There is a considerable amount of documentation and example policy available on the cfengine.com web-site to try some simple examples of system management. If you intend to purchase a significant number of commercial licenses for CFEngine software, you can request a pilot process, during which a specialist will install and demonstrate the commercial edition on site. Identifying the Team CFEngine will become a core discipline in your organization, taking you from reactive fire-fighting to proactive and strategic practices. You should invest in a team that embraces its methods. The CFEngine team will become the enabler of business agility, security, reliability and standardization. The CFEngine team needs to have administrator or super-user access to systems, and it needs the headroom or slack to think strategically. It needs to build up processes and workflows that address quality assurance and minimize the risk of change. All teams are important centres for knowledge, and you should provide incentives to keep the core team strong and in constant dialogue with your organization's strategic leadership. Treat your CFEngine team as a trusted partner in business. Training and Certification Once you have tried the simplest examples using CFEngine, we recommend at least three days of in-depth training. We can also arrange more in-depth training to qualify as a CFEngine Mission Specialist. 
Mission Goal and Knowledge Management

The main aim of Knowledge Management is to learn from experience, and use the accumulated learning to improve the predictability of workflow processes. During every mission, there will be unexpected events, and an effective team will use knowledge of past and present to respond to these unpredictable changes with confidence. The goal of an IT mission is a predictable operational state that lives up to specific policy-determined promises. You need to work out what this desired state should be before you can achieve it. No one knows this exactly in advance, and most organizations will change course over time. However, with good planning and understanding of the mission, such adjustments to policy can be small and regular. Many small changes are less risky than a few large changes, and the culture of agility keeps everyone on their toes. Using CFEngine to run your mission, you will learn to work pro-actively, adjusting the system by refining the mission goal rather than reacting to unexpected events. To work consistently and predictably, even when understaffed, requires a strategy for describing system resources, policy and state. CFEngine can help with all of these. See the Special Topics Guide on Knowledge Management. A major component of a successful mission is documenting intentions. What is the goal, and how does it break down into concrete, achievable states? CFEngine can help you in this process, with training and Professional Services, but you must establish a culture of commitment to the mission and learn how to express these commitments in terms of CFEngine promises.

Build, Deploy, Manage, Audit

The four mission phases are sometimes referred to as Build, Deploy, Manage, and Audit. A mission is based on decisions and resources that need to be put in place.
https://docs.cfengine.com/docs/3.10/guide-special-topics-adopting-cfengine.html
2022-06-25T07:28:41
CC-MAIN-2022-27
1656103034877.9
[array(['./adopting-cfengine-mission-plan.png', 'Mission Plan'], dtype=object) ]
docs.cfengine.com
Inheritance Mapping

When a persistent object is saved for the first time or the database schema is updated, XPO collects a list of the persistent types for the objects being persisted and creates all the necessary tables and relations between them. To mark the types you want to persist, use the PersistentAttribute and NonPersistentAttribute attributes. By default, each persistent object type is stored in its own table with a distinctive set of columns associated with this type. The names of these tables and columns precisely resemble the type and property (field) names. This mapping behavior can be customized by applying the MapInheritanceAttribute, DbTypeAttribute and PersistentAttribute attributes.

XPO provides two general solutions for mapping inheritance into a relational database:

- Single Table Inheritance - map the entire class hierarchy to a single table.
- Class Table Inheritance - map each class to its own table.

These strategies are not mutually exclusive. In one hierarchy, you can mix patterns. For instance, you can have several classes pulled together by Single Table Inheritance and use Class Table Inheritance for a few specific cases. Note that this will increase the complexity.

Map Hierarchy To A Single Table

Following this strategy, you store all persistent properties and fields of a class in the same table as the properties of its parent class. In this instance, you should apply the MapInheritanceAttribute attribute to a persistent class. The attribute's MapInheritanceAttribute.MapType property must be set to the MapInheritanceType.ParentTable value.

Single Table Inheritance

using DevExpress.Xpo;

public class Person : XPObject {
    public string Name {
        get { return fName; }
        set { SetPropertyValue(nameof(Name), ref fName, value); }
    }
    string fName = "";
}

[MapInheritance(MapInheritanceType.ParentTable)]
class Customer : Person {
    public string Preferences {
        get { return fPreferences; }
        set { SetPropertyValue(nameof(Preferences), ref fPreferences, value); }
    }
    string fPreferences = "";
}

[MapInheritance(MapInheritanceType.ParentTable)]
public class Employee : Person {
    public int Salary {
        get { return fSalary; }
        set { SetPropertyValue(nameof(Salary), ref fSalary, value); }
    }
    int fSalary = 1000;
}

[MapInheritance(MapInheritanceType.ParentTable)]
public class Executive : Employee {
    public int Bonus {
        get { return fBonus; }
        set { SetPropertyValue(nameof(Bonus), ref fBonus, value); }
    }
    int fBonus = 100;
}

Advantage: it puts everything in one place, which makes modifications easier and avoids joins.

Disadvantage: the table can grow large and wastes space, since each row has to have columns for all possible subtypes, which leads to empty columns.

Map Each Class To Its Own Table

This is the simplest relationship between the classes and the tables. As a result, a single table is created for each class in a hierarchy. To use this type of inheritance mapping, you should apply the MapInheritanceAttribute attribute with the MapInheritanceAttribute.MapType property set to the MapInheritanceType.OwnTable value.

Class Table Inheritance

Disadvantage: multiple joins are needed to load a single object, which usually reduces performance.

Member Table

Task-Based Help
- Add Persistence to an Existing Hierarchy: Change the Base Inheritance
- Add Persistence to an Existing Hierarchy: Session-less Persistent Objects
- How to: Change Inheritance Mapping
- How to: Map to Custom Tables (Views) and Columns
- How to: Link Classes Located in Different Assemblies
https://docs.devexpress.com/XPO/2125/create-a-data-model/inheritance-mapping
2022-06-25T08:13:56
CC-MAIN-2022-27
1656103034877.9
[array(['/XPO/images/orinh_single4930.png', 'ORInh_Single'], dtype=object) array(['/XPO/images/orinheachclass4931.png', 'ORInhEachClass'], dtype=object) ]
docs.devexpress.com
In-App Replies

This page contains an overview of the content available in the In-App Replies sections of the Instabug Docs for iOS apps.

Integrating Instabug: To be able to use Instabug's In-App Replies product, you must first integrate the SDK.

Your users can send in questions, feedback, bugs, and more; you can reply to them! This section covers the replies and how to configure your users' ability to reply to your messages. The breakdown is as follows:

1. Show Replies List: If you need to manually show the replies page, you can do that via the API detailed in this section.
2. Reply to Users: This section explains how to navigate the dashboard so that you can reply to submitted reports (bugs, feedback, or questions), crashes, and surveys. Also detailed is an explanation of the chat workflow.
3. Managing Notifications: Learn how to set up push notifications, disable in-app notifications, as well as customize these notifications for your own purposes.
4. Event Handlers: If you wish to have any blocks of code run when a new message is received, this section details just how you can do that.
5. Disabling/Enabling In-App Replies: Thinking of completely hiding the replies page? This section covers how to do that.
https://docs.instabug.com/docs/ios-in-app-replies
2022-06-25T07:32:05
CC-MAIN-2022-27
1656103034877.9
[]
docs.instabug.com
Cyborg architecture

Cyborg's design can be described by the following diagram:

cyborg-api - cyborg-api is a Cyborg service that provides a REST API interface for the Cyborg project. It supports POST/PUT/DELETE/GET operations and interacts with cyborg-agent and cyborg-db via cyborg-conductor.

cyborg-conductor - cyborg-conductor is a Cyborg service that coordinates interaction and DB access between cyborg-api and cyborg-agent.

cyborg-agent - cyborg-agent is a Cyborg service that is responsible for interaction with accelerator backends via the Cyborg Driver. For now, the only implementation in play is the Cyborg generic driver. It also handles the communication with the Nova placement service. Cyborg-Agent will also write to a local cache for local accelerator events.

Vendor drivers - Cyborg can be integrated with drivers for various accelerator device types, such as FPGA, GPU, NIC, and so forth. You are welcome to extend your own driver for a new type of accelerator device.
https://docs.openstack.org/cyborg/yoga/user/architecture.html
2022-06-25T07:40:38
CC-MAIN-2022-27
1656103034877.9
[array(['../_images/cyborg-architecture.png', '../_images/cyborg-architecture.png'], dtype=object)]
docs.openstack.org
Divine - Social Media Icons Setup This theme uses the Simple Social Icons Plugin, which can be obtained here or by searching in Dashboard>Plugins>Add New. In your Widgets menu, drag the widget labeled “Simple Social Icons” into a widgetized Area. – The demo displays them in the Nav Social Menu area. Configure the widget by choosing a title, icon size and color, and the URLs to your various social profiles. Here is a screenshot of the settings used in the Divine demo: Note: In order for the Simple Social Icons to display from the Nav Social Menu widget, a menu must be created and assigned to the Secondary Navigation Menu position. This menu can be empty.
https://docs.restored316.com/article/1188-divine-social-media-icons-setup
2022-06-25T08:38:53
CC-MAIN-2022-27
1656103034877.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57c34265903360342852ecfb/images/5817ca22c697915f88a3b218/file-ar7fMWeTAQ.png', None], dtype=object) ]
docs.restored316.com
Dynamic Data Binding

I want my test to dynamically bind to a different data set at run-time. The data I want the test to use will depend on certain conditions of my application, but the test steps will always be the same.

Solution 1

Start with a master test. This test may be data bound, but it's not required. Use some criteria and your code to decide which data file you want to use during that test run. Swap the physical file on disk with the real file you want to use in your data bound test. Have your master test call a sub-test that is bound to this data file. For example: Create a sub-test and bind it to an Excel spreadsheet named Book1.xlsx. In a coded step in your main test, swap Book1.xlsx for the file you actually want to be used by your sub-test. Keep in mind that when Test Studio binds a test to an Excel file, it actually makes a copy of the file in the Project\Data folder. Ensure you swap the file located in the Project\Data folder, and not the file in its original location.

Solution 2

Put your data into a SQL database and use T-SQL to pull the data you want from the database. Optionally, you can generate a random number in your T-SQL to point to the row of data to pull.

Solution 3

You can create your own Test Extension DLL and implement OnInitializeDataSource, in which you can do pretty much anything in code. Place the DLL into the following directory:

- C:\Program Files (x86)\Telerik\Test Studio\Bin\Plugins\

As of release 2017 R3 (v. 2017.3.1010), the default installation path for new installations is C:\Program Files (x86)\Progress\Test Studio. It will be called for every test you execute. Whatever OnInitializeDataSource creates and returns will be used as the data source, overriding any file to which the test may have been previously bound.
https://docs.telerik.com/teststudio/knowledge-base/data-driven-testing-kb/dynamic-data-binding
2022-06-25T07:01:51
CC-MAIN-2022-27
1656103034877.9
[]
docs.telerik.com
The Application Maps are JSON files consisting of various Applications with definitions, which can be used while creating Business Policies. In the Operator panel, click the relevant option to perform the following activities.

- Upload Application Map – Allows you to upload the JSON file with the applications and definitions. See Upload Application Map.
- Clone Application Map – Creates a new Application Map by cloning an existing Application Map file. See Clone Application Map.
- Modify Application Map – Allows you to add or update the application details available in the selected Application Map. See Modify Application Map.
- Refresh Application Map – Updates the Application definitions listed in the selected Application Maps. See Refresh Application Map.
- Push Application Map – Pushes the latest updates of the Application definitions available in the Application Maps to the associated SD-WAN Edges. See Push Application Map.
- Delete Application Map – Deletes the selected Application Maps. You cannot delete a map that has been assigned to an Operator profile.
https://docs.vmware.com/en/VMware-SD-WAN/5.0/vmware-sd-wan-operator-guide/GUID-FC27C0E6-558F-4720-861E-E6B1235C2800.html
2022-06-25T09:08:50
CC-MAIN-2022-27
1656103034877.9
[array(['images/GUID-20E38DC8-53EB-4A3A-9062-ADA3D007042C-low.png', None], dtype=object) ]
docs.vmware.com
The user may also be notified about long-standing submissions based on the parameters set.

Abandoned Submission

By turning on the e-mail notifications option, the user can get feedback about his/her activity (bulk upload as well as individual registration) from the sender's address. Since CompReg version 19.13.0-1907171250, upon assigning a submission to a user, an e-mail is sent to inform the new assignee.

{primary} Before configuring the e-mail settings, please visit the Users page and set up an e-mail address for the user(s) who are about to receive the notification after the actions.

Bulk upload reports: The following variables can be used in the template after attemptSummary:

library: e.g. LIBRARY_1, LIBRARY_2. This is an identifier for a bulk uploaded file. It can be seen on the Upload page and the Submission page.
startDate: Start date of the upload.
endDate: End date of the upload.
successCount: Number of successfully registered compounds.
failedCount: Number of failed compounds.

NOTE: The counts above may sometimes not indicate the correct number because of asynchronous registrations. The bulk upload summary page will always show accurate counts.

From version 19.21.2-2001241202 a new variable is available to be used in the bulk upload notification e-mail template: ${attemptSummary.uploadId}. If you change (${attemptSummary.attemptId}) to (${attemptSummary.uploadId}) in the 'E-mail template for bulk upload report', the e-mail notification will contain the id of the submission: Registration attempt (id of the submission) finished. If you change (${attemptSummary.attemptId}) to (http:// ...server address.... /client/index.md#/summary/${attemptSummary.uploadId}), the e-mail notification will contain a link to the submission: Registration attempt (link to the submission) finished.

Individual registrations: The following variables can be used in the template:

pcn: PCN - Parent Compound Number
cn: CN - Compound Number
lnbref: LnbRef - Acronym for (Electronic) Laboratory NoteBook (LNB) Reference
lotid: A number representing the lot
status: Submission status (e.g. Invalid LnbRef)
message: Message in case of failed registration, e.g. LnbRef is not valid according to the regular expressions.
userid: The user who initiated the registration
creatorid: The submitter of the compound
source: e.g. Registrar
created: Date when the registration was initiated
submissiontype: e.g. AutoRegister, AdvancedRegister

Submission assignments: The following variables can be used in the template:

actor: Who did the assignment.
assignedOn: When the assignment happened.
submissionId: Link to the submission, e.g. baseUrl + /client/index.html#/submission?submissionId=66
id: From version 19.21.2-2001241202 you can use this to create your own link to the submission. You can change this: ${submissionId} to this: http:// ...server link..../client/index.html#/submission?submissionId=${id} to get a link to the submission.
previousAssignee: Previous assignee.
createdBy: Created by this user.
createdOn: When the submission has been created.
modifiedBy: Modified by this user.
modifiedOn: When the submission was modified.

{info} If the "Automatically assign failed submissions to submitter" switcher is turned ON on the Administration/General settings page, the system will not send an e-mail about the auto-assignment to the submitter.

The correct settings of the SMTP server are essential for e-mail forwarding, but in order to be able to verify the settings, you must always save the changes first.
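For illustration, several of the bulk upload variables above can be combined into one report template. This is only a sketch: the subject wording, layout, and the server address placeholder are not defaults shipped with the product and should be adapted to your installation.

Subject: Registration attempt ${attemptSummary.uploadId} finished
Library ${attemptSummary.library} was processed between ${attemptSummary.startDate} and ${attemptSummary.endDate}.
Successfully registered compounds: ${attemptSummary.successCount}
Failed compounds: ${attemptSummary.failedCount}
Summary page: http://<your-server>/client/index.html#/summary/${attemptSummary.uploadId}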
https://docs.chemaxon.com/display/lts-europium/notifications.md
2022-06-25T08:17:30
CC-MAIN-2022-27
1656103034877.9
[]
docs.chemaxon.com
Restricting rows in datasets based on SQL query

In CDP Data Visualization, you can easily restrict the table rows in the dataset by changing the SQL definition of that dataset. SQL-defined datasets make it easy to limit their content to specific rows.

- Switch to the Dataset Detail interface, and edit the SQL text window by applying the following statement:

select county, stname, ctyname, tot_pop, tot_male, tot_female
from main.us_counties
where stname in ('Arizona','New Mexico', 'California','Nevada','Colorado','Utah')

- Click Save.
- In the Refresh dataset table column information modal window, click Close.
- Switch back to the Data Model interface, click Show Data, and notice that the dataset is limited to the states specified in the SQL statement.
- If you were to test it by creating a simple map visual on the dataset, it would look something like this:
https://docs.cloudera.com/data-visualization/7/work-with-data/topics/viz-edit-dataset-query-restrict-rows.html
2022-06-25T08:33:34
CC-MAIN-2022-27
1656103034877.9
[]
docs.cloudera.com
Disabling/Enabling APM

This page explains the API to disable and enable App Performance Monitoring on Android. You can disable APM by calling the following API right after initializing the SDK. This completely prevents the SDK from collecting any performance data or executing any APM related logic:

// Enable/disable APM
APM.setEnabled(true);
APM.setEnabled(false);

// Enable/disable APM
APM.setEnabled(true)
APM.setEnabled(false)

Please note that disabling APM will also clear any data that has already been collected and stored on your users' devices but hasn't been synced to our servers yet.
https://docs.instabug.com/docs/android-apm-disabling-enabling
2022-06-25T08:15:00
CC-MAIN-2022-27
1656103034877.9
[]
docs.instabug.com
Jobs¶ Jobs are for scheduled execution of Automation Tasks and Workflows. Jobs can be set to execute on a schedule, at one specific point in time, and/or execute manually (on-demand). Jobs are linked to existing Tasks or Workflows, and allow for custom configuration options. Jobs can be associated with Instances, Servers, or have no association, such as a job for an SSH task. Jobs allow for scheduled execution of nearly anything as Tasks Types include Bash, Powershell, HTTP/API, Ansible, Chef, Puppet, Groovy, Python, jRuby, Javascript, and library scripts and templates, which can be configured for resource, remote, or local execution targets. If you need something to execute on a schedule, Morpheus Jobs can deliver. Jobs are configured in the JOBS tab, and the JOB EXECUTIONS tab contains Job execution history with result output. Jobs¶ Role Permissions¶ Provisioning: Jobs None: Cannot access Provisioning > Jobs > Jobs tab Read: Can access Provisioning > Jobs > Jobs tabbut cannot create, edit, or delete Jobs Full: Full permissions to create, view, edit, and delete Jobs Provisioning: Job Executions None: Cannot access Provisioning > Jobs > Job Executions tab Read: Can access and view Provisioning > Jobs > Job Executions tabincluding job execution history, status, and Job output Creating Jobs¶ Note Jobs require existing Tasks or Workflows. See the appropriate section of Morpheus docs for more on creating Tasks and Workflows. To create a new job: Navigate to Provisioning > Jobs Select + ADD Enter the following NAME: Name of the Job in Morpheus JOB TYPE: Task: Job will execute a selected Task Workflow: Job will execute a selected Workflow ENABLED: When checked, the Job will run as scheduled Select NEXT Configure the Job - Task Jobs TASK: Select target Task. If relevant to the Task,. - Workflow Jobs WORKFLOW: Select target Workflow. If relevant to the Workflow,. Select NEXT Select COMPLETE Creating and Running Security Scan Jobs¶ Security Scan Jobs allow users to create and schedule SCAP program (Security Content Automation Program) scans for groups of managed systems. These Jobs can call in existing SCAP packages and checklists, which are used to scan the targeted systems on-demand or on a scheduled basis. Historical data for these scans is saved in the Job Execution list and in the software section of server detail pages. Detailed scan reports can also be viewed for each system as needed once the scan is complete. See the SCAP documentation on the NIST website for information on developing your own scanning procedures. Note Creating and editing Security Scan Jobs requires the “Security: Scanning” Role permission set to Full. Viewing Security Scan Jobs and seeing the results for scanned servers requires at least a Read-level permission. 
Add a new Security Package¶ Navigate to Provisioning > Jobs > Security Packages Tab Click +ADD > SCAP Package Provide a name in addition to a URL to source the package Click SAVE CHANGES Note Currently URL is the only source option for security packages Add a new Security Scan Job¶ Navigate to Provisioning > Jobs > Jobs Tab Click +ADD Set the Job type to “Security Scan Job” and provide a friendly name for the Job Click NEXT Select a security package, see the previous section to add a new one Enter your Scan Checklist (XML document) and Security Profile (XCCDF document), more information on these can be found in the SCAP documentation linked above Set a schedule or leave as Manual to only run this scan on-demand (new execution schedules can be created in Provisioning > Automation if needed) Set the context, can be Instance or Server. Select as many Instances or Servers as needed for this scanning run Click NEXT After final review, click COMPLETE Running Security Scan Jobs¶ Once created, Security Scan Jobs will run based on the configured schedule. They can also be run on-demand when needed: Navigate to Provisioning > Jobs > Jobs Tab Click MORE Click “Execute” Viewing Completed Security Scan Jobs¶ To view a list of completed Security Scan Jobs (and Jobs of other types): Navigate to Provisioning > Jobs > Job Executions Tab Additional details can be viewed by clicking (i) To view scan results for specific servers: Navigate to the server detail page (Infrastructure > Hosts > Virtual Machines tab > Selected server) Click on the Software tab part way down the page, then click on the Security subtab High level details on previous scans is viewable here To view the full report, click (i) Security Drift¶ In addition to tracking the scan results over time as described in the previous section, Morpheus also provides detail into the change from the most recent scan to the one prior. This information is displayed in the Software tab (and Security subtab) of the detail page for the virtual machine (accessed from the associated Instance detail page or at Infrastructure > Hosts > Virtual Machines). The information surfaced by this view is listed below. If there is no change, you’ll simply see a “No Drift” message. Title: The criteria for the test that has newly passed or failed Severity: The severity level for the indicated security requirement Result: The indicator for whether this test has newly passed or failed New Pass: The number of tests that have newly passed compared to the prior scan New Fail: The number of tests that have newly failed compared to the prior scan Status: An indicator of the change in security posture since the prior scan. A net gain in test failures will yield a negative status indicator while net gains in passed tests (or no change) will yield a positive status indicator Job Executions¶ The Job Executions tab contains execution history of completed Jobs, including any process outputs and error messages. Information included in the Job Executions list include: JOB: The name of the executed Job DESCRIPTION: When the Job Execution is expanded, the name of each executed task in the Job is listed in this column TYPE: The Job type, either Task or Workflow. When a Workflow Job is expanded, each individual Task making up the Workflow is identified as a Task in this column START DATE: The date and time the Job Execution kicked off. When expanded, the start date and time of each individual Task are also shown ETA/TIME: The time taken for the Job to complete. 
When expanded, the time to complete each individual Task is also shown ERROR: Any errors surfaced are shown here. When expanded, any surfaced errors for individual Tasks are also shown Click the ⓘ icon at the end of the row for a Job Execution or individual Task (when a Job Execution is expanded) to view the Execution Detail modal which provides the following information: Name of the Job or individual Task Description Start Date Created By Duration Status: Completed, Running, or Failed PROCESS OUTPUT: Returned values and outputs from the completed Job ERRORS: Any errors surfaced from the completed Job
https://docs.morpheusdata.com/en/5.2.16/provisioning/jobs/jobs.html
2022-06-25T07:31:08
CC-MAIN-2022-27
1656103034877.9
[array(['../../_images/1add_package.png', '../../_images/1add_package.png'], dtype=object) array(['../../_images/8securityDrift.png', '../../_images/8securityDrift.png'], dtype=object)]
docs.morpheusdata.com
mars.tensor.random.gamma

mars.tensor.random.gamma(shape, scale=1.0, size=None, chunk_size=None, gpu=None, dtype=None)[source]

Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated "k") and scale (sometimes designated "theta"), where both parameters are > 0.

- Parameters
shape (float or array_like of floats) – The shape of the gamma distribution. Should be greater than zero.
scale (float or array_like of floats, optional) – The scale of the gamma distribution. Should be greater than zero. Default is equal to 1.
size (int or tuple of ints, optional) – Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if shape and scale are both scalars. Otherwise, np.broadcast(shape, scale).size samples are drawn.
- Returns
Drawn samples from the parameterized gamma distribution.
- Return type
Tensor or scalar

See also
scipy.stats.gamma: probability density function, distribution or cumulative density function, etc.

Notes
The probability density for the Gamma distribution is
\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\]
where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant.

References
- 1 Weisstein, Eric W. "Gamma Distribution." From MathWorld–A Wolfram Web Resource.
- 2 Wikipedia, "Gamma distribution",

Examples
Draw samples from the distribution:

>>> import mars.tensor as mt
>>> shape, scale = 2., 2.  # mean=4, std=2*sqrt(2)
>>> s = mt.random.gamma(shape, scale, 1000).execute()

Display the histogram of the samples, along with the probability density function:

>>> import matplotlib.pyplot as plt
>>> import scipy.special as sps
>>> import numpy as np
>>> count, bins, ignored = plt.hist(s, 50, normed=True)
>>> y = bins**(shape-1)*(np.exp(-bins/scale) /
...                      (sps.gamma(shape)*scale**shape))
>>> plt.plot(bins, y, linewidth=2, color='r')
>>> plt.show()
https://docs.pymars.org/en/latest/reference/tensor/generated/mars.tensor.random.gamma.html
2022-06-25T08:47:39
CC-MAIN-2022-27
1656103034877.9
[]
docs.pymars.org
• Chip2Chip AXI Mode : The Chip2Chip AXI Mode configuration option determines AXI Chip2Chip Master or Slave mode of operation. • AXI Clocking Mode : The AXI Chip2Chip core can be configured with either Independent or Common Clock domains. The Independent Clock configuration allows you to implement unique clock domains on the AXI interface and FPGA I/Os. The AXI Chip2Chip core handles the synchronization between clock domains. Both the AXI interface and FPGA I/Os can also be maintained in a single clock domain. The AXI Chip2Chip core can be used to generate a core optimized for a single clock by selecting the Common Clock option. • Chip2Chip AXI4-Lite Mode : The Chip2Chip AXI4-Lite Mode configuration option determines AXI4-Lite Master or Slave mode of operation, as shown in Table: AXI4-Lite Configuration Options . When AXI4-Lite interfacing is not required, this configuration option should be set to “None.” • AXI Data Width : The AXI Data Width user option allows the width of AXI data to be configured. Valid settings for the AXI Data Width are 32, 64 and 128. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. • AXI ID Width : The AXI ID provides an identification tag for the group of signals in the channel. AXI ID is supported for all write and read channels. ID width can be configured from 0 to 12 bits. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. • AXI WUSER Width : AXI WUSER defines sideband information that can be transmitted with the write data channel. The valid range for WUSER width is from 0 to 4 bits. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. IMPORTANT: Because the AXI Chip2Chip core supports a maximum ID width of 12, ensure that the propagated ID width to the AXI Chip2Chip core is less than or equal to 12. This commonly happens in Zynq ® -7000 device systems because the ID width of GP ports is 12. To avoid this scenario, the ID widths of the GP ports can be compressed by modifying the Static Remap option available in the processing system. TIP: The AXI ID Width of the AXI Chip2Chip Slave core should match the AXI ID Width of the AXI Chip2Chip Master core. IMPORTANT: In IP integrator, the AXI ID and WUSER Width of the interconnect are automatically propagated to the AXI Chip2Chip Master core. However for the AXI Chip2Chip Slave core, you have to override the AXI ID Width and WUSER Width so that it matches the parameters of the Master AXI Chip2Chip core. • Chip2Chip PHY Type : The Chip2Chip PHY type can be set to either “SelectIO ™ SDR”, “SelectIO ™ DDR”, "Aurora64B66B" or “Aurora 8B/10B”. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. The AXI Chip2Chip IP does not instantiate an Aurora core, but it does provide an interface to connect to it. Be sure to select the right device when simulating, synthesizing, and implementing the example design of AXI Chip2Chip with the PHY Type set as Aurora. • Chip2Chip PHY Width : The Chip2Chip PHY Width configuration determines I/Os used for device-to-device SelectIO™ interfacing. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. Table: FPGA SelectIO Utilization provides the mapping between Chip2Chip PHY width and the number of input and output I/Os utilized with the selected option. • Chip2Chip PHY Frequency : When using the SelectIO ™ FPGA interface, the Chip2Chip PHY implements the mixed-mode clock manager (MMCM) on the PHY input clocks. 
MMCMs are used for clock phase alignment, clock slew reduction, and for compensating clock buffer delays. For common clock AXI Chip2Chip Slave operations, the m_aclk_out output is generated from the MMCM. The Chip2Chip PHY Frequency provides the clock frequency parameter to the MMCM. For Common clock, C_SELECTIO_PHY_CLK must be set to the s_aclk frequency. For Independent clock, C_SELECTIO_PHY_CLK must be to set to the axi_c2c_phy_clk frequency. This setting must be maintained the same in both Master and Slave AXI Chip2Chip cores. IMPORTANT: In IP integrator, the PHY Frequency parameter is automatically computed based on the clock frequency of the port connected to axi_c2c_phy_clk (Master Independent clocking configuration) or axi_c2c_selio_rx_*_clk_in* port(s) (Slave configuration). In Master Common clocking configuration, the frequency of the connected AXI clock is propagated to the PHY Frequency parameter. • No. of Lanes : Number of Lanes to be selected in Aurora IP when configuring the C2C Core with Aurora Mode. • Enable Link Handler : By enabling this option the core will handle the graceful exit of the Pending AXI Transactions. When it is selected there will be an additional port ‘axi_c2c_lnk_hndlr_in_progress’.
https://docs.xilinx.com/r/en-US/pg067-axi-chip2chip/User-Tab
2022-06-25T08:02:15
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com
Configuration

The RDMO application uses the Django settings module for its configuration. RDMO uses a file, config/settings/local.py, which is ignored by git and is meant to contain your local adjustments and secret information (e.g. database connections). As part of the installation process, config/settings/local.py should be created from the template config/settings/sample.local.py. (Most of the Django configuration you might know from other projects is located in rdmo/rdmo/core/settings.py of the rdmo package repository.) While technically the local settings file config/settings/local.py can be used to override all of the settings in config/settings/sample.local.py, it should be used to customize the settings already available in config/settings/sample.local.py.

- General settings
- Databases
- Authentication
- Export Formats
- Cache
- Logging
- Project settings
- Multisite
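As a minimal sketch of what config/settings/local.py might contain: the setting names shown are standard Django settings, and every value below is a placeholder to be replaced with your own local adjustments and secrets.

# config/settings/local.py -- local overrides, kept out of git
import os

DEBUG = False
SECRET_KEY = os.environ.get("RDMO_SECRET_KEY", "change-me")  # never commit a real key
ALLOWED_HOSTS = ["rdmo.example.org"]  # hostname of your installation

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "rdmo",
        "USER": "rdmo",
        "PASSWORD": "not-a-real-password",
        "HOST": "localhost",
    }
}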
https://rdmo.readthedocs.io/en/latest/configuration/index.html
2022-06-25T08:23:48
CC-MAIN-2022-27
1656103034877.9
[]
rdmo.readthedocs.io
enable_merge_strategies

astropy.utils.metadata.enable_merge_strategies(*merge_strategies)[source]

Context manager to temporarily enable one or more custom metadata merge strategies.

- Parameters
*merge_strategies : MergeStrategy – Merge strategies that will be enabled.

For example, a merge strategy that merges numbers on both sides into a list can be defined as follows:

>>> from astropy.utils.metadata import MergeStrategy
>>> class MergeNumbersAsList(MergeStrategy):
...     types = ((int, float),  # left side types
...              (int, float))  # right side types
...     @classmethod
...     def merge(cls, left, right):
...         return [left, right]

By defining this class the merge strategy is automatically registered to be available for use in merging. However, by default new merge strategies are not enabled. This prevents inadvertently changing the behavior of unrelated code that is performing metadata merge operations. In order to use the new merge strategy, use this context manager as in the following example:

>>> from astropy.table import Table, vstack
>>> from astropy.utils.metadata import enable_merge_strategies
>>> t1 = Table([[1]], names=['a'])
>>> t2 = Table([[2]], names=['a'])
>>> t1.meta = {'m': 1}
>>> t2.meta = {'m': 2}
>>> with enable_merge_strategies(MergeNumbersAsList):
...     t12 = vstack([t1, t2])
>>> t12.meta['m']
[1, 2]

One can supply further merge strategies as additional arguments to the context manager. As a convenience, the enabling operation is actually done by checking whether the registered strategies are subclasses of the context manager arguments. This means one can define a related set of merge strategies and then enable them all at once by enabling the base class. As a trivial example, all registered merge strategies can be enabled with:

>>> with enable_merge_strategies(MergeStrategy):
...     t12 = vstack([t1, t2])
https://docs.astropy.org/en/stable/api/astropy.utils.metadata.enable_merge_strategies.html
2022-06-25T08:13:50
CC-MAIN-2022-27
1656103034877.9
[]
docs.astropy.org
Consolidated billing for AWS Organizations

You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. Every organization in AWS Organizations has a management account that pays the charges of all the member accounts. For more information about organizations, see the AWS Organizations User Guide. For more information, see Volume discounts.

No extra fee – Consolidated billing is offered at no additional cost.

The member account bills are for informational purposes only. The management account might reallocate the additional volume discounts, Reserved Instance, or Savings Plans discounts that your account receives. If you have access to the management account, you can see a combined view of the AWS charges that the member accounts incur. You can also get a cost report for each member account.

AWS and AISPL accounts can't be consolidated together. If your contact address is in India, you can use AWS Organizations to consolidate AISPL accounts within your organization.

When a member account leaves an organization, the member account can no longer access Cost Explorer data that was generated when the account was in the organization. The data isn't deleted, and the management account in the organization can still access the data. If the member account rejoins the organization, the member account can access the data again.

Topics
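For script-based checks of which member accounts roll up to a management account, the AWS Organizations API can be queried. The sketch below assumes boto3 is installed and that the credentials in use belong to the organization's management account; it is an illustration, not part of the billing console workflow described above.

# List the member accounts whose charges appear on this management account's bill.
import boto3

org = boto3.client("organizations")
for page in org.get_paginator("list_accounts").paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])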
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
2022-06-25T09:00:12
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
Once the user has been logged in, they must complete the protocol workflow so they can ultimately be logged into the client. To facilitate this, the login page is passed a returnUrl query parameter which refers to the URL the prior request came from. This URL is, in essence, the same authorization endpoint to which the client made the original authorize request. When your login page handles that request and signs the user in with a call to SignInAsync, it should then simply use the returnUrl to redirect the response back. This will cause the browser to re-issue the original authorize request from the client, allowing your IdentityServer to complete the protocol work. An example of this redirect can be seen in the local login topic. Keep in mind that this returnUrl is state that needs to be maintained during the user's login workflow. If your workflow involves page post-backs, redirecting the user to an external login provider, or just sending the user through a custom workflow, then this value must be preserved across all of those page transitions.
https://docs.duendesoftware.com/identityserver/v6/ui/login/redirect/
2022-06-25T08:20:07
CC-MAIN-2022-27
1656103034877.9
[]
docs.duendesoftware.com
Reply to Affected Users

This page explains how you can send in-app chats to your users who have experienced crashes in your app.

Separate Conversations: Each open conversation can only be viewed from its related issue. If you reply to a user who reported a specific bug, you can only access that conversation from that specific bug report. The same is true for crash reports and survey responses.

Reply to an Individual Affected User

A crash is a negative experience for users of an application. One way to let your users know that you are aware of the crash they encountered, and that you're working on a fix for it, is to reply directly to an occurrence of that crash. From the occurrence page, you can select the Reply to User (or View Chat if one already exists) button and the chat pop-up will appear. Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence.

Reply to All Affected Users

You can also send a reply to all users affected by a particular crash.

Reply to Multiple Affected Users

To see a list of all users affected by a particular crash, select View all users from the chat pop-up. Click on this link to see a list of all users affected by that crash. From this page, you can send a single reply to a single user or to all users. This is particularly useful when you want to reach out to specific app users affected by a crash, but not all affected users. This page in your dashboard lists all of your app users affected by a crash.

Talk to your users often? Enable notifications so that they don't miss your message! You can also use rules to send automatic replies to crash occurrences.
https://docs.instabug.com/docs/react-native-reply-to-affected-users
2022-06-25T08:22:59
CC-MAIN-2022-27
1656103034877.9
[array(['https://files.readme.io/b5b8b71-Crash_Reporting_-_Reply_Occurrence.png', 'Crash Reporting - Reply Occurrence.png Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence.'], dtype=object) array(['https://files.readme.io/b5b8b71-Crash_Reporting_-_Reply_Occurrence.png', 'Click to close... Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence.'], dtype=object) array(['https://files.readme.io/b91cbe9-Crash_Reporting_-_Reply_Button.png', 'Crash Reporting - Reply Button.png Click on the **Reply to Users** button to open the chat pop-up and send a message to all users affected by that crash.'], dtype=object) array(['https://files.readme.io/b91cbe9-Crash_Reporting_-_Reply_Button.png', 'Click to close... Click on the **Reply to Users** button to open the chat pop-up and send a message to all users affected by that crash.'], dtype=object) array(['https://files.readme.io/4945fb3-Crash_Reporting_-_Reply_Many_Copy.png', 'Crash Reporting - Reply Many Copy.png Click on this link to see a list of all users affected by that crash.'], dtype=object) array(['https://files.readme.io/4945fb3-Crash_Reporting_-_Reply_Many_Copy.png', 'Click to close... Click on this link to see a list of all users affected by that crash.'], dtype=object) array(['https://files.readme.io/8ecc122-Crash_Reporting_-_Reply_Many.png', 'Crash Reporting - Reply Many.png This page in your dashboard lists all of your app users affected by a crash.'], dtype=object) array(['https://files.readme.io/8ecc122-Crash_Reporting_-_Reply_Many.png', 'Click to close... This page in your dashboard lists all of your app users affected by a crash.'], dtype=object) ]
docs.instabug.com
OpCodes.Localloc Field

Definition

Important: Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Allocates a certain number of bytes from the local dynamic memory pool and pushes the address (a transient pointer, type *) of the first allocated byte onto the evaluation stack.

public: static initonly System::Reflection::Emit::OpCode Localloc;
public static readonly System.Reflection.Emit.OpCode Localloc;
static val mutable Localloc : System.Reflection.Emit.OpCode
Public Shared ReadOnly Localloc As OpCode

Field Value

Remarks

The following table lists the instruction's hexadecimal and Microsoft Intermediate Language (MSIL) assembly format, along with a brief reference summary:

The stack transitional behavior, in sequential order, is:
https://docs.microsoft.com/en-us/dotnet/api/system.reflection.emit.opcodes.localloc?redirectedfrom=MSDN&view=net-6.0
2022-06-25T09:34:30
CC-MAIN-2022-27
1656103034877.9
[]
docs.microsoft.com
PowerDNS

Overview

Morpheus integrates directly with PowerDNS to automatically create DNS entries for Instances provisioned to a configured Cloud or Group. Morpheus also syncs in PowerDNS Domains for easy selection while provisioning, or for setting as the default Domain on a Cloud or Network.

Add PowerDNS Integration

PowerDNS can be added in the Administration or Infrastructure sections:

In Administration -> Integrations, select + New Integration
In Infrastructure -> Networks -> Services, select Add Service

Provide the following:

- TYPE: PowerDNS
- NAME: Name for the Integration in Morpheus
- API HOST: URL of the PowerDNS API. Example:
- Token: PowerDNS API Token
- Version: PowerDNS API Version

Once saved, the Integration will be added and visible in both Administration -> Integrations and Infrastructure -> Networks -> Services.

Note: All fields can be edited after saving.

Domains

Once the integration is added, PowerDNS Domains will sync and be listed under Infrastructure -> Networks -> Domains.

Note: Default Domains can be set on Networks and Clouds, and can be selected when provisioning. Additional configuration options are available by editing a domain in Networks -> Domains.

Configuring PowerDNS.
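Before saving the integration, it can help to confirm that the API HOST and Token values work by querying the PowerDNS HTTP API directly. The sketch below assumes the PowerDNS built-in webserver/API is enabled; the host and key values are placeholders, not values from this page.

# Verify the PowerDNS API host and token, and list the zones that Morpheus
# would sync in as Domains. Host and key below are illustrative placeholders.
import requests

api_host = "http://powerdns.example.com:8081"
api_key = "changeme"

resp = requests.get(api_host + "/api/v1/servers/localhost/zones",
                    headers={"X-API-Key": api_key})
resp.raise_for_status()
for zone in resp.json():
    print(zone["name"])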
https://docs.morpheusdata.com/en/5.2.11/integration_guides/DNS/PowerDNS.html
2022-06-25T07:23:04
CC-MAIN-2022-27
1656103034877.9
[]
docs.morpheusdata.com